Pretty wild night with LLMs. You don't need GitHub Copilot; there are offline solutions ready: https://continue.dev/ (an open-source Copilot alternative), https://ollama.ai/ (an alternative to GPT4All — check the models!), and the heartwarming pure C/C++ implementation of LLaMA: llama.cpp https://github.com/ggerganov/llama.cpp (works on OpenCL and Radeon GPUs). #llm #largelanguagemodel #largelanguagemodels #llama #llama2
