@hendrik considering I am a complete #dart newbie I think it did pretty well. Even when it messed up, it was able to auto-iterate until make passed without me having to keep re-prompting.

I'm still using #ellama for other interactions, such as reviewing a patch series before posting. This #eca workflow is really tuned for the edit/compile/test cycle of writing new code.

The next time I play with it I want to try local inference and see how that performs with local models in control.

The experience is very different from the #ellama integration I currently use for general queries. The principal interface is still a chat window, but rather than copying and pasting code you can watch the #LLM's internal monologue and then approve requests to edit files and run tools.

The first time I hit a compile error I just told it the build failed and asked it to fix the problem. It's quite something watching it invoke make, read the error, and then iterate until the problem is fixed.

2/n

Speaking of which, is there a way to configure #Ellama to point to a remote #Ollama service? I haven't been able to find anything in the official docs.
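For what it's worth, ellama delegates its backend to the llm package, and the Ollama provider there accepts a host and port, so something like this sketch should work for a remote server (the hostname and model name below are placeholders, not anything from the docs):

```elisp
;; Sketch: point ellama at a remote Ollama instance via the llm package.
(require 'llm-ollama)
(setopt ellama-provider
        (make-llm-ollama
         :host "my-ai-server.example"  ; placeholder: your remote host
         :port 11434                   ; Ollama's default port
         :chat-model "llama3.2"))      ; placeholder: any model pulled on the server
```

The server itself may also need `OLLAMA_HOST=0.0.0.0` in its environment so it listens on more than the loopback interface.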
Sharing #ellama sessions through #Syncthing was a great idea after all. This, coupled with my cheap-ish on-demand #AI server that I can connect to from my #ThinkPad devices, should give me a decent jump in productivity.
Thanks to #ramalama detecting my #vulkan capable #integratedgpu I can now run a lot of models without the CPU cores melting. I still need to work out the right runes for #ellama to work properly with the #mistral model though.
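If the "runes" in question are provider settings, a minimal sketch (assuming an Ollama-compatible endpoint serving the mistral model locally) would be to name the model explicitly in the llm provider that ellama uses:

```elisp
;; Sketch: tell ellama to chat with the mistral model specifically.
(require 'llm-ollama)
(setopt ellama-provider
        (make-llm-ollama :chat-model "mistral"))
```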
Just tried #Ellama for #Emacs with DeepSeek-R1:8b on ollama as backend. Works great!

I've been messing around with running LLMs locally on my laptop and seeing how they perform — subjectively, not very systematically.

I've been using the ellama emacs module, which makes things like summary and code completion very easy.

I'm using llama3.2, which is quite a bit smaller than llama3.1 and runs very easily on my Framework 13 with an AMD Ryzen CPU.

🧵...

#llama3 #ellama

Ellama: a tool for interacting with large language models from Emacs. https://github.com/s-kostyaev/ellama #LLMs #Emacs #Ellama

@fidel I've mainly been focused on #codellama since it seems targeted at coding. Both 7b and 34b. 70b is beyond my capacity.

I find it difficult to get an idea of what's happening behind the scenes. It would be quite nice if #ellama placed the request in a buffer so you could see the conversation.

First time translating to French using ellama-translate with zephyr and the result is so-so. At least, this won't stop working like every other Emacs translation library! #emacs #ellama #ollama #zephyr