Having a local LLM gives me the peace of mind to do things like this. Using gptel-agent with Ollama running qwen3.5:27b:

So I was curious whether it could run a simple speed test from within Emacs. I wanted the results in the same metrics browsers use when downloading files, and they check out. The model can call tools via gptel-agent, which usually means running Emacs Lisp, but it can also run bash. I wouldn't trust it with anything complex, but for simple web searching and fun experiments like this, it's surprisingly good.
Speed Test Results: 58 Mbps (7.2 MiB/s)

Download times:
• Web page (5MB): 0.7s
• Album (50MB): 7s
• HD movie (1GB): 2.3 min
• OS image (4GB): 9 min
• Large game (50GB): 1.4 hours
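The conversion behind those estimates is simple arithmetic. A minimal sketch in bash, using the 7.2 MiB/s rate reported above (sizes are in MB and treated as roughly equivalent to MiB for this back-of-the-envelope estimate):

```shell
# Rate-to-time estimate, like the one the agent reported above.
speed=7.2   # measured download rate from the post, MiB/s
for size in 5 50 1000 4000 50000; do   # sizes in MB
  awk -v s="$speed" -v m="$size" \
    'BEGIN { printf "%5d MB -> %.1f s\n", m, m / s }'
done
```

The agent itself presumably measured the rate first with something like `curl -s -o /dev/null -w '%{speed_download}' <url>` before doing this conversion.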

#Emacs #gptel #gptel-agent #foss #ai

Finally.

I can now remove my:

```
(setf (alist-get "ChatGPT" gptel--known-backends nil t #'equal) nil)
```

Errrrgh. I wish projects would stop defaulting to #OpenAI. It's repulsive.

#gptel #emacs #agentic

@bbatsov

I've just started exploring a Python MCP server for testing (pytest), making sure my environment is synced with the required packages, and so on.

I could have done it with tools configured in Elisp, like I already have, but I thought I'd try something different.

Also have the GitHub MCP.

Naturally, doing this all through #Emacs, using #gptel and #MCPel

@andros It's particularly useful with the readable option: quickly add a buffer of clean text to #gptel to work on, or save the plain text for RAG if needed.
The enshittification of #OpenWebUI… It feels more like #Nextcloud (and Drupal before that) with every update, with its shift toward becoming full-fledged enterprise-ware. Makes me appreciate #gptel and the rest of the ecosystem built on top of #emacs even more for its focus on the user and their data.

My most useful gptel function is this one:

https://gitlab.com/chmouel/emacs-config/blob/8aa139cef54d0a22e710f054934c4ba45425740f/lisp/init-llm-functions.el#L250-268

and to register:

https://gitlab.com/chmouel/emacs-config/blob/8aa139cef54d0a22e710f054934c4ba45425740f/lisp/init-llm.el#L95-100

My workflow is:

- Select some code in one window
- Pop up a new LLM window and ask about it (or, better, use a yasnippet snippet for that), e.g.:

optimize code and fix bugs

It will know to grab the highlighted code from the other window and act on it.

I have a bunch of other configuration for gptel; I should probably write a proper blog post about it.

#gptel #emacs #llm #ai

Do you have good success using #emacs with #gptel and #mcp? Mine is limited. I was trying https://github.com/IlyaGusev/academia_mcp, but searching @arXiv gave bogus results. What are your favorite tools?

Choosing an LLM model in gptel directly, without having to go via the transient menu… This has been bugging me for a while, since I don't think I'm a fan of transient menus. I have listed a potential solution in this #gptel issue:

https://github.com/karthink/gptel/issues/1066

#emacs #ai #llm

GitHub - jwiegley/ob-gptel: Org-babel backend for running GPTel queries

Agentic LLM use in Emacs using Model Context Protocol (MCP) in org-mode with gptel
https://youtu.be/Hkih7jaqOnE?si=czWDmYqsT4NBYnNA

#emacs #orgmode #gptel
