@xahteiwi
> I tried a number of different models under Ollama on my Pi 5 (4 GB RAM with 4 GB swap). Some models had quite reasonable response start times, but in general the response quality was simply laughable.
Without a context MCP to filter down the responses, Ollama with any of the small-to-medium-parameter models does not produce responses useful to my home robots.
I wrote about it here:
https://forum.dexterindustries.com/t/talking-to-your-robot-can-be-interesting-knowledge-transfer-between-robots/10680?u=cyclicalobsessive

Olli Graf🚟
still love my Pis
