| Website | https://tedcarstensen.com |
| Location | Bay Area, California |
| My Homie | Claude |
sigh, I don't want to start over!
If a new chat can reference previous chats, why isn't there an option (with caveats and explanations if necessary) to simply keep going? Claude Code can run /compact over and over and still be quite effective, so why is the chat app so far behind?
This PR to add Vulkan support to Ollama (something llama.cpp has supported for a long time!) had been open for almost *18 months*; Ollama seemed motivated to support only NVIDIA chips 🤔 https://github.com/ollama/ollama/pull/5059
3 days ago someone commented that Docker Model Runner added Vulkan support with an OpenAI-compatible API - huzzah!
I start wiring Docker Model Runner up to my Open WebUI install, and this notification comes through 🤡 Better late than never, but it's extremely weak sauce how they handled this.
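
If you want to poke at the Docker Model Runner endpoint yourself before wiring it into Open WebUI, this is roughly what it looks like with the standard OpenAI client. The base URL, port, and model name below are placeholders, not guaranteed defaults, so check your own install for the actual values:

```python
# A minimal sketch: talking to a local OpenAI-compatible endpoint
# (here assumed to be Docker Model Runner) with the openai package.
# The base_url, port, and model name are assumptions - adjust for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed local endpoint
    api_key="not-needed",  # local runners typically ignore the API key
)

response = client.chat.completions.create(
    model="ai/llama3.2",  # placeholder model name; use whatever you've pulled
    messages=[{"role": "user", "content": "Hello from the Vulkan backend!"}],
)
print(response.choices[0].message.content)
```

Open WebUI can then be pointed at the same base URL as an additional OpenAI-compatible connection, which is all the "wiring up" really amounts to.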