The Local Alternative
@davidrevoy @jimmac Yep. I tried, just to see if running a smaller targeted model in something like Ollama would be any more interesting to use with Home Assistant than its built-in parser system (which I've extended with my own automations).
A _slight_ mis-hear of me setting a timer caused it to spew out completely useless garbage, after a significant delay.
Even with HA matches set to take priority, it'd still screw up interpretations using the very thing LLMs are _supposed_ to be good at. Ask "is the fan on in the hallway?" and it'd say "I don't know about any fans in Home Assistant hallway" or something, when the fan was very much on, and very much in the Hallway room (later, trying the correct HA syntax got exactly the right answer, VERY fast).
I got rid of it. I'd rather be able to ramble at my speaker, say "Pizza pasta put it in your mouth", and have it reply with "I'm not aware of any area called 'your mouth'" in 2 seconds, or "Sorry, I didn't understand that" if I'm even more unintelligible (or just 'off' with my command), and only carry the STT and TTS overheads on my GPU, than have the dice roller fuck up repeatedly.
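The appeal of the built-in parser is exactly that predictability. Here's a minimal sketch (entity names and replies are made up for illustration, not HA's actual internals) of why a fixed-sentence matcher either answers instantly or fails cleanly, instead of rolling dice:

```python
import re

# Hypothetical toy "area registry" standing in for Home Assistant's
# entity state; names here are invented for the example.
AREAS = {"hallway": {"fan": "on"}, "kitchen": {"light": "off"}}

def handle(query: str) -> str:
    # One literal sentence pattern per intent: a query either matches
    # exactly or gets a fast, honest "didn't understand".
    m = re.fullmatch(r"is the (\w+) on in the (\w+)\?", query.lower())
    if not m:
        return "Sorry, I didn't understand that"
    device, area = m.groups()
    if area not in AREAS:
        return f"I'm not aware of any area called '{area}'"
    state = AREAS[area].get(device)
    if state is None:
        return f"I don't know about any {device} in {area}"
    return f"The {device} in the {area} is {state}"
```

No interpretation step means no "the fan is very on but I don't know about it" failure mode: the lookup is a dictionary hit, not a guess.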
The only thing it was halfway decent at was when I basically tossed it a JSON dump from the weather forecast command and said "Here, make a conversational thing about the next couple of days." It was actually pretty good at that, but not worth it. I rewrote that as my own fixed wording, stating exact temps and conditions for each of the next 3 days. My brain can track the similarities when hearing it.
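That fixed-wording rewrite can be sketched like this (the forecast field names are assumptions for the example, not HA's actual forecast schema): every day comes out in the identical sentence shape, so your ear can line the days up against each other.

```python
# A minimal sketch of the "just my own writing" approach: iterate the
# next three days of a forecast dump and emit exact temps and conditions
# in the same sentence shape each time. Field names are invented here.
def forecast_speech(days: list[dict]) -> str:
    lines = []
    for day in days[:3]:
        lines.append(
            f"{day['name']}: {day['condition']}, "
            f"high of {day['high']}, low of {day['low']} degrees."
        )
    return " ".join(lines)
```

Same job the LLM was doing, but deterministic, instant, and with zero chance of it inventing a heatwave.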