Man, I'm obsessed with integrating #MistralAI models into my #homeautomation and #homelab operations. Even set up a custom #node in #NodeRed to prompt the #AI. All my alerts are very personalized now and I love it. <3

#llm #largelanguagemodel #cool

@wagesj45 how do you run mistral? (mixtral??)
@kellogh @wagesj45 Yea, interesting question. Locally on your GPU? Which?
@Mawoka @kellogh I run #Mistral #7B Instruct on the CPU of a #Debian #Linux server, accessed via the #API that #oobabooga exposes. I didn't need realtime responses, and I had spare CPU power, so it worked out perfectly.
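A rough sketch of that kind of call, assuming oobabooga's OpenAI-compatible completions endpoint on its default port; the URL, port, and prompt wording are guesses to adapt to your own setup:

```python
# Prompt a local Mistral 7B Instruct through oobabooga's API.
# Endpoint path and payload fields assume the OpenAI-compatible API;
# adjust host/port to match your server.
import json
import urllib.request

API_URL = "http://127.0.0.1:5000/v1/completions"  # assumed default

def build_prompt(alert_text):
    """Wrap an alert in the [INST] tags Mistral Instruct expects."""
    return f"[INST] Rewrite this home alert in a friendly tone: {alert_text} [/INST]"

def ask_mistral(alert_text, max_tokens=200):
    """Send the prompt and return the model's generated text."""
    payload = json.dumps({
        "prompt": build_prompt(alert_text),
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }).encode()
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"].strip()
```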
@wagesj45
"Not real time" means how long? And which CPU are you using? Sorry for all those questions 🙈
@kellogh
@Mawoka @kellogh Oh it's still plenty fast. Just not realtime speaker-assistant level fast. It can take a long prompt and generate a response in as fast as 30 seconds for an alarm with all my calendar events for the day, or up to a minute or two for a prompt with a few hundred sensors and their states.
@Mawoka @kellogh It's running in a VM, so the machine just sees it as a generic CPU. The physical machine has some Xeon chip that I can't remember without logging in to see. It was a used server.
@wagesj45 @Mawoka how do you do text-to-speech? something simple? or an LLM?
@kellogh @Mawoka I take the output of the Mistral API call, then pipe that into the TTS function of Home Assistant, which runs the text through Piper AI to generate an audio file, and it sends that audio to any of the speaker devices available to Home Assistant.
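That hand-off could look roughly like this, calling Home Assistant's REST API so its `tts.speak` service runs the text through Piper; the host, token, and entity ids are placeholders to swap for your own instance:

```python
# Push an LLM reply to a speaker via Home Assistant's REST API.
# The TTS entity id, speaker entity id, host, and token below are
# all assumptions -- replace them with values from your instance.
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"   # assumed address
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # placeholder

def speak_payload(message, speaker="media_player.living_room"):
    """Build the service-call body for tts.speak (Piper as the TTS entity)."""
    return {
        "entity_id": "tts.piper",            # assumed Piper entity id
        "media_player_entity_id": speaker,
        "message": message,
    }

def announce(message, speaker="media_player.living_room"):
    """POST the service call; Home Assistant generates and plays the audio."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/tts/speak",
        data=json.dumps(speak_payload(message, speaker)).encode(),
        headers={
            "Authorization": f"Bearer {HA_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```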