Man, I'm obsessed with integrating #MistralAI models into my #homeautomation and #homelab operations. Even set up a custom #node in #NodeRed to prompt the #AI. All my alerts are very personalized now and I love it. <3

#llm #largelanguagemodel #cool

@wagesj45 how do you run mistral? (mixtral??)
@kellogh @wagesj45 Yeah, interesting question. Locally on your GPU? Which one?
@Mawoka @kellogh I run #Mistral #7B Instruct on CPU on a #Debian #Linux server, accessed through the #oobabooga #API. I didn't need real-time responses, and I had spare CPU power, so it worked out perfectly.
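A minimal sketch of what a call like that might look like, assuming oobabooga (text-generation-webui) is running with its OpenAI-compatible API enabled — the host, port, and prompt wording here are placeholders, not the poster's actual setup:

```python
import json
import urllib.request

# Assumed endpoint: text-generation-webui's OpenAI-compatible API
# (enabled via its API flag, default port 5000). Placeholder host/port.
API_URL = "http://localhost:5000/v1/chat/completions"

def build_payload(alert_text, max_tokens=200):
    """Wrap a raw homelab alert in a chat-completion request."""
    return {
        "messages": [
            {"role": "system",
             "content": "Rewrite this homelab alert as a short, friendly announcement."},
            {"role": "user", "content": alert_text},
        ],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

def personalize_alert(alert_text):
    """POST the alert to the local model and return the rewritten text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(alert_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

On CPU this can take several seconds per alert, which is why "not real time" is a fair trade for personalized notifications.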
@wagesj45 @kellogh "Not real time" means how long? And which CPU are you using? Sorry for all the questions 🙈
@Mawoka @kellogh It's running in a VM, so the guest just sees a generic CPU. The physical machine has some Xeon chip whose exact model I can't remember without logging in to check. It was a used server.
@wagesj45 @Mawoka how do you do text-to-speech? something simple? or an LLM?
@kellogh @Mawoka I take the output of the Mistral API call and pipe it into Home Assistant's TTS function, which runs the text through Piper to generate an audio file and sends that audio to any of the speaker devices available to Home Assistant.
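The last hop could be sketched as a call to Home Assistant's REST API, invoking the `tts.speak` service — assuming a long-lived access token, a Piper TTS entity, and a target speaker; all the entity names and the URL below are illustrative placeholders:

```python
import json
import urllib.request

# Placeholder values -- substitute your own Home Assistant URL, token,
# TTS entity, and media player entity.
HA_URL = "http://homeassistant.local:8123"
HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"

def build_tts_call(message, tts_entity="tts.piper",
                   speaker="media_player.living_room"):
    """Service-call body for Home Assistant's tts.speak service."""
    return {
        "entity_id": tts_entity,
        "media_player_entity_id": speaker,
        "message": message,
    }

def speak(message):
    """Ask Home Assistant to synthesize `message` with Piper and play it."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/tts/speak",
        data=json.dumps(build_tts_call(message)).encode(),
        headers={
            "Authorization": f"Bearer {HA_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)
```

So the whole chain is: alert → Mistral rewrite → `tts.speak` → Piper audio → speaker.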