My Journey to a reliable and enjoyable locally hosted voice assistant

I have been watching Home Assistant's progress with Assist for some time. We previously used Google Home via Nest Minis, and have since switched to a fully local Assist pipeline backed by a local-first setup + llama.cpp (previously Ollama). In this post I will share the steps I took to get where I am today, the decisions I made, and why they were best for my use case specifically. Links to Additional Improvements: here are links to additional improvements posted about in this thread. New Features Security C...

Home Assistant Community

A web-novel author with a 24,000-word manuscript asks whether a local AI can continue writing the story in the style it was originally written. They need guidance on choosing a suitable model and on setting up the AI to continue the work. Posted to the r/LocalLLaMA community. #AI #Novel #WebNovel #LocalHosting

https://www.reddit.com/r/LocalLLaMA/comments/1oiuz2a/local_hosting_question/

Lessons on local hosting in 2025: DGX Spark, Framework Desktop, M4/M5 Macs! 💡 A $4K budget for data processing & code. Comparison: flexibility vs cost. Also asking for learning resources. Tags: #Hosting #AI #M4Mac #DGXSpark #VietnameseTech #LocalHosting

https://www.reddit.com/r/LocalLLaMA/comments/1o9d13w/best_hardware_and_models_to_get_started_with/

Can I Run This LLM?

Planning to locally host an LLM 🤖. Having a lot of internal struggles figuring out the best price curve.

A used NVIDIA Jetson Xavier NX can be acquired for ~$250 💰.
A 4060 Ti w/ 16 GB for ~$400 💳.

Not sure I really want to do either 🤔. Waiting for NPU support on Transformer models may be a better approach.
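To put rough numbers behind the "Can I Run This LLM?" question, a back-of-the-envelope VRAM estimate is often enough to compare these options. The sketch below is a hypothetical helper (the function name and the ~20% overhead factor are assumptions, not a precise fit test): weight memory scales with parameter count and bits per weight, plus headroom for the KV cache and activations.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_factor: float = 1.2) -> float:
    """Rough VRAM (GB) needed: weight memory plus ~20% for KV cache/activations.

    At 8 bits per weight, 1B parameters is about 1 GB, so we scale from there.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead_factor

# Example: a 7B model quantized to ~4.5 bits/weight (typical for a 4-bit scheme)
need = estimate_vram_gb(7, 4.5)
print(f"~{need:.1f} GB needed")  # well under a 16 GB card's budget
```

By this estimate a 4-bit 7B model fits comfortably in 16 GB with room for context, while the 8 GB class of boards gets tight once the context window grows.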

#LLM #LocalHosting #NVIDIA #AI #MachineLearning #NPU #Transformers #TechDilemmas #HardwareChoices