After 12 months of coding and soldering, I've finished my AI-powered multi-target dry-fire system, running 100% offline – no cloud, no subscription. 🎯🤖✨ #AI #Offline #DIY #Technology #Hardware #Freedom #Electronics #SideProject #DIYAI

https://www.reddit.com/r/SideProject/comments/1qjrzcc/it_took_me_12_months_of_coding_and_soldering_but/

The $100 Shock of NanoChat: 1.9B Parameters, 38B Tokens, and What It Means for You

A performance comparison that flips the cost barrier for AI creation…

Medium
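The headline numbers themselves hint at why the budget works: 38B training tokens over 1.9B parameters is 20 tokens per parameter, the compute-optimal ratio popularized by the Chinchilla scaling laws. A quick sanity check of that arithmetic (only the two headline figures are from the article; the C ≈ 6·N·D FLOPs rule is a standard rough approximation, not something the article states):

```python
# Sanity-check NanoChat's headline numbers against common scaling rules of thumb.
params = 1.9e9   # model parameters (from the headline)
tokens = 38e9    # training tokens (from the headline)

# Tokens-per-parameter ratio; ~20 is the Chinchilla compute-optimal rule of thumb.
ratio = tokens / params
print(f"tokens per parameter: {ratio:.0f}")  # -> tokens per parameter: 20

# Rough training compute via the standard C ~= 6 * N * D approximation.
flops = 6 * params * tokens
print(f"approx training FLOPs: {flops:.2e}")  # -> approx training FLOPs: 4.33e+20
```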

Hi everyone! 👋
Questions for the community:

Does anyone have experience with these GPUs? Which would you recommend for running larger LLMs locally?
Are there other budget-friendly server GPUs I may have missed that are great for AI workloads?
Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
What's your favorite setup for local AI inference? I'd love to hear about your experiences!

Thanks in advance! 🙌
#AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #HomeLab #AIHardware #DIYAI #ServerGPU #SecondhandTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions

Hey everyone 👋

I’m diving deeper into running AI models locally—because, let’s be real, the cloud is just someone else’s computer, and I’d rather have full control over my setup. Renting server space is cheap and easy, but it doesn’t give me the hands-on freedom I’m craving.

So, I’m thinking about building my own AI server/workstation! I’ve been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I’d love your advice!

My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don’t need gaming features (ray tracing, DLSS, etc.), I’m leaning toward used server GPUs that offer great performance for AI workloads.

Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What’s your go-to setup for local AI inference? I’d love to hear about your experiences!

I’m all about balancing cost and performance, so any insights or recommendations are hugely appreciated.
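On the cost-versus-performance question, the first filter for any GPU candidate is VRAM: a model's weights need roughly parameter count times bytes per weight, plus headroom for the KV cache and activations. A minimal back-of-the-envelope sketch (the 20% overhead factor is a loose assumption of mine, not a measured value):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights take params * bits/8 bytes, padded by an
    assumed overhead factor for KV cache and activations."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# A 70B model at 4-bit quantization: 35 GB of weights, ~42 GB with overhead,
# so it needs two 24 GB cards (or one 48 GB server card).
print(f"{estimate_vram_gb(70, 4):.0f} GB")   # -> 42 GB
# The same model at fp16 needs ~168 GB -- far beyond any single consumer GPU.
print(f"{estimate_vram_gb(70, 16):.0f} GB")  # -> 168 GB
```

The takeaway for a budget build: quantization, not raw compute, usually decides whether a "larger LLM" fits on the cards you can afford.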

Thanks in advance! 🙌

@[email protected] #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #HomeLab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeServer #AILab #LLMLab

Is the generative AI market facing a bubble? Trevor Laurence Jockims highlights a notable contrast: while Big Tech pours in massive investments, DIY AI innovation is surging. The TinyZero project shows how a model trained for about $30 can replicate complex reasoning behaviors, suggesting that small, cheap models may do as much to advance AI as expensive ones. This shift toward affordable approaches could democratize AI development. Explore more about this intriguing evolution in tech. [Source](https://www.cnbc.com/2025/03/27/as-big-tech-bubble-fears-grow-the-30-diy-ai-boom-is-just-starting.html) #AI #TechInnovation #DIYAI
As generative AI bubble fears grow, ultra-low-cost large language model breakthroughs are booming

Fears of a big tech generative AI bubble are growing, but among researchers, it's never been easier to build your own AI on the cheap and watch it learn.

CNBC

OpenAI down? No internet? "Open"AI not so open anymore? You can run your own local LLM chat model called Alpaca. I got it going on my home machine today. While it's not as good as ChatGPT (it's a much smaller model), it's still pretty cool to chat with, and it's amazing to think it's running on my old PC without even using a GPU. (Kind of a weird answer to the Fibonacci question there.)

https://github.com/antimatter15/alpaca.cpp #LLM #chatbot #DIYAI

GitHub - antimatter15/alpaca.cpp: Locally run an Instruction-Tuned Chat-Style LLM


GitHub