Do you hate #broligarchs?
#Billionaires? #AiSlop but still think there is merit in #AI?

Here is my proposal for a standalone
OFFGRID COMMUNITY AI SYSTEM.

That's right. Your very own co-op AI.

The calculations are very much back-of-the-envelope, a first cut, but quite feasible.
A 32-billion-parameter open-source #llm with performance comparable to frontier models. The power requirement is that of 3 AC units, including cooling. Serves 15-20 concurrent users, which covers 40 households of 4 people each (taking into account actual distributed-use metrics and contention ratios for AI models).

40 households, subscribing at $30/month over 2 years, plus power (solar). Train it with your own datasets.
The entire setup takes half a rack.
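For anyone who wants to check the envelope, here is the arithmetic as a quick sketch. Every figure comes from the post except the per-unit AC wattage, which is my own assumption:

```python
# Back-of-envelope numbers for the co-op AI proposal.
# All inputs are from the post, except ac_unit_kw, which is an assumed figure.

households = 40
people_per_household = 4
concurrent_users = 20          # upper end of the 15-20 estimate
monthly_fee = 30               # USD per household
months = 24                    # 2-year subscription period
ac_unit_kw = 1.5               # assumed draw of one AC unit, in kW

total_users = households * people_per_household          # people served
contention_ratio = total_users / concurrent_users        # users per concurrent slot
hardware_budget = households * monthly_fee * months      # USD raised over 2 years
power_kw = 3 * ac_unit_kw                                # "3 AC units incl. cooling"

print(f"{total_users} users, contention {contention_ratio:.0f}:1")
print(f"budget over 2 years: ${hardware_budget:,}")
print(f"estimated draw: {power_kw} kW")
```

So 160 people share 20 concurrent slots (an 8:1 contention ratio, in line with the post's 15-20 concurrent estimate) and the subscriptions raise $28,800 over two years for hardware.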

LET'S GO!!!

#OpenSource #FOSS #CommunityTech #OpenHardware #EthicalAI #ResponsibleAI #AIForGood #TechForGood #Solarpunk #RegenerativeCulture #Degrowth #AppropriateTechnology #OffGrid #SelfSufficient #Homesteading #Permaculture #RightToRepair #MakerSpace #DIYTech #decentralizedtech

@n_dimension also, I think if we trim down to quality datasets, like Wikipedia and open-source books, we can build a smaller model that runs on lower-spec hardware.

I run Qwen-3/Jan-code models on my RTX 2060 - no sweat for inference, and I can use it for 80-90% of my work. It's like having an interactive encyclopedia offline. I love it.
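For anyone wanting to try the same, a local GGUF model with llama.cpp's llama-cli looks roughly like this. The model filename is a placeholder, and the layer count is an assumption you should tune to your card's VRAM:

```shell
# Rough sketch of running a quantized local model with llama.cpp's llama-cli.
# The .gguf filename below is a placeholder; use any model you have downloaded.
#   -m    : path to the quantized model file
#   -ngl  : number of layers offloaded to the GPU (lower it if VRAM runs out)
#   -c    : context window size in tokens
#   -p    : the prompt
llama-cli \
  -m ./models/qwen3-8b-q4_k_m.gguf \
  -ngl 24 \
  -c 4096 \
  -p "Summarise the principle of crop rotation in two sentences."
```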

Specific models for specific use-cases/communities might also be a good idea, like an agri-trained llm for agriculture.

@mahadevank

Google has just released a super tight, great local #LLM. I haven't had a chance to look at it yet.

I really like your agri model idea.

I was thinking a basic medical (nurse-level) one for the third world/post-collapse.

@n_dimension do you have the name? I'd love to give it a go.

@mahadevank

#Gemma4: it comes in 4 sizes, with the biggest at 32B parameters, and apparently it runs pretty decently on...a mobile device!!!

Apache license.
Sounds like a #LLAMA2 #Qwen #deepseek killer.

Let me know how you find its utility, please.
It's literally just out.

@n_dimension nice! I'll get that downloaded. I use llama-cli for the models and it works great. llama-server does some funny stuff and fails to allocate a ton of memory, no idea why.

@n_dimension gemma-4 didn't work on llama-cli. Might give it another try later, but honestly I'm very happy with the models I have locally.

The RAG thing I'm not sold on - most setups don't send the right slices of context to the LLM, so you get weird responses. I'd rather train a new model.
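To illustrate the "wrong slices" failure mode: naive fixed-size chunking can split a fact across a chunk boundary, so a keyword retriever hands the LLM fragments that no longer contain the answer. A toy sketch (the text, chunk size, and retriever are all made up for illustration):

```python
# Toy illustration of naive fixed-size RAG chunking losing a fact at a boundary.
text = ("Neem oil is applied in the evening. It controls aphids and "
        "whiteflies on tomato plants without harming pollinators.")

chunk_size = 60  # characters, deliberately small to force a bad split
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# A crude keyword "retriever": return every chunk containing the query term.
query = "whiteflies"
retrieved = [c for c in chunks if query in c]

# The word "whiteflies" got split across the chunk boundary
# ("...aphids and w" | "hiteflies on tomato..."), so nothing is retrieved
# even though the source text clearly answers the query.
print(chunks)
print(retrieved)
```

Real pipelines chunk more carefully (sentence boundaries, overlap), but the same class of problem is why badly sliced context produces weird answers.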

Also, the probabilistic nature of all these models (and therefore of whatever "thinking" they do) means human verification at all levels, for sure.