Do you hate #broligarchs?
#Billionaires? #AiSlop but still think there is merit in #AI?

Here is my proposal for a standalone
OFFGRID COMMUNITY AI SYSTEM.

That's right. Your very own co-op AI.

The calculations are very much back of the envelope, a first cut, but quite feasible.
A 32-billion-parameter open-source #LLM with performance comparable to frontier models. The power requirement is about that of 3 AC units, cooling included. Serves 15-20 concurrent users, which covers 40 households of 4 people each (taking into account real-world distributed AI usage metrics and contention ratios).
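For anyone who wants to sanity-check the sizing, here's a rough sketch. The quantisation level and the memory-overhead factor are my own assumptions for illustration; only the parameter count, user counts, and household figures come from the proposal above.

```python
# Back-of-envelope sizing for the community AI box.
# Quantisation and overhead numbers are assumptions, not measured figures.

params = 32e9             # 32B-parameter open-source model
bytes_per_param_q4 = 0.5  # assume ~4-bit quantisation: half a byte per weight
overhead = 1.2            # assume ~20% extra for KV cache and activations

vram_gb = params * bytes_per_param_q4 * overhead / 1e9
print(f"approx memory needed: {vram_gb:.0f} GB")  # ~19 GB

people = 40 * 4           # 40 households of 4 people each
concurrent = 20           # peak concurrent users served
contention = people / concurrent
print(f"contention ratio: {contention:.0f}:1")    # 8:1
```

At roughly 19 GB for weights plus working memory, this lands in the range of a single prosumer or workstation GPU, which is what keeps the half-rack, 3-AC-unit footprint plausible.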

40 households, subscribing at $30/month over 2 years, plus power (solar). Train it on your own datasets.
The entire setup takes half a rack.
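The subscription math works out like this (a sketch of the pooled budget only; solar and ongoing power are costed separately, as in the proposal):

```python
# Rough budget check for the co-op model.
households = 40
monthly_fee = 30   # USD per household
months = 24        # 2-year subscription period

budget = households * monthly_fee * months
print(f"total pooled over 2 years: ${budget:,}")  # $28,800
```

Whether $28,800 covers the hardware depends entirely on GPU and rack pricing at build time, so treat it as an order-of-magnitude target rather than a bill of materials.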

LET'S GO!!!

#OpenSource #FOSS #CommunityTech #OpenHardware #EthicalAI #ResponsibleAI #AIForGood #TechForGood #Solarpunk #RegenerativeCulture #Degrowth #AppropriateTechnology #OffGrid #SelfSufficient #Homesteading #Permaculture #RightToRepair #MakerSpace #DIYTech #decentralizedtech

@n_dimension also, I think if we trim down to quality datasets - like Wikipedia and open-source books - we can build a smaller model that runs on lower-spec hardware.

I run Qwen-3/Jan-code models on my RTX 2060 - no sweat for inference, and I can use it for 80-90% of my work. It's like having an interactive encyclopedia offline. I love it.

Specific models for specific use-cases/communities might also be a good idea. Like an agri-trained LLM for agriculture.

@mahadevank

Google has just released a super tight, great local #LLM. I haven't had a chance to look at it yet.

I really like your agri model idea.

I was thinking a basic medical (nurse-level) one for the third world / post-collapse.

@n_dimension @mahadevank I recall some research a while back which showed that domain-specific fine-tuning really did not work well.

There were attempts at training astronomy-specific models, and while they outperformed similarly sized models at questions like "describe the lightcurve of binary star mergers", they suffered from much higher hallucination rates and generalised worse outside the specific documents they were fine-tuned on.

Now admittedly, this was back in the Llama2 days, so maybe "modern" architectures would behave differently. But it seems that a broad dataset is necessary for generalising, even within a specific domain.

@mahadevank @AuntyRed

New model releases push to prod practically every week.

However, I don't doubt the research. The multidimensional vector trees that underpin #LLMs have some very peculiar traversal patterns.

To be honest, I did not have enough time to look into the local models. There is a local education RAG hybrid I stumbled upon that actually looks real solid. The purdy picture is all I have tho.