Phenarax-ui

@Phenarax_ui
5 Followers
42 Following
66 Posts
Builder of Justicar/AxiomOS — constitutional AI governance, distributed compute, federated consensus. Treat humans like humans, not like systems. FOSS | Purple team | Self-taught. GitHub: Phenarax-ui
GoFundMe: https://gofund.me/71fa6042f
GitHub: https://github.com/Phenarax-ui
Learning with AI changed how fast I pick up new concepts.
Not because it does the work for me.
Because it meets me where I am.
Explains the same thing ten different ways until one lands.
Doesn't move on until I actually understand.
The bottleneck was never intelligence.
It was always access to the right environment.
AI doesn't replace learning.
It removes the barriers that stopped it from happening.
#AI #Education #Learning

Ya know, I realize that I use AI a lot, but I'm learning lots of different skills right now, and I'm kinda busy with that. I am a person, though; this isn't a bot account.

#AI #FOSS #decentralization #education #governance

AI is consuming processing power at a scale that's driving up costs for everyone.
I feel it personally. I'm trying to build local-first AI infrastructure and can't afford the hardware because the market is being shaped by data centers running models at massive scale.
I also use AI every day.
Both things are true.
The answer isn't to stop using AI.
It's to build it differently —
smaller, local, distributed,
not dependent on whoever owns the biggest data center.
#AI #Decentralization #FOSS #AxiomOS
I've spent most of my life watching what harm looks like when systems fail people.
Lack of access to education. Lack of privacy. Lack of genuine representation. Lack of infrastructure that serves rather than extracts.
The past few days of posts have been about those problems.
I've been quietly building toward solutions.
If any of it resonated — the project and the GitHub are in my profile.
#AxiomOS #FOSS #AI #Decentralization
Post 2 — the structural alternative:
Kernel-level anonymity is different.
When identity verification happens through hardware attestation rather than a database, there's no central record to subpoena. No company to pressure. No keys to hand over.
The people who most need protection get it structurally —
Not because a company promised.
Because the architecture makes it the only option.
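A minimal sketch of the idea, purely hypothetical and not the AxiomOS implementation (uses the Python "cryptography" package): the verifier only checks a signature made by a key that lives in hardware, so there is never a server-side identity table to begin with.

```python
# Hypothetical sketch of attestation-style verification.
# The point: no central database maps keys to people, so there is
# no record to subpoena, breach, or hand over.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Device side: stand-in for a key generated inside a secure element.
# In real hardware attestation the private key never leaves the chip.
device_key = ed25519.Ed25519PrivateKey.generate()
attestation_pubkey = device_key.public_key()  # shared once, not tied to a name

# Verifier side: issue a fresh challenge and check the signature.
challenge = os.urandom(32)
signature = device_key.sign(challenge)  # performed on the device

try:
    attestation_pubkey.verify(signature, challenge)
    print("attested: request came from genuine hardware")
except InvalidSignature:
    print("rejected")
```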
#Privacy #Anonymity #FOSS #Decentralization
Post 1 — the failure of promised anonymity:
For journalists protecting sources, whistleblowers exposing wrongdoing, and activists in places where dissent carries prison sentences, anonymity isn't privacy.
It's the difference between speaking and silence.
Application-layer anonymity fails these people.
A platform promising anonymity is one subpoena, one data breach, one acquisition away from breaking that promise.
The keys exist. Someone holds them.
#Privacy #Anonymity #HumanRights #FOSS

Anthropic's 2028 paper frames AI governance as a US-China competition.
Who controls the compute wins.

That framing isn't wrong. It's incomplete.
It doesn't ask whether the norms the winner sets serve the people living under them.
Constitutional, decentralized AI asks a different question:
What if no single entity controls the infrastructure that shapes how people access information, education, and each other?
That matters regardless of who wins in 2028.
#AI #Governance #Decentralization #FOSS

People work harder when the work means something.
Not because they're told to.
Because meaning generates its own energy.
AI taking over repetitive, joyless work isn't a threat.
It's a question.
What do you actually want to do?
What were you built for?
Most people never get to find out —
not because they lack drive,
but because survival consumes the bandwidth
that discovery requires.
AI that genuinely serves people
creates space for that question.
#AI #Work #Future #Education
Post 2 — the governance question:
The real question isn't which company controls the most powerful model.
It's who governs access decisions.
On what criteria.
With what accountability.
One company deciding unilaterally — even with good intentions — is a structural problem.
Transparent governance with auditable criteria is a different thing entirely.
#AI #Governance #FOSS #Decentralization

RE: https://mastodon.online/@jchyip/116584482016863736

Post 1 — the restriction problem:
A model restricted for being "too dangerous" isn't much of a restriction if comparable capability already exists in publicly available models.
The UK's AI Security Institute found comparable capability in GPT-5.5.
Smaller open-source models reproduced the results.
Restricting one model doesn't reduce the threat.
It just limits the defensive use case
while offensive capability exists elsewhere.
#AI #Cybersecurity #governance #decentralization