Diverse perspectives on AI from Rust contributors and maintainers

https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html

Summary - Rust Project Perspectives on AI

I enjoyed reading these perspectives; they are reasoned and insightful.

I'm undecided about my stance on gen AI in code. We can't look only at the first-order, immediate effects; we also have to weigh the social, architectural, power, and responsibility aspects.

In other areas (prose, literature, emails) I am firm in my rejection of gen AI.
I read to connect with other humans; the price of admission is spending the time.

For code, I am not as certain. Nowadays I don't typically see it as artwork or human expression; it is a technical artifact in which craftsmanship can still be visible.

Will gen AI become the equivalent of a compiler, so that in 20 years everyone depends on a proprietary compiler/IDE company?

Can it even advance beyond the patterns and approaches we will have built by then?

I have many more questions than answers, and both embracing and rejecting it outright feel foolish.

"I'm worried about a few big companies owning the means of production for software and tightening the screws."
This is my immediate concern as well. Sam Altman said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out.

The problem is that the cat is already out of the bag on the technology. Anyone can go over to Hugging Face, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that from happening, nor can he stop other organizations from releasing fully open-weight, open-training-data models under permissive licenses, which let individuals modify those models as they see fit. Sam may wish he had control over that, but he doesn't, nor will he ever.

[0] Transformers · Hugging Face: https://huggingface.co/docs/transformers/index

I'm thinking mainly of scenarios where they manage to get regulations passed that make open source impractical for commercial use, or where hardware gets too expensive for small hobbyists and bootstrapped startups, or where the large data-center models wildly outclass open-source ones. I love using open-source models, but I can't do with them what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close. I don't know for sure. And how long will Chinese companies keep giving out their open-source models? Lots of unknowns.

I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that outperforms existing VC-backed (including by Sam himself) frontier-model wrappers. It runs on sub-$500 consumer hardware and produces 100% accurate work product, which those frontier-model wrappers cannot do. That a two-man team in a backwater flyover town can pull this off speaks to how badly out of the bag the tech is.

The money isn't going to be in building the biggest possible models on all the data; it's going to be in building models that solve specific problems and can run affordably in enterprise environments, trained on proprietary data, since that is the differentiator for most businesses. Anthropic/OpenAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] Domain-Specific Language Models as Enterprise AI Precision Tools (Gartner): https://www.gartner.com/en/articles/domain-specific-language...