Diverse perspectives on AI from Rust contributors and maintainers
https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
I enjoyed reading these perspectives; they are reasoned and insightful.
I'm undecided about my stance on gen AI in code. We can't look only at the first-order, immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.
In other areas (prose, literature, email) I am firm in my rejection of gen AI.
I read to connect with other humans; the price of admission is spending the time.
For code I am not as certain: nowadays I don't regularly see it as artwork or human expression. It is a technical artifact in which craftsmanship can be visible.
Will gen AI become the equivalent of a compiler, with everyone in 20 years depending on their proprietary compiler/IDE company?
Can it even advance beyond the patterns and approaches we will have built by then?
I have many more questions than answers, and both wholesale embrace and wholesale rejection feel foolish.
> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it
I believe this is the natural end-state for LLM-based AI. But the danger of these companies being, even briefly, worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual-property concerns that they never extended to anyone else while building their own models. I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.
All three of OpenAI, Google, and Anthropic have already complained about their LLMs being ripped off:
https://www.latimes.com/business/story/2026-02-13/openai-acc...
https://cloud.google.com/blog/topics/threat-intelligence/dis...
https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...
The problem is that the cat is already out of the bag on the technology. Anyone can go over to Huggingface, follow a cookbook [0], and build their own models from the ground up. Sam cannot prevent that, nor can he stop other organizations from releasing full open-weight/open-training-data models under permissive licenses, which give individuals the ability to modify those models as they see fit. He wishes he had control over that, but he doesn't, nor will he ever.
I know someone who just spent 10 days of GPU time on an RTX 3060 building a DSLM [0] that outperforms existing, VC-backed (including by Sam himself) frontier-model wrappers. It runs on sub-$500 consumer hardware and produces 100% accurate work product, which those frontier-model wrappers cannot do. That a two-man team in a backwater flyover town can pull this off speaks to how badly out of the bag the tech is.
The money isn't going to be in building the biggest possible models on all the data. It's going to be in building models that solve specific problems and can run affordably within enterprise environments, trained against proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.
[0] https://www.gartner.com/en/articles/domain-specific-language...