Diverse perspectives on AI from Rust contributors and maintainers
https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
AI ultimately breaks the social contract.
Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.
With AI, despite its usefulness, you are never sure if it understands these values. That might be somewhat embedded in the training data, but we all know these properties are much more swayable and unpredictable than those of a human.
It was never about the LLM to begin with.
If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself but assigns it to a coding assistant, for better or worse I will accept it 100% at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, I would think twice before merging.
I mean, we already see this in various GitHub projects. There are open-source tools that whitelist known contributors, and it appears that GitHub may let you control this too.

Generational churn breaks the social contract.
Are you all still using Latin and believing in the old Greek gods to honor the dead?
Are Muricans still owning slaves from Africa?
All ways in which old social contracts were broken at one point.
We are not VHS cassettes with an obligation to play out a fuzzy memory of history.
> AI ultimately breaks the social contract
Business schools teach that breaking the social contract is a disruption opportunity for growth, not a negative.
The "Hacker" in Hacker News refers to "growth hacking" now, not hacking code.
It depends who you ask.
You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing, although I am sure some will find opportunities for growth.
After all, the phoenix must burn to emerge, but let's not romanticise the fire.
> You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing
I am not saying it's a good thing, just that it's a common attitude here
I suppose it didn't come through in my original post, but I was trying to be critical
The problem is that it acts as an accountability sink even when it is attached.
I've had multiple coworkers over the past few months tell me obvious, verifiable untruths. Six months ago, I would have had a clear term for this: they lied to me. They told me something that wasn't true, that they could not possibly have thought was true, and they did it to manipulate me into doing what they wanted. I would have demanded, and their manager would have agreed, that they be given a severe talking-to.
But now I can't call it a lie, both in the sense that I've been instructed not to and in the sense that it subjectively wasn't. They honestly represented what the agent told them was the truth, and they honestly thought that asking an agent to do some exploration was the best way to give me accurate information.
What's the replacement norm that will prevent people from "flooding the zone" with false AI-generated claims shaped to get people to do what they want? Even if AI detection tools worked, which I emphasize they do not, they wouldn't have stopped the incidents that involved human-generated summaries of false AI information.
I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).
Right now, the biggest issue open-source maintainers are facing is an ever-increasing supply of PRs. Before coding assistants, most of those PRs never got pushed, not because they were never written (though there were fewer of them), but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.
LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.
So I don't think the question is whether machine-generated code is low quality at all; that is hard to judge, and frankly coding assistants can certainly produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing number of rejections.
By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.
If it doesn't pass the smell test moments after the link is opened, it gets deleted.
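A minimal sketch of the triage logic such a cleanup script might use. The label name (`llm-generated`), the grace period, and the PR dict shape are assumptions for illustration, not details from the comment above; the actual closing step would go through the GitHub REST API (`PATCH /repos/{owner}/{repo}/pulls/{number}` with `{"state": "closed"}`) or `gh pr close`.

```python
from datetime import datetime, timedelta, timezone

# Assumed grace period before an unreviewed, flagged PR is auto-closed.
GRACE_PERIOD = timedelta(days=14)


def should_auto_close(pr: dict, now: datetime) -> bool:
    """Return True if a PR flagged as LLM-generated has outlived its grace period.

    `pr` mimics a slice of the GitHub API PR payload: a "labels" list of
    {"name": ...} dicts and an ISO-8601 "created_at" timestamp.
    """
    labels = {label["name"] for label in pr.get("labels", [])}
    if "llm-generated" not in labels:  # hypothetical label applied at triage time
        return False
    created = datetime.fromisoformat(pr["created_at"])
    return now - created > GRACE_PERIOD


if __name__ == "__main__":
    now = datetime(2026, 3, 1, tzinfo=timezone.utc)
    stale = {"created_at": "2026-02-01T00:00:00+00:00",
             "labels": [{"name": "llm-generated"}]}
    fresh = {"created_at": "2026-02-25T00:00:00+00:00",
             "labels": [{"name": "llm-generated"}]}
    print(should_auto_close(stale, now))  # stale flagged PR: close it
    print(should_auto_close(fresh, now))  # still inside the grace period
```

Keeping the decision as a pure function over the PR payload makes the policy easy to test without hitting the API; the network call to actually close the PR is deliberately left out.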
Prioritizing or deferring to existing contributors happens in pretty much every human endeavor.
As you point out, this predates the age of LLMs; in many ways it's basic human tribal behavior.
This does have its own set of costs and limitations, however. Judgement is hard to measure. Humans form social bonds and sorting mechanisms that may optimize for prestige or personal ties over strict qualifications or ability. The tribe is useful, but it can also be ugly. Perhaps in the not-too-distant future, in some domains or projects, these instincts will be rendered obsolete by projects willing to accept any contribution that satisfies enough constraints, thereby trading human judgement for the desired mix of velocity and safety. Perhaps as the agents themselves improve, this tension becomes less an act of external constraint and more an internal guide. And what would that be, if not a simulation of judgement itself?
You could also do it in stages, i.e., have a delegated agent promote people to some purgatory where there is at least some hope of human intervention to attain the same rights and privileges as pre-existing contributors, provided said agent deems your attempt worthy enough. Or, to fight spam, an earnest contributor might have to fork over some digital currency, essentially paying the cost of requesting admission.
All of these scenarios are rather familiar in terms of the history of human social arrangements.
That is just to say, there is no destruction of the social contract here. Only another incremental evolution.
>It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.
In other words, one has to lean into the exact opposite tendencies of those which generally make people reach for AI ;)
The implication being that execs want folks who "reach for AI" to meet some arbitrary contract targets? Sounds like optimizing for the wrong things but I've seen crazier schemes.
In my opinion the end goal of those execs pushing AI is the age old goal of seizing the means of production (of software in this case) by reducing the worker to a machine. It'll likely play out in their favor honestly, as it has many times in the past.
The industry and the wider world are full steam ahead with AI, but the following takes (from the article) are the ones that resonate with me. I don't use AI directly in my work for reasons similar to those expressed here[1].
For the record, I'll use it as a better web search or as an intro to a set of ideas or a topic. But I no longer use it to generate code or solutions.
1. https://nikomatsakis.github.io/rust-project-perspectives-on-...
I enjoyed reading these perspectives; they are reasoned and insightful.
I'm undecided about my stance on gen AI in code. We can't just look at the first-order, immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.
For other areas (prose, literature, emails), I am firm in my rejection of gen AI.
I read to connect with other humans; the price of admission is spending the time.
For code, I am not as certain. Nowadays I don't really see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.
Will gen AI be the equivalent of a compiler, with everyone in 20 years depending on their proprietary compiler/IDE company?
Can it even advance beyond patterns/approaches that we have built until then?
I have many more questions and few answers and both embracing and rejecting feels foolish.
> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it
I believe this is the natural end-state for LLM-based AI. But the danger of these companies being worth trillions of dollars, even briefly, is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual property concerns that they never extended to anyone else while building their models. I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.
All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.
https://www.latimes.com/business/story/2026-02-13/openai-acc...
https://cloud.google.com/blog/topics/threat-intelligence/dis...
https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...
The problem is the cat is already out of the bag on the technology. Anyone can go over to Huggingface, follow a cookbook [0], and build their own models from the ground up. Sam cannot prevent that from happening, or stop other organizations from releasing full open-weight/open-training-data models on permissive licenses, which let individuals modify those models as they see fit. He wishes he had control over that, but he doesn't, nor will he ever.
I know someone who just spent 10 days of GPU time on an RTX 3060 building a DSLM [0] that outperforms existing VC-backed frontier-model wrappers (including some backed by Sam himself). It runs on sub-$500 consumer hardware and produces 100% accurate work product, which those frontier-model wrappers cannot do. The fact that a two-man team in a backwater flyover town can do this speaks to how badly the tech is out of the bag. The money isn't going to be in building the biggest models possible with all of the data; it's going to be in building models that solve specific problems and can run affordably within enterprise environments, built on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.
[0] https://www.gartner.com/en/articles/domain-specific-language...