Diverse perspectives on AI from Rust contributors and maintainers

https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html

Summary - Rust Project Perspectives on AI

AI ultimately breaks the social contract.

Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.

With AI, despite its usefulness, you are never sure if it understands these values. That might be somewhat embedded in the training data, but we all know these properties are much more swayable and unpredictable than those of a human.

It was never about the LLM to begin with.

If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself but assigns it to a coding assistant, for better or worse I will 100% accept it at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, you would think twice before merging.

I mean, we already see this in various GitHub projects. There are open-source solutions that whitelist known contributors, and it appears that GitHub might let you control this too.

https://github.com/orgs/community/discussions/185387

Exploring Solutions to Tackle Low-Quality Contributions on GitHub · community · Discussion #185387

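As a sketch of what that gating can look like mechanically (assuming the public GitHub REST API; the repository name is illustrative), a bot can simply check whether a PR author already has commits in the repo before letting the PR into the normal review queue:

```python
import requests

REPO = "example-org/example-repo"  # hypothetical repository
API = f"https://api.github.com/repos/{REPO}"

def is_known_contributor(username: str) -> bool:
    # GET /contributors lists accounts that already have commits in the repo.
    # (A real bot would authenticate and paginate; this checks one page.)
    contributors = requests.get(f"{API}/contributors").json()
    return any(c["login"] == username for c in contributors)

# e.g., route first-time authors to a manual triage queue instead of review
if not is_known_contributor("some-new-account"):
    print("first-time contributor: hold for manual triage")
```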

Generational churn breaks the social contract.

You all using Latin and believing in the old Greek gods to honor the dead?

Muricans still owning slaves from Africa?

All ways in which old social contracts were broken at one point.

We are not VHS cassettes with an obligation to play out a fuzzy memory of history.

> AI ultimately breaks the social contract

Business schools teach that breaking the social contract is a disruption opportunity for growth, not a negative.

The Hacker in Hacker News refers to "growth hacking" now, not hacking code.

It depends who you ask.

You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing, although I am sure some will find opportunities for growth.

After all, the phoenix must burn to emerge, but let's not romanticise the fire.

> You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing

I am not saying it's a good thing, just that it's a common attitude here

I suppose it didn't come through in my original post, but I was trying to be critical

An agent is still attached to an accountable human. If it is not, ignore it.

How do you figure out which is the case, at scale?

The problem is that it acts as an accountability sink even when it is attached.

I've had multiple coworkers over the past few months tell me obvious, verifiable untruths. Six months ago, I would have had a clear term for this: they lied to me. They told me something that wasn't true, that they could not possibly have thought was true, and they did it to manipulate me into doing what they wanted. I would have demanded, and their manager would have agreed, that they needed a severe talking-to.

But now I can't call it a lie, both in the sense that I've been instructed not to and in the sense that it subjectively wasn't. They honestly represented what the agent told them was the truth, and they honestly thought that asking an agent to do some exploration was the best way to give me accurate information.

What's the replacement norm that will prevent people from "flooding the zone" with false AI-generated claims shaped to get people to do what they want? Even if AI detection tools worked, which I emphasize that they do not, they wouldn't have stopped the incidents that involved human-generated summaries of false AI information.

I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).

Right now, the biggest issue open-source maintainers are facing is an ever-increasing supply of PRs. Before coding assistants, those PRs didn't get pushed, not because they were never written (though there were obviously fewer of them), but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.

LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

So I don't think the question is whether machine-generated code is low quality at all, because that is hard to judge, and frankly coding assistants can certainly produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing number of rejections.

By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.

If it doesn't pass the smell test moments after the link is opened, it gets deleted.
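For the curious, a minimal sketch of what such a cleanup script might look like, assuming PRs get tagged with a hypothetical llm-generated label and using the GitHub REST API; the repository name, label, and grace period are all illustrative:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

REPO = "example-org/example-repo"  # hypothetical repository
API = f"https://api.github.com/repos/{REPO}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
MAX_AGE = timedelta(days=14)  # illustrative grace period

def close_stale_llm_prs():
    # List open PRs; a real script would paginate past the first page.
    prs = requests.get(f"{API}/pulls", headers=HEADERS,
                       params={"state": "open"}).json()
    for pr in prs:
        labels = {label["name"] for label in pr["labels"]}
        opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        if "llm-generated" in labels and datetime.now(timezone.utc) - opened > MAX_AGE:
            # Closing a PR is a PATCH on the pull request with state=closed.
            requests.patch(f"{API}/pulls/{pr['number']}", headers=HEADERS,
                           json={"state": "closed"})

if __name__ == "__main__":
    close_stale_llm_prs()
```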

Prioritizing or deferring to existing contributors happens in pretty much every human endeavor.

As you point out, this of course predates the age of LLMs; in many ways it's basic human tribal behavior.

This does have its own set of costs and limitations, however. Judgement is hard to measure. Humans create sorting bonds that may optimize for prestige or personal ties over strict qualifications or ability. The tribe is useful, but it can also be ugly.

Perhaps in a not-too-distant future, in some domains or projects, these sorts of instincts will be rendered obsolete by projects willing to accept any contribution that satisfies enough constraints, thereby trading human judgement for the desired mix of velocity and safety. Perhaps as the agents themselves improve, this tension becomes less an act of external constraint and more an internal guide. And what would this be, if not a simulation of judgement itself?

You could also do it in stages, i.e., have a delegated agent promote people to some purgatory where there is at least some hope of human intervention to attain the same rights and privileges as pre-existing contributors, if said agent deems your attempt worthy enough. Or maybe, to fight spam, an earnest contributor will have to fork over some digital currency, essentially paying the cost of requesting admission.

All of these scenarios are rather familiar in terms of the history of human social arrangements.

That is just to say, there is no destruction of the social contract here. Only another incremental evolution.

>It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.

In other words, one has to lean into the exact opposite tendencies of those which generally make people reach for AI ;)

I'm not sure there is a "normal" tendency to reach for AI. But there is certainly a parallel in that, say, JavaScript and PHP have a reputation of being preferred by barely able people who make interesting and useful things with atrocious code.

I've seen Rust codebases that would make you cry, along with perfectly well-architected applications written in both Perl and PHP. You're just playing into common language-silo stereotypes. A competent developer can author code in their language of choice, whatever that may be. I'm not sure "reaching for AI" implies anything besides that some folk prefer that tool for their work. I personally don't have a tendency to reach for AI, but that doesn't somehow imply they or I are "lesser" because of it.

It does to executives who sign the checks for AI usage contracts.

The implication being that execs want folks who "reach for AI" to meet some arbitrary contract targets? Sounds like optimizing for the wrong things but I've seen crazier schemes.

In my opinion the end goal of those execs pushing AI is the age old goal of seizing the means of production (of software in this case) by reducing the worker to a machine. It'll likely play out in their favor honestly, as it has many times in the past.

The industry and the wider world are full steam ahead with AI, but the following takes (from the article) are the ones that resonate with me. I don't use AI directly in my work for reasons similar to those expressed here[1].

For the record, I'll use it as a better web search or an intro to a set of ideas or a topic. But I no longer use it to generate code or solutions.

1. https://nikomatsakis.github.io/rust-project-perspectives-on-...


I just completely shifted my mind on that as well. I used to think I could just AI-code everything, and it worked at first because I started with a good codebase that I had built. After a while, though, it was the AI's codebase, and neither it nor I could really work in it until I untangled it.

I enjoyed reading these perspectives; they are reasoned and insightful.

I'm undecided about my stance on gen AI in code. We can't just look at the first-order and immediate effects; we also have to look at the social, architectural, power, and responsibility aspects.

For other areas (prose, literature, emails), I am firm in my rejection of gen AI. I read to connect with other humans; the price of admission is spending the time.

For code, I am not as certain. Nowadays I don't regularly see it as artwork or human expression; it is a technical artifact where craftsmanship can be visible.

Will gen AI be the equivalent of a compiler, with everyone in 20 years depending on their proprietary compiler/IDE company?

Can it even advance beyond patterns/approaches that we have built until then?

I have many more questions and few answers and both embracing and rejecting feels foolish.

I'm worried about a few big companies owning the means of production for software and tightening the screws.

This is my immediate concern as well. Sam said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out. Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it, since the end product ("intelligence") can be swapped out with little concern over who is providing it.
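One concrete reason for hope there: many providers and local inference servers (vLLM, Ollama, etc.) already expose OpenAI-compatible chat-completions endpoints, so "swapping the utility" is often just a base-URL and model-name change. A minimal sketch, with illustrative environment variables:

```python
import os

import requests

# Point BASE_URL at any OpenAI-compatible endpoint (hosted or local)
# to swap providers without touching the calling code.
BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")
MODEL = os.environ.get("LLM_MODEL", "gpt-4o-mini")  # illustrative default

def complete(prompt: str) -> str:
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```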

> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

I believe this is the natural end-state for LLM-based AI, but the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual property concerns of a kind they never showed anyone else while building their models. I don't think it is far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.

All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.

https://www.latimes.com/business/story/2026-02-13/openai-acc...

https://cloud.google.com/blog/topics/threat-intelligence/dis...

https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...

OpenAI accuses China's DeepSeek of stealing AI technology

Which is a wildly hypocritical tack for them to take considering how all their models were created, but I certainly wouldn’t be surprised if they did.

The problem is the cat is already out of the bag on the technology. Anyone can go over to Hugging Face, follow a cookbook [0], and build their own models from the ground up. He cannot prevent that, nor can he stop other organizations from releasing fully open-weight / open-training-data models on permissive licenses, which let individuals modify those models as they see fit. Sam wishes he had control over that, but he doesn't, nor will he ever.

[0] https://huggingface.co/docs/transformers/index

Transformers · Hugging Face

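To the point about the cookbook: pulling an open-weight model and running it locally really is a few lines with the transformers library. A toy sketch (the model name is just an example; any permissively licensed causal LM on the Hub works the same way):

```python
from transformers import pipeline

# Downloads the weights from the Hub on first run, then runs fully locally.
generator = pipeline("text-generation", model="gpt2")  # example model

result = generator("Open-weight models mean that", max_new_tokens=40)
print(result[0]["generated_text"])
```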

I'm thinking mainly of whether they manage to get some kind of regulation that makes open source impractical for commercial use, or hardware gets too expensive for small hobbyists and bootstrapped startups, or the large data-center models wildly outclass open-source models. I love using open-source models, but I can't do what I can do with 1M-context Opus, and that gap could get worse. Or maybe not; it could close, I don't know for sure. And how long will Chinese companies keep giving out their open-source models? Lots of unknowns.

I know someone who just spent 10 days of GPU time on an RTX 3060 to build a DSLM [0] that runs on sub-$500 consumer hardware and outperforms existing VC-backed frontier-model wrappers (including some backed by Sam himself), producing 100% accurate work product, which those wrappers cannot do. The fact that a two-man team in a backwater flyover town can pull this off speaks to how badly out of the bag the tech is. Where the money is going to be isn't in building the biggest models possible with all of the data; it's in building models that specifically solve problems and can run affordably within enterprise environments, built on proprietary data, since that's the differentiator for most businesses. Anthropic/OAI just do not have the business model to support this mode of model development for customers who will reliably pay.

[0] https://www.gartner.com/en/articles/domain-specific-language...

Domain‑Specific Language Models as Enterprise AI Precision Tools

This has already happened, or is happening quite fast, with cloud, where setting up your own data center, or even a few servers, is treated as a crime against humanity if it doesn't use the whole Kubernetes/DevOps/observability stack.

Given how fast the open-source models have been able to catch up with their closed-source counterparts, I think at least on the model/software side this will be a non-issue. The hardware situation is a bit grimmer, especially with the recent RAM prices. Time will tell: if in 2–3 years we can get to a situation where a 512GB–1TB VRAM / unified-memory + good fp8 rig costs a few thousand dollars rather than tens of thousands, we'll probably be good.
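For a rough sense of where the 512GB–1TB figure comes from: at fp8, weights cost about one byte per parameter, so a DeepSeek-scale model's weights alone land in that range before you add KV cache. A back-of-the-envelope sketch (the parameter count is illustrative):

```python
# fp8 stores one byte per parameter, so weights alone for a ~671B-parameter
# model (DeepSeek-R1 scale) take roughly 671 GB; KV cache for long contexts
# comes on top, which is why ~1TB rigs are the target.
params = 671e9          # illustrative parameter count
bytes_per_param = 1     # fp8
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights at fp8")  # ~671 GB
```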