Diverse perspectives on AI from Rust contributors and maintainers
https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
AI ultimately breaks the social contract.
Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.
With AI, despite its usefulness, you can never be sure it understands those values. They may be somewhat embedded in the training data, but we all know a model's dispositions are far more swayable and unpredictable than a human's.
It was never about the LLM to begin with.
If Linus Torvalds contributed to the Linux kernel without writing the code himself, delegating it to a coding assistant instead, for better or worse I would accept it at face value. That is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor did the same, even if the code produced were ultimately high quality, I would think twice before merging.
I mean, we already see this in various GitHub projects. There are open-source tools that whitelist known contributors, and it appears GitHub might let you control this natively too.