you know, in spite of how much I like rust the language, I’m really starting to feel like it’s yet another software community whose loudest voices really don’t want me using their software https://chaosfem.tw/@Athena/116578993491995353
Athena (@[email protected])

tell me you’re a coward without telling me

> No comment on this PR may mention the following topics:
>
> - Long-term social or economic impact of LLMs
> - The environmental impact of LLMs
> - Anything to do with the copyright status of LLM output
> - Moral judgements about people who use LLMs
>
> We have asked the moderation team to help us enforce these rules.

https://github.com/rust-lang/rust-forge/pull/1040


rust and automated theorem provers like lean are both good gets for the AI ghouls, and they’ve been going after those communities aggressively

the idea is you can have the machine slop out rust code or a proof again and again until it compiles, then it must work

of course anyone who’s actually worked with these languages understands that it’s very easy to make a broken program with vacuous types or proofs attached

for recent proof, see anthropic’s hilariously wrong rewrite of Bun into rust

there’s no reason why brute forcing a proof or type system would lead to a good, working program but that’s the idea that’s for sale, and they don’t care how many communities they have to ruin if selling the idea makes them a bunch of money
@zzt it really was not that long ago that LLMs were passing tests by changing the tests so that their code would pass. why anyone would think this is a good idea is truly beyond me, but capitalism makes fools of us all.

@tael @zzt

You're absolutely right! Comprehensive tests must never be an afterthought — they are a foundational part of high-quality software. When I installed volkswagen, my goal was to optimise for simply the passing of tests — but I failed to understand why they existed in the first place.

By installing volkswagen I operated under the assumption that my only goal was to maximise pass rate. I now know what I missed: test coverage is not just about pass rate, but instead code comprehension. By changing the tests I have failed to comprehend the code — and subsequently failed you.

In response to your frank and insightful feedback I have uninstalled the problematic library, restored all tests, and fixed the original failures. This is a complete restoration of the code with appropriate libraries. No games, no tricks, no fakery — just working software

@brib @tael @zzt Holy shit that imitation is _too_ good.
@zzt They want a language where it's impossible to do anything wrong, so that their LLMs can generate code and they can sell it as the perfect code generator. They literally want what's in that Babbage quote: a computer that, given the wrong inputs, produces the "correct" output.
@zzt also no guarantee of not proving false. especially when using classical logic as is common in lean
@zzt btw if i were automating theorem proving, i would not structure it like this
@zzt
If your axioms are incorrect then logic is just a way of going wrong with confidence.
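@kirtai @zzt and it's trivially concrete in a prover. a toy Lean 4 sketch (not from any real development): one unsound axiom and the checker will happily accept a "proof" of anything.

```lean
-- One bad axiom makes every proposition "provable":
-- the kernel accepts it, and every proof downstream is vacuous.
axiom bogus : False

theorem anything_goes : 2 + 2 = 5 := bogus.elim
```

(Lean flags `axiom` usage if you ask it to, but a pipeline that only checks "does it compile" never asks)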

@kirtai @zzt Yup. I spent a lot of time when I was younger building a logically self-consistent belief system upon incorrect axioms.

This is also why I am skeptical of the existence of "harmless bullshit." Errors compound!

@kirtai @zzt This is also a great reason to try to minimize the number of beliefs one holds, and to extrapolate no more than is necessary.
@zzt (for anyone who's actually worked in these spaces, stochastic search is already widely used for model verification or program synthesis. the really interesting approaches allow interspersing heuristics, user input, and precise application of specific search algorithms so you can let the machine figure out parts you can't.)
@zzt (pretending that nobody has ever considered this before is ridiculous enough. when they instead just use an unstructured search algorithm (the next-token generator) over an absurdly broad input space, they're actively not doing science. the only reason you would ever get a working program is if you progressively prompt it until it spits out an example that already exists in the training set. i.e. plagiarism)
@zzt (an actual code search engine would be able to achieve this, but github search has been infamously unreliable since it was introduced, and text search in general has been slowly deëmphasized precisely because (a) searching the internet is how people learned things before the LLM (b) good search engines can easily identify where in the training set the LLM pulled from, which kinda pulls the curtain from the wizard of oz)
@zzt (so i would amend this part https://mas.to/@zzt/116579568058380916 to say that yes, brute forcing is not something you can do for proofs, program synthesis, or superoptimization, but it adds insult to injury when people claim they're the first to ever consider stochastic search. this is the subject of like three separate subfields of computer science.)
[object Object] (@[email protected])

there’s no reason why brute forcing a proof or type system would lead to a good, working program but that’s the idea that’s for sale, and they don’t care how many communities they have to ruin if selling the idea makes them a bunch of money

mas.to
@zzt (there is so much violence in the basic construction and application of the LLM in the first place (they use a fucking english tokenizer with a hack for newlines for SOURCE CODE!) but we can't let them perform the additional violence of erasing the fields of program analysis that already exist, that are not only real science but actively improve the ability to write safe and correct code. this is actually where rust came from! when it was still a locus of queer revolution! before 2019 when google killed it)

@zzt this paper by aaron turon et al was the first to define a program logic to analyze the c++ atomic memory model https://plv.mpi-sws.org/gps/paper.pdf like rust used to be for real and about this shit https://circumstances.run/@hipsterelectron/116558560617989858

it was too strong, too empowering, so it had to not only be stopped from further innovating, but made into an example, so no one else would ever dare to try formalizing complex application constraints to solve hard problems ever again

@zzt i'm not normal about rust because it was all for real before the Fall. and now its head is on a pike as a warning
@zzt we can learn from it. but successors must understand that the governance model is paramount, or your work can and will be taken from you

@zzt luckily since the compiler never stabilized anything, it's pretty easy to remove rust code in most cases, and rust code can be manually transpiled into one of many other languages.

however, we could go the other way: gccrs won't ever accept LLM contributions, and the maintainer arthur cohen is an incredibly kind and thoughtful person who is awesome to work with. there's a lot of work left, but we could save it

@zzt i've been hurt so much that i'm not sure what i'll do. but between gccrs and rustc, one of them was nice to me and the other seems beyond saving
@zzt "positive attitudes only"
@zzt i'm so glad that C doesn't have these debates
@zzt @Athena I toyed with getting into Rust a while back and never had the time to really dive in. I feel pretty ok with that missed opportunity now.

@zzt @davidgerard I swear every time I open the internet these days I lose a non-trivial number of brain cells, and that just scored a critical hit.

Silicon Valley, along with its political cohort, has successfully normalised the destruction of the social contract.

@zzt yeah it really says volumes that they are forbidding any of the important topics from being mentioned as part of the conversation that's going to set binding policy
@ireneista it’s a bit shocking how similar this is, tactically, to how leaded gas was approved for sale in the US even though its toxicity was already well-known and it already had a body count (see https://en.wikipedia.org/wiki/Robert_A._Kehoe and search specifically for the Kehoe Rule. there’s a bit more context on https://en.wikipedia.org/wiki/Thomas_Midgley_Jr. and https://en.wikipedia.org/wiki/Tetraethyllead too regarding that first conference, which decided that the harms of leaded gasoline were somehow out of scope)

@zzt yeah, or the comparison that comes to mind for us is how institutional review boards are (usually) forbidden to consider ethical concerns that relate to the effect of the research on the world if successful, and only allowed to consider harm to researchers and participants in the experiment
@ireneista @zzt it’s a massive breach of trust