Someone decided to get an LLM to create a website summarising a bunch of libraries in the Rust compiler development ecosystem, and the whole thing is chock full of inaccuracies, misinformation, or entirely hallucinated features that don't actually exist.

It rated my parser library as slower than one that is, on every benchmark I've seen, several orders of magnitude slower.

For my diagnostic library, it hallucinated a feature that the library doesn't even have.

I have spent so many hundreds of hours curating the docs and examples for these projects. Why do people do silly crap like this?

More and more I think we need strong social norms against using LLMs in 'write mode'.

When they're used in read mode, the only person they're misinforming is you, the twit that decided to use one. In write mode? They've got the ability to disseminate false knowledge to others, an actively hostile and anti-social act. This should be seen as akin to littering or fly-tipping.

At the absolute minimum, you have a social responsibility to admit that the artifact you brought into existence was created with a language model, to minimise the fallout for others. Ideally, don't publish it at all.

@jsbarretto wait, are you saying Tronald Dump is an LLM?
It all makes sense now!
@PaulaMaddox None of his training material contained punctuation, you see