Some people before #LLMs:

"But writing #documentation is so time-consuming. We have no time for that!"

"We won't adopt #Rust as our service is not performance-critical."

----------------

The same people after LLMs:

"We need some more docs over here... and please update that SKILL.md. Otherwise, how should our LLM navigate our codebase and know what to do?"

"#RustLang is awesome! LLMs make way fewer errors now with strict types!"

Just insane!

#Society #People #AI #ArtificialIntelligence
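A minimal sketch of what the second quote is getting at (my own illustrative example, not from the original posts, with a hypothetical `find_user` function): Rust's `Option` type makes "value might be missing" part of the signature, so a generated call site that forgets the missing case fails to compile instead of failing at runtime.

```rust
// Hypothetical example: Option<T> forces callers to handle absence explicitly,
// so generated code can't silently ignore a "not found" case.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        _ => None, // no null pointers: "not found" is encoded in the type
    }
}

fn main() {
    // `find_user(2).len()` would be rejected by the compiler;
    // the caller must unwrap, match, or provide a default.
    let name = find_user(2).unwrap_or("unknown");
    println!("{name}");
}
```

The point being: whether the caller is a human or an LLM, the compiler enforces the same contract.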

@janriemer The second part especially should make people think: it's not only the LLMs that benefit from strict compilers. Oh, the irony.

@MeFisto94 @janriemer

Is it really surprising, though? Rust is famously hard to learn; it's not as if people have anything against it. LLMs remove that barrier to entry, so suddenly it's a realistic player.

@gotofritz @MeFisto94 @janriemer Then how do they review the code the LLM generates?

I'd say it increases the burden on the user, who won't know any of the types or infrastructure the LLM produces. You have to read up on all of it immediately, instead of piece by piece, using only things you already know.

@labsin Yeah, it's basically a law of physics:
1. LLMs try to predict more and more tokens while their context gets fuzzier and fuzzier
2. Entropy increases exponentially
3. The number of bugs increases exponentially

@gotofritz @MeFisto94

@janriemer @labsin @MeFisto94

I don't understand why the context would get fuzzier and fuzzier

@gotofritz Yeah, sorry, that was badly phrased.

To be more precise: the quality of LLM output decreases the more context the model has to process. This is called Context Rot:

https://research.trychroma.com/context-rot

So for your weekend project, LLM output might be good enough, quality-wise. But as soon as you deal with enterprise apps with >100,000 LoC, all bets are off!

@labsin @MeFisto94

Context Rot: How Increasing Input Tokens Impacts LLM Performance

@janriemer @labsin @MeFisto94

Oh, _that_... but it's well known. That's why you work on small modules at a time and clear the context regularly. I don't really see a problem there.