Some people before #LLMs:

"But writing #documentation is so time-consuming. We have no time for that!"

"We won't adopt #Rust as our service is not performance-critical."

----------------

The same people after LLMs:

"We need some more docs over here... and please update that SKILL.md. Otherwise, how should our LLM navigate our codebase and know what to do?"

"#RustLang is awesome! LLMs make way fewer errors now with strict types!"

Just insane!

#Society #People #AI #ArtificialIntelligence

@janriemer especially the second part should cause people to think, as it's not only the LLMs that benefit from strict compilers. Oh the irony
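
A toy illustration (my own made-up snippet, not from anyone's real codebase) of the kind of slip a strict compiler catches before review even starts:

```rust
// Sketch: in a dynamic language an LLM can happily hand back "nothing"
// and the bug surfaces at runtime; rustc forces the missing-value case
// to be handled at compile time via Option.
fn find_port(config: &str) -> Option<u16> {
    config
        .lines()
        .find_map(|l| l.strip_prefix("port="))
        .and_then(|v| v.parse().ok())
}

fn main() {
    let cfg = "host=localhost\nport=8080";
    // `find_port(cfg) + 1` would not compile: you can't add to an Option.
    // The compiler makes you say what happens when the port is absent:
    match find_port(cfg) {
        Some(p) => println!("port {p}"),
        None => println!("no port configured"),
    }
}
```

A dynamically typed language would accept the equivalent of `find_port(cfg) + 1` and only blow up at runtime; rustc refuses to build it at all.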

@MeFisto94 @janriemer

Is it really surprising though? Rust is famously hard to learn; it's not as if people have anything against it. LLMs remove that barrier to entry, so suddenly it's a realistic contender.

@gotofritz @MeFisto94 @janriemer Then how do they review the code the LLM generates?

I'd say it increases the burden on the user, as they wouldn't know any of the types or infra the LLM produces, so you have to read up on all of it immediately instead of piece by piece, only using things you already know.

@labsin @MeFisto94 @janriemer

When I do it, I ask the LLM to explain it to me step by step, I look up some of the terms, I ask another LLM to review the code and find weaknesses in it... and of course tests. So far it's gone well 🤷

@gotofritz @labsin @MeFisto94 @janriemer Even when skilled programmers use it this way in a field where they aren't experts (asking for explanations at each step and iterating multiple times to refine and simplify the code), they can end up with things like redundant, unoptimized custom cryptographic implementations and other things that may pass the tests but significantly increase the risk of security issues. Lowering the barrier is nice, but it also creates dangerous false impressions.

@tuxmain @labsin @MeFisto94 @janriemer

...which is exactly how "normal" programming languages are learned. You build something, you put it in prod, problems arise, you deal with them and in the process your expertise increases. I am advanced in a handful of languages but it didn't just happen overnight. I made naive assumptions, stupid mistakes, and learned from them. It's no different with LLMs, just faster

What LLMs give you, though, is the ability to have one create something and another try to find badly optimised or insecure code. That goes some way toward protecting you from deploying dangerous code to prod. Up to a point ofc, which is why you want experts in the loop for the most mission-critical code.

@gotofritz Sorry to be so blunt, but this is like saying one can learn how to cook by ordering at a food delivery service.

Using #LLMs as a search engine or a _first draft_ to get an initial _surface-level_ idea of a problem is useful for digging more deeply into a certain area.
It might also be fine to use it as fancy autocomplete.

But what these "AI" companies are doing with these coding agents is stripping away the editor layer, so "you don't look too closely".

@tuxmain @labsin @MeFisto94

@janriemer @tuxmain @labsin @MeFisto94

It all depends how you use it.

If you work on the business side and use an LLM to create a small app to fetch and massage data, or a small website, because "fuck developers, they always get in the way", and have zero interest in the tech behind it, then I agree.

But if you are an engineer using it to accelerate your learning, then it can absolutely work. I believe Rust falls into the second category.

@gotofritz
I'd say it makes more sense for small interfaces that no one gives a damn about. They often take more effort to create than they're worth and mostly stay small, so if you ever want to do it properly, you can just recreate them.

I would never use it for production code if you aren't aware of the types it uses, especially if there isn't even a senior in the language to supervise.

@labsin

Yeah, eventually it becomes a risk management issue. I also wouldn't want a significant amount of it in critical production code.