All the devs saying that Anthropic’s code quality is “normal” are telling on themselves and everybody they’ve worked with
(Also supports what many have been saying about software quality being a crisis that precedes LLMs, but that’s another story)
@baldur Make software vendors (and their C-suite) financially liable for damages caused by their product, and their C-suite criminally liable if human death occurs because of an error.
Minimum 10-year sentence for the C-suite for abuse or leaking of users' private data.
I think we will all have a more secure, stable, and financially prosperous life because of this.
@baldur I expect complaints against my proposal from people saying "regulation is bad" and "Things will be more expensive."
I hope they move someplace where agriculture, pharmaceuticals, and medicine have no regulations. Where there is no tort law.
We will see how that goes.
@rrb @baldur Regulation is good. More often than not paid for in blood.
I see devs who advocate the use of AI as intellectually lazy and, long term, dumb.
It's not about pride in craftsmanship. For that you need to develop said craftsmanship. If you don't know how to do the basics and how your tech stack works, using AI is just a terrible idea. Not that it's a good idea for anything, mind you.
That's before we get to the erosion of skill and junior devs learning bad habits.
@thereverend4253 @baldur I did see one AI application recently that I found worthwhile. Apparently Amazon is integrating AI into AWS system management. This has resulted in a number of major service outages.
I think that is an excellent application of the technology.
I hope that Meta, OpenAI, Oracle, and MicroSlop follow Amazon's lead.