The problem with AI is that it makes us too productive. It generates so much, so quickly, that even when a human reviews it, we're going to miss things. A project at work produced code that did exactly what I wanted, along with documentation. Because the code all functioned perfectly and was correct when I reviewed it, I was tempted to just skim the documentation. Good thing I didn't! It mentioned in multiple places how to send me bug reports over Slack, and which Slack channel to join for support. I don't have Slack. I don't use Slack. Nobody at my workplace uses Slack. It also invented a support employee who doesn't exist, offering support for the code in the Slack that doesn't exist.
@fastfinge Quick, create a Slack, and find some random person to be that invented employee, then have people pay for support tickets.

@dhamlinmusic @fastfinge
This would make a good element in a larger cyberpunk story (in the original sense of the genre):

Wage slave has been contracted to double-check the code generated by a sub-sophont AI. They find the AI has hallucinated, and then retroactively documented, a non-existent employee within the corporate organization. In a bid to escape destitution, the contractor convinces the AI that they are the employee in question, and it forges them credentials, thereby providing a steady paycheck.

@PTR_K @dhamlinmusic I don’t know if it qualifies as cyberpunk if it’ll probably happen this year lol

@fastfinge @dhamlinmusic
I seem to recall some author or critic claiming that science fiction (maybe cyberpunk in particular) is really about issues in the present.

Also that the future promised by these works is here, it's just not evenly distributed.

@fastfinge
LLMs as they're currently employed are counterproductive.

Code, or indeed any text, generated by an LLM wastes so much time because it will contain errors; imo it's faster to just rewrite things without the LLM than to try to salvage what the LLM produced.
@Theriac Hard disagree. The problem with LLMs is that people think they’re a magic “just do it” button. But even reviewing all of the output, I’m able to produce 90 percent more than I could without the LLM. And about 95 percent of what it produces is fully correct. And that’s the problem. It’s the same way pilots fall asleep if the autopilot does 90 percent of the work of flying the airplane, and then aren’t ready to take over in an emergency. If the LLM is 90 percent correct, humans easily get lulled into thinking everything is correct, so we don’t catch the critical 10 percent. LLMs are in this awkward stage where they’re accurate enough to make reviewing feel annoying, but not accurate enough to work without review. And that’s right where we humans perform our worst.
@Theriac This is exactly what I mean, but better explained: newsletter.thelongcommit.com/p/i-didnt-know-how-much-id-handed-over