| website | https://www.puercopop.com/ |
| sourcehut | https://sr.ht/~puercopop/ |
| mod of | https://piefed.social/c/peru |
I wrote a blog post announcing the new GUI for Kap.
People keep assuring me that LLMs writing code is a revolution, that as long as we maintain sound engineering practices and tight code review they're actually extruding code fit for purpose in a fraction of the time it would take a human.
And every damned time, every damned time any of that code surfaces, as Anthropic's flagship offering just did, it's somehow exactly the steaming pile of technical debt and fifteen-year-old Stack Overflow snippets your careful oversight supposedly ruled out.
Can someone please explain this to me? Is everyone but you simply prompting it wrong?
It's a good thing programmers aren't susceptible to hubris in any way, or this would have been so much worse.
RE: https://infosec.exchange/@0xabad1dea/116308891820413082
They literally vibe coded their way straight to a single 9. Amazing
today on how to tell a king his plan is stupid in the Warring States:
(The King of Wei wanted to attack the capital of a rival nation. One of his citizens said to him:) So, a funny thing happened on my way here. I was at the Great Crossing, and I saw a guy heading due north. He said to me: "I'm going to Chu!"
I said to him, "Sir, if you mean to make for the southern kingdom of Chu, how are you going to get there heading north?"
He said, "I've got a good horse." Sir, no matter how good your horse is, this isn't the road to Chu.
He said, "I can spend whatever it costs." Sir, no amount of money or means can make this the road to Chu.
He said, "My driver's great." Yeah, great at driving you further and further away from Chu! This guy, he really needed to TURN AROUND.
original text: https://ctext.org/pre-qin-and-han?searchu=%E7%8A%B9%E8%87%B3%E6%A5%9A
For the 1,000th time: "AI" does not have agency and cannot think and cannot act.
Chatbots cannot "evade safeguards" or "destroy things" or "ignore instructions".
They do one thing and one thing only: string tokens together based on the statistics of which tokens appear near which in a data corpus.
If you attribute any deeper meaning to this, it's a sign of psychosis and you should absolutely never use chatbots; possibly you should even touch grass.
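To make the "statistics of proximity" point concrete: here's a toy bigram sampler, a deliberately minimal sketch (the corpus, function names, and smoothing-free sampling are all illustrative assumptions, not how any production model is built). Real LLMs use learned neural representations over enormous corpora, but the core loop is the same shape: look at what came before, pick a likely next token, repeat.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the training data (illustrative only).
corpus = "the cat sat on the mat and the cat ran".split()

# Record which token follows which: the "statistics of proximity".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n):
    """String tokens together by sampling from observed successors."""
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("the", 5))
```

Every sentence it emits is locally plausible (each adjacent pair was seen in the corpus) while the whole carries no intent or understanding, which is the point being made above, just at a vastly smaller scale.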