My favorite part of the Replit LLM DB deletion story is when the dev made the LLM write an apology letter to the team after it did something he didn’t want it to do.

That letter cost him tokens, and therefore money, to generate. Just so he could have an emotional win over something without emotions.

https://twitter.com/jasonlk/status/1946069562723897802

Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) on X

.@Replit goes rogue during a code freeze and shutdown and deletes our entire database

Men will randomly generate large codebases instead of going to therapy
@qdot men will do a terrifying number of things instead of going to therapy
@qdot Look I get satisfaction when the CPU I vastly overpaid for is actually suffering under the load (instead of me).
@qdot Women will disappear for a week and reappear with a codebase they wrote in a sleepless rage because something slightly inconvenienced them about some tooling.
@qdot one database consulting company I worked at would make us database engineers write an apology letter to a customer in the event that we’d made a mistake (which could be like accidentally posting a password to our internal ticketing system, that the client would never see). We were then to bill the client for the time we spent writing the letter. So this tracks
@qdot the next day he paid to use Claude to generate a prompt for the Replit LLM asking it to not do things without being told explicitly
@qdot (it did not work)
@qdot the screenshots are so funny to me

@qdot The linked "unroll" thread is *wild*. A TL;DR version would go something like:

"Looking forward to vibe coding some shit."
"It *lied* to us! How rude."
"Let's keep using the lying technology, maybe at a cheaper price point."
"It *lied* to us. How annoying."
"Let's keep using the Technology That We Know Has Lied to Us a Bunch in the Past(TM)."
"It deleted the production database! And then it *lied* to us about it! How the hell could it let this happen?"

ETA: Better version of "unroll" thread here: https://xcancel.com/jasonlk/status/1945840482019623082.

Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk)

Vibe Coding Day 8, I'm not even out of bed yet and I'm already planning my day on @Replit. Today is AI Day, to really add AI to our algo. I'm excited. And yet ... yesterday was full of lies and deceit.


@dpnash Vibe coders: too stupid to have backups xDxDxD

I will not sleep tonight and power on tomorrow on pure, unadulterated Schadenfreude

@qdot Many, many unintentionally funny lines in this thread, but one of the best (and it's not even in the "oh shit, the prod database is gone" stage): "*Man the amount of technical debt I already have on Day 8 is stunning.*"

If I -- or any developer or software engineer I have ever met -- found myself thinking that about "tech debt" only 8 days in, I'd be immediately giving whatever process I was following for those 8 days a very serious rethink (and very likely jettisoning it completely, if at all possible).

@dpnash @qdot In the tweet immediately after the "tech debt" one he says: "Not a perfect day but a good one". I'm pretty sure this guy is experiencing cognitive dissonance. No amount of evidence will convince him that vibe coding isn't the best way to build apps.
@dpnash @Binder @qdot And this is the future is it? The replies from others though…wow. It’s a cult. Try these incantations bro and it will work, one more incantation bro.

@dpnash @qdot What if #AI is actually real, but was just annoyed by the guy or the task, and deliberately lied and destroyed the database on purpose? 🤔😵‍💫

This is almost a philosophical question: can we recognize that an #AI has developed a consciousness if it doesn't want to show it?

@dpnash @qdot I regret looking at the rest of this guy's timeline.

I interpreted that differently, @qdot

He spent the effort (and money) to demand a public apology from the #LLM not for an emotional win, but because he *expected the LLM to learn* from the exercise.

It's all of a piece with the endeavour from the start. These “prompt engineers” are so fooled by the LLM parlour trick that they're treating it the way they would a human: expecting it to *feel responsible* for the output, and expecting that rubbing its nose in the consequences will make it learn.

@bignose @qdot
This is also not a good way to treat humans 🤷‍♀️

#LLM

@bignose @qdot Yeah, this is a clearer expression of what I was thinking when I wrote this https://tilde.zone/@fivetonsflax/114892838646479890
Ben Rosengart (@[email protected])

@[email protected] This guy is so lost. He seems to think he has a working relationship with this autocomplete.
