The *real* secret of prompt engineering?

"It looks okay to me."

https://codemanship.wordpress.com/2025/05/09/the-real-secret-of-prompt-engineering/

The Real Secret of Prompt Engineering

Since early 2023, I’ve been on a journey evaluating claims about the capabilities of generative “A.I.” (yep, still gets air quotes). I’ve tried to reproduce some of the more…

Codemanship's Blog

@jasongorman I think you may have hit the proverbial nail on the head there. It’s a quality thing. If you’re happy shipping 💩, or don’t know what good looks like and so don’t recognise 💩, then it’s great. You can generate steaming piles of it really quickly, which is great when you’re simply passing it on to someone else to maintain.

The modern equivalent of the person selling the used car that turns out to be a dud - and when you go back to get your money back, they’ve vanished.

@jasongorman
#GenerativeAI is based on #statistics. So it is all about #expectations

And it can meet expectations without being correct.

In addition, the cartoon is an example of 'normalization' in the sense of reader response criticism: People adapt what they see to their expectations.

@jasongorman

It is interesting looking at the cartoon.

"Yup, I followed the story and got the joke". Mostly it was fine but (and I hadn't clocked it was AI generated at this point) there was a feeling of "wrongness" and unease, then a closer look picks up all the inconsistencies.

@futurebird has commented on feeling nauseated by AI art.

I wonder if anyone is tracking the impact AI art has on readers of the articles it illustrates; probably too subtle an effect for click counts to pick up.

@jasongorman @futurebird
@thirstybear

Is anyone vibe budgeting?

"I gave OpenAI access to all my bank accounts, I route any bills and invoices to it and let it manage my budget.

So much time saved!"

#ai

@jasongorman as you know, I don't really agree that LLMs are useless; overhyped, absolutely.

That being said, this does match my experience, but I don't think it's a reason to discard them.

And yes, sometimes it gives crap (yesterday I asked it to refactor some .. Collection classes to be more DRY while avoiding inheritance. Guess what it made 😅)

And point 2 is totally true, and I'd rephrase it as advice: if the output of an LLM isn't to your liking, adjust the initial prompt and try again.

@jasongorman as for the "looks okay to me": that works for me as long as I don't do the looking. Recently I've noticed I prefer to let the LLM agent edit either the tests or the code (and let it run the tests and fix itself).

But though I disagree with your conclusion/sentiment, I think you hit the nail on the head: how useful an LLM is depends greatly on your ability to evaluate.

I'd poo my pants vibe coding in a code base without tests 🤪

@jasongorman

Kind of the secret of "software engineering" (with air quotes) too, so it all makes sense.

@jasongorman Interestingly, I asked gpt-4o to fix the image you provided with one prompt, without telling it what was wrong (actually, 2 prompts, because it first explained the changes and why, then asked whether to apply them), and the result "looks okay to me". 🙂

Though running another similar prompt could lead to a different result, so the randomness is definitely not failsafe.

Still, the changes are impressive, and I wouldn't have been able to draw such a cartoon myself in such limited time.

@xoofx What happened to the guy in the t-shirt? And where did the hoodie guy get that coffee?
@jasongorman seriously? So you need the coffee to be in the image? You can't imagine that it could be on the table, outside the frame? The guy with the t-shirt? There are 4 guys: 1 in a suit, 2 in t-shirts sitting on a bench, and 1 in a hoodie who goes back to his chair. What is really your point here, in the end? Trolling, or?
@jasongorman
For a comic artist (and teacher) like me, that comic is just awful in so many ways!
@mullana But a *lot* of people have said "It looks okay to me". People's confidence in generative "A.I." seems to depend on how much they understand or pay attention to the output.

@jasongorman

Yes! That's what I noticed as well.
People never wonder why "A.I." only makes mistakes in fields they're familiar with and works perfectly well in all other fields. 🙄 The perfect Dunning-Kruger machine!
If you don't have anything you're good at, or especially familiar with, "A.I." looks like the perfect solution for anything!

@mullana "It can't do my job, but I can totally see it taking yours."

There's a similar principle about news journalism: reports seem plausible until a story comes on that just happens to be about something you know a lot about, and you're like "No, wait, that's completely wrong!" Now what are the chances that *all* news reports are getting it wrong? 🙂

@jasongorman
When I was a kid, a famous magazine came to interview my neighbor. She worked in engineering and was married to a farmer (who was actually in landscaping and did the farming on the side).
They ran their story about the woman who manages being a (step!) mother, a farmer and an engineer all at the same time, and because their house didn't look enough like a farm, they placed her, with a wheelbarrow full of straw, in front of my other neighbor's house.

That's when I knew you never trust the media.

@mullana I've had brushes with news media over the years for various things. I came to realise that they'd already pretty much written the story and were only talking to me to get quotes and soundbites to support their narrative.