Okay, so here's a bit of AI on this usually AI-less account. I read this announcement from Shopify about their new AI tool. It comes right out of the gate with an insight:

> Shopify’s Augmented Engineering DX team tackled developer productivity challenges like flaky tests and low test coverage using AI agents. **They discovered that breaking complex tasks into discrete steps was key to reliable AI performance**, leading to the creation and open-sourcing of Roast, a tool designed to structure AI workflows effectively.

(emphasis mine)

I have a few questions and remarks:
1) What did their developers do before? Not break complex tasks down into discrete steps?
2) That insight actually points to a major loss in the whole AI story: breaking complex tasks into discrete steps is, in many organisations, the largest part of what programmers actually do.
3) Even the workflow control they present is something ops engineers are already paid full-time for (senior YAML manglers ;)); we already have well-established ways to make computers do things predictably.

Like, I don't even want to dismiss the novelty here, but this is not the 10x improvement people want, and it comes at the cost of adding a huge, expensive, and unpredictable component to your system.

Choose wisely.

https://shopify.engineering/introducing-roast

Introducing Roast: Structured AI Workflows Made Easy (2025) - Shopify

Roast is a convention-oriented workflow orchestration framework designed specifically for creating structured AI workflows that interleave AI prompting with normal non-AI execution. It provides a declarative approach primarily using YAML configuration and markdown prompts, giving AI agents the guardrails they need to solve developer productivity problems at scale.
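For context, a workflow in this style might look something like the sketch below. This is purely illustrative based on the description above, not verified Roast syntax; the workflow name, step names, and comments are all made up.

```yaml
# Hypothetical workflow.yml in the style the announcement describes:
# declarative YAML steps, each AI step backed by a markdown prompt.
name: fix_flaky_tests        # made-up workflow name
steps:
  - analyze_test_output      # non-AI step: parse CI logs
  - identify_flaky_patterns  # AI step: prompt lives in a markdown file
  - propose_fix              # AI step: generate a candidate patch
  - run_tests                # non-AI step: verify the patch deterministically
```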

@skade nice tool they built to solve problems they wouldn’t have without AI 🤷‍♀️

@janl Very much my conclusion. I always try to keep a professional "maybe there's something I'm missing", but boiled down, this is YAML workflows with unpredictable execution. Huh?

I mean, this is what everyone around me reports: by the time they've configured it and then found all the places where it failed and broke, they've spent as much time as a straightforward coding session would have taken.

@skade looks like they took what used to be a simple CI script and made it more expensive to run with “AI”.

“Analyze this directory for code quality issues” is literally just “rubocop $dirname”.

@skade and “generate unit tests for all public methods” assumes that the implementation of those methods is already correct, which is entirely counter to the idea of unit tests.
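@skade to make that concrete, here's a tiny hypothetical Ruby sketch (method and values made up, not from the post) of how generating tests from an existing implementation locks a bug in rather than catching it:

```ruby
# Hypothetical buggy method: the discount is silently skipped (returns nil)
# for prices at or below 100 instead of returning the price unchanged.
def discount(price)
  price * 0.9 if price > 100
end

# A test "generated from" this implementation asserts current behaviour,
# bug included, so the suite passes and the bug is now the specification:
raise "unexpected" unless discount(200) == 180.0
raise "unexpected" unless discount(50).nil?  # the bug, now enshrined
```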

@skade "What did their developers do before? Not break down complex tasks into discrete steps?"

I have seen a lot of AI posts where problems are solved that already had different solutions, but those solutions weren’t taken. So I wouldn’t be surprised if people didn’t do sensible things before.

@skade fwiw I was ready to mock this (Tobi is an evil monster) but it does look like a significantly better experience than some similar tools we’ve been trying, so I immediately forwarded to my team (after seeing your post)

AIs are horrible at writing code (better at debugging), but this seems slightly less horrible than some alternatives, and there are some (very, very narrow) cases where it could come in handy.

Truly we live in hell and I hope this hype wave ends soon.

@skade There's something to keep in mind here: this also means that they're okay with "degrading" the actual work just a little bit, because it will, at some point, allow them to fire a big part of their IT department, ensuring maximum profit gain.

It's never been about innovation or improving productivity or such.

It's always been about maximizing profits for the class that owns the means of production.

@skade I believe I can answer these questions next time we meet. tl;dr: what they're saying makes a lot of sense.
@skade "It's like having a junior developer who can handle the parts you haven't figured out yet"
That, uh, doesn't sound like a good idea.