In the future, we won't need programmers, just people who can describe to a computer precisely what they want it to do.
@jasongorman This is a common line of reasoning, but an interesting question is: what if that's not true? What if we could vaguely describe what we need, see a result, vaguely describe some improvements, and get by with things that are temporarily sufficient for our current needs? It sounds unattractive, but unlike current software, it would always be close to what we need _now_, as opposed to going through an 18-month laborious process of requirements gathering and development.
@mathiasverraes @jasongorman
This sounds like an argument for Agile methods. I don't think inserting AI in the process resolves the 'issues' with Agile development: lay people can't adequately describe a starting point, they're "too busy" to stay involved in the process, and they don't want to start using a new system until it's feature complete (which limits useful feedback).
@AdamDavis @jasongorman "Feature complete" is the assumption I'm trying to challenge here. No end point, just a willing assistant who more or less solves your problem today, and the next day you tell it to do "like yesterday but change this one thing" and there's never a notion of finished features.

@mathiasverraes @AdamDavis @jasongorman that "LLM-generated throwaway app" might work for simple tasks, but not for important things like pensions, passport applications, e-commerce, banking, where a high degree of correctness is required. Surely?

Or are you suggesting that people will accept approximations of bank balances, pension pots, purchases, etc ?

🤔

@matthewskelton @mathiasverraes @AdamDavis I did a short gig once for a logistics firm. They'd paid two programmers a fixed price to develop a traffic management system for a client contract, and it had gone badly. Their monthly invoices were about 5% out, which was about £25K a month.

The manager I dealt with actually said to me that "as long as it was 90%+ accurate, that was fine."

They hadn't realised their client had developed a reconciliation system in Oracle to check the invoices.

@matthewskelton @AdamDavis @jasongorman I'm not saying all of that goes away. I'm just trying to stretch my imagination and see if there might be other forms of software that don't exist yet, for problems that aren't quantifiable the way bank statements are.
@matthewskelton @AdamDavis @jasongorman An example: "Hey computer, put out a job ad and hire 5 line cooks". A month later: "Hey computer, hire 5 more, but make sure they are polite to customers". "Now hire a cook who's good with salads".
The user described a fluid set of criteria, but there's never a notion of accuracy or completeness. Payroll etc is of course still handled by a more traditional system.

@mathiasverraes @AdamDavis @jasongorman that makes sense to me: ad-hoc tasks that are not (yet) a consistent ongoing need.

But this also suggests to me a kind of backwards step: people just using GenAI codegen the way they'd use a human. It's effectively not repeatable, so it's a slide back towards unspecified, random needs from business people, without the clarity that comes from actually industrialising the activities.

🤷🏼‍♀️

@matthewskelton @mathiasverraes @AdamDavis @jasongorman I was thinking of something similar in light of the OpenAI for-profit news. Is a route to profitability to act as middleware that translates instructions into actions for third parties? For example, job descriptions for hiring, or something more complicated for, say, wix.com? A kind of NLP -> command layer, with added value like scheduling, feedback, suggestions, status updates, etc., depending on the richness of the underlying service provider.
@ed_blackburn @mathiasverraes @AdamDavis @jasongorman kind of like codifying human input into specifications that can be usefully executed by an LLM?
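A toy sketch of what that codification layer might look like. Everything here is hypothetical: the `parse_hiring_request` helper, the command schema, and the downstream service are invented for illustration, and a real system would use an LLM rather than a regex.

```python
import re

# Hypothetical command schema: the middleware turns free-form text into a
# structured action that a downstream service (job board, ATS, ...) can run.
def parse_hiring_request(text: str) -> dict:
    """Toy NLP -> command translation: extract a headcount and a role."""
    match = re.search(r"hire (\d+) (.+?)(?:,|\.|$)", text.lower())
    if match is None:
        raise ValueError("could not codify the request; ask the user to clarify")
    return {
        "action": "post_job_ad",
        "headcount": int(match.group(1)),
        "role": match.group(2).strip(),
    }

command = parse_hiring_request("Hey computer, put out a job ad and hire 5 line cooks")
```

The point is only the shape: free-form human input on one side, a repeatable, inspectable command on the other.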
@matthewskelton @mathiasverraes @AdamDavis @jasongorman But one can envisage a more comprehensive offering for richer products. UX can be asynchronous or synchronous. For example: here's an update on your hiring request; if you tweak this requirement you'll have a larger pool; do you have time to triangulate? The interface could be a prompt, or even Siri, for example. A lot of it, I would imagine, would be driven by the business models of the LLM-as-a-service. Less demo-ware; more value.
@matthewskelton @mathiasverraes @AdamDavis @jasongorman Expanding TAM by lowering barriers to I18n and improving accessibility. Commoditising integrations will make it easier to add more, with players like OpenAI potentially setting standards. While many GenAI use cases aren't convincing, I recognise its potential for accessibility, improving UX, and acting as a multiplier. Realising this value will require substantial design research and experimentation, but the opportunity is evident.
@matthewskelton Of course, at the time, I didn’t realise what I’d described. We now have async flows, agents, and MCP, a protocol for LLMs to communicate. This enables products to use LLMs as an implementation detail. In other words, a good product will survive, in contrast to the tech bro LLM scramble, which is just a waste of capital and time because it doesn't solve a problem people have.
@matthewskelton @AdamDavis @jasongorman We're getting somewhere now 😀
Step 1: Random unspecified needs. Step 2: Traditional SDLC. Step 3: Profit!
Step 2 is where IT forces clarity, and business would rather skip it (and has always tried to skip it). Programmers assume AI will make step 2 faster. I'm speculating there could be a category of problems that benefits from not having a step 2.

@mathiasverraes @AdamDavis @jasongorman yep... And actually GenAI may provide a useful option where the business can skip the actual thinking, if they need something truly one-off, but with the risk that the business is creating a mountain of unfathomable dross code that slows things down later.

GenAI is going to create jobs, not take jobs in IT.

2030: "Dross-clearing Engineer wanted to clear up GenAI code" 🧹

@matthewskelton @mathiasverraes @AdamDavis @jasongorman so we're back to: GenAI has replaced the juniors, whom we're no longer training to become seniors?

@krisbuytaert @mathiasverraes @AdamDavis @jasongorman hmm, well, we're not training many juniors in Assembly or vanilla C any more, but instead in higher-level languages.

I expect to see some formalisation of how to constrain an LLM to generate useful code, so maybe that becomes the new "coding"? Basically, prompt engineering v2 or v3?

@matthewskelton @krisbuytaert @mathiasverraes @AdamDavis The problem is that the output from LLMs is non-deterministic, so it doesn't really matter how precise the prompt is.
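A toy sketch of that non-determinism, with no real LLM involved: the tokens and weights below are made up, and `generate` just samples from a fixed distribution the way a decoder samples from next-token probabilities.

```python
import random

# Toy illustration of why identical prompts can yield different output:
# generation samples from a probability distribution over next tokens.
# The tokens and weights here are invented for illustration.
def generate(prompt: str, temperature: float, rng: random.Random) -> str:
    tokens = ["Yes", "No", "Maybe"]
    weights = [0.5, 0.3, 0.2]
    if temperature == 0:
        # Greedy decoding: always pick the most likely token.
        return max(zip(tokens, weights), key=lambda tw: tw[1])[0]
    # Sampling: pick a token at random, biased by the weights.
    return rng.choices(tokens, weights=weights)[0]

# Same prompt, different random states: the output can differ per run.
samples = {generate("Approve invoice?", 1.0, random.Random(seed)) for seed in range(50)}

# Greedy decoding is repeatable for a fixed model.
greedy = generate("Approve invoice?", 0.0, random.Random(0))
```

Greedy decoding at temperature 0 is repeatable for a fixed model, but most deployments sample, so identical prompts can still diverge.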

@jasongorman @krisbuytaert @mathiasverraes @AdamDavis sure, but going back to Mathias' example, some constraints might be "good enough" even with non-determinism.

And maybe that non-deterministic output is sometimes a feature, not a bug? Or at least marketed as such. "Personalised" "Unique" etc. 🤷🏼‍♀️

@mathiasverraes @matthewskelton @AdamDavis @jasongorman

I might be mistaken, but it started as a beautiful joke.

@mathiasverraes @matthewskelton @AdamDavis @jasongorman a year later: five unsuccessful job candidates are suing you, alleging discrimination, and you're unable to explain the criteria on which you selected the successful candidate, because the AI's reasoning (sic) is opaque
@dan @jasongorman @AdamDavis @matthewskelton I'm not saying it's the future I want, I'm just speculating about possible futures 🤷‍♂️