In the future, we won't need programmers, just people who can describe to a computer precisely what they want it to do.
@jasongorman This is a common line of reasoning, but an interesting question is: what if that's not true? What if we could vaguely describe what we need, see a result, vaguely describe some improvements, and get by with things that are temporarily sufficient for our current needs? It sounds unattractive, but unlike current software, it would always be close to what we need _now_, as opposed to going through an 18-month laborious process of requirements gathering and development.
@mathiasverraes @jasongorman
This sounds like an argument for Agile methods. I don't think inserting AI in the process resolves the 'issues' with Agile development: lay people can't adequately describe a starting point, they're "too busy" to stay involved in the process, and they don't want to start using a new system until it's feature complete (which limits useful feedback).
@AdamDavis @jasongorman "Feature complete" is the assumption I'm trying to challenge here. No end point, just a willing assistant who more or less solves your problem today, and the next day you tell it to do "like yesterday but change this one thing" and there's never a notion of finished features.

@mathiasverraes @AdamDavis @jasongorman that "LLM-generated throwaway app" might work for simple tasks, but not for important things like pensions, passport applications, e-commerce, banking, where a high degree of correctness is required. Surely?

Or are you suggesting that people will accept approximations of bank balances, pension pots, purchases, etc ?

🤔

@matthewskelton @AdamDavis @jasongorman I'm not saying that all goes away. I'm just trying to stretch my imagination and see if there might be other forms of software that don't exist yet, for problems that are not quantifiable like bank statements.
@matthewskelton @AdamDavis @jasongorman An example: "Hey computer, put out a job ad and hire 5 line cooks". A month later: "Hey computer, hire 5 more, but make sure they are polite to customers". "Now hire a cook who's good with salads".
The user described a fluid set of criteria, but there's never a notion of accuracy or completeness. Payroll etc is of course still handled by a more traditional system.

@mathiasverraes @AdamDavis @jasongorman that makes sense to me: ad-hoc tasks that are not (yet) a consistent ongoing need.

But this also suggests to me a kind of backwards step: people just using GenAI codegen like a human assistant. It's effectively not repeatable, and so it's a step backwards towards unspecified, ad-hoc needs from business people, without the clarity that comes from actually industrialising the activities.

🤷🏼‍♀️

@matthewskelton @mathiasverraes @AdamDavis @jasongorman I was thinking of something similar in light of the news about OpenAI's move to a for-profit structure. Is a route to profitability to act as middleware that translates instructions into actions for third parties? For example, job descriptions for hiring, or something more complicated for, say, wix.com? A kind of NLP -> command layer, with added value from scheduling, feedback, suggestions, status updates, etc., depending on the richness of the underlying service provider.
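A minimal sketch of what that "NLP -> command layer" might look like, reusing the hiring example from earlier in the thread. The `translate` function stands in for an LLM call; the `HiringCommand` structure and all field names are purely illustrative assumptions, not any real API:

```python
# Hypothetical sketch: free-form instructions are translated into a
# structured command that a third-party service (here, an imagined
# hiring API) could execute. The regex-based parsing is a stand-in
# for an LLM extraction step; every name here is illustrative.
from dataclasses import dataclass, field
import re

@dataclass
class HiringCommand:
    action: str
    role: str
    count: int
    constraints: list[str] = field(default_factory=list)

def translate(instruction: str) -> HiringCommand:
    """Stand-in for an LLM that extracts a structured command."""
    count_match = re.search(r"\b(\d+)\b", instruction)
    count = int(count_match.group(1)) if count_match else 1
    constraints = []
    if "polite" in instruction:
        constraints.append("polite to customers")
    role = "line cook" if "cook" in instruction else "unknown"
    return HiringCommand(action="post_job_ad", role=role,
                         count=count, constraints=constraints)

cmd = translate("Hire 5 more line cooks, but make sure they are polite to customers")
print(cmd)
```

The value-add layer (scheduling, status updates, suggestions) would then sit on top of commands like this, rather than on the raw natural-language input.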
@ed_blackburn @mathiasverraes @AdamDavis @jasongorman kind of like codifying human input into specifications that can be usefully executed by an LLM?
@matthewskelton @mathiasverraes @AdamDavis @jasongorman But one can envisage a more comprehensive offering for richer products. UX can be asynchronous and synchronous. For example, here's an update on your hiring request, if you tweak this requirement you'll have a larger pool, do you have time to triangulate? The interface could be a prompt or even via Siri for example. A lot I would imagine would be driven by the business models of the LLM-as-a-service. Less demo-ware; more value.
@matthewskelton @mathiasverraes @AdamDavis @jasongorman Expanding TAM by lowering barriers to I18n and improving accessibility. Commoditising integrations will make it easier to add more, with players like OpenAI potentially setting standards. While many GenAI use cases aren't convincing, I recognise its potential for accessibility, improving UX, and acting as a multiplier. Realising this value will require substantial design research and experimentation, but the opportunity is evident.
@matthewskelton Of course, at the time, I didn’t realise what I’d described. We now have async flows: agents and a protocol for LLMs to communicate: MCP. This enables products to use LLMs as an implementation detail. In other words, a good product will survive, in contrast to the tech bro LLM scramble, which is just a waste of capital and time because it doesn't solve a problem people have.