Simon Wardley | 38 comments on LinkedIn

There appear to be three basic schools of thought on LLM/GPT changes in software, one of which seems to be an illusion. The three schools are:

- Replace People: Software engineers will be replaced by agentic swarms. It's happening fast and you'll be gone next year. This is basically where the extreme of Silicon Valley and Vibe Coding are.
- Replace Tasks: Software engineering is changing. It's adapting to a world of agentic swarms, and practices are slowly co-evolving. It'll take a bit of time.
- No Change: Software engineering won't change.

I have yet to find a single person who actually believes in "no change". The only time I hear about this mythical "no change" crowd is when the high priests of "Replace People" write articles about them. However, given that these high priests started claiming software engineers would be gone in a year (over a year ago) and I see no signs of that, I'm far from convinced of their ability to predict the future or to separate fact from fiction.

Ignoring the illusory "no change" school, the remaining two schools which do seem to be real are "Replace People" and "Replace Tasks". Both of these categories appeared in the population study of companies that I ran in 2020, finishing in 2021. I've attached the results, and you can read more about it in the comments. As far as I could determine in 2021, AI was leaning towards a replacement of tasks in the next generation of companies.

Please also remember, I was speaking about vibe coding (what we used to call conversational programming) and intelligent agents over a decade ago. So, none of these changes are new or surprising to me. There are a lot of thorny issues we have yet to tackle and a lot of questions that need to be asked, from the loss of human skill in reasoning to the loss in the chain of comprehension and what practices we need to maintain such networks. But this is normal. Practices can take 5-8 years to co-evolve, and we are only a few years in.
Yes, engineering will change. Yes, you should be learning and experimenting with these systems. That's a given. But I have yet to see anything serious to demonstrate that we won't see replacement of tasks and co-evolution of engineering. I've certainly seen companies reducing staff count, only to start rehiring later. Instead, the main thing I've noted over the last year is furious backpedalling on statements of how AI will replace coders, changing them to "10x more productive" (another questionable claim).

My only advice is to learn, to experiment and to be wary. When you read articles claiming that software engineers will be replaced, that you need to adapt immediately to using swarms, or claims of 10x … consider the author, how they are funded and why you are being fed FOMO. When you read articles claiming that this or that is the right practice, be mindful that those practices are still emerging; we are still learning. As with the early days of cloud, there are a lot of chancers about.

1/2 "given that most boards have little to no situational awareness of the economic and technological spaces that they are competing in, then today you could just replace your boards with LLM/GPTs."

~ #SimonWardley

https://www.linkedin.com/posts/simonwardley_can-ai-boards-outperform-human-ones-activity-7394678422418976768-Aiz9/

Can AI Boards Outperform Human Ones? | Simon Wardley

"Can AI Boards Outperform Human Ones?" For those in the mapping community, we covered this nine years ago - https://lnkd.in/e2jyeJG5 ... and yes, given that most boards have little to no situational awareness of the economic and technological spaces that they are competing in, then today you could just replace your boards with LLM/GPTs. It'll be cheaper, and it won't make a great deal of difference in terms of outcome / value (which we rarely ever measure anyway). This should be applied across all executive levels where storytelling governs (rather than any form of awareness), especially since the effect of most CEOs / executives is indistinguishable from random chance (see M. Fitza, decomposition of CEO impact). NB, this should be a replacement, not an assistant.

Such a replacement can also bring additional advantages. For example, we can stop worrying about endless nonsense published in HBR, such as the effect of earlobes on leadership potential - https://lnkd.in/eWyks-qP - and it might even reduce the "fool of a Took" ideas common among technology executives, such as replacing software engineers with LLM/GPTs. So, please welcome our AI overlords as the replacement of the corporate hierarchy. #NoHumanLordsHere

Alternatively, if we do care about human capability or we do value humans in decision making, then we might want to change tack in the West and stop using our education systems to produce useful economic units (focused on market growth) and invest in critical thinking instead. This is something that the Chinese Ministry of Education identified as a core competency almost a decade ago, with the current AI educational programs being part of that syllabus, i.e. students are taught to challenge LLM/GPTs rather than simply obey them. The only downside to focusing on critical thinking is that there is less time for following instructions, which is a problem when we believe obedient workers are the path to market growth.
For those interested in the future of the West and willing to do a small amount of reading, I would suggest E.M. Forster's 1909 classic "The Machine Stops" - https://lnkd.in/ev4s_Hh4 - it has all the electrolytes that you need. Anyway, back to the original HBR discussion on AIs and boards ->

The iSAQB was a sponsor of the "Agile meets Architecture 2025" conference in Berlin, where experts from around the world gathered to share their insights. 💡

We had the privilege of conducting a fascinating interview with Simon Wardley, inventor of Wardley Mapping, in which we discussed the powerful potential and possible pitfalls of integrating AI into company processes. 🤩

Watch the full interview here! 👉 https://youtu.be/Ulo0YlRLrJs

#iSAQB #AgileMeetsArchitecture #AI #SimonWardley #WardleyMapping

Use your loaf and then ask AI! Why relying just on AI in Software Architecture is a risky bet.


Wardley Maps Meets Software Architecture

https://tube.tchncs.de/w/rUUQtLm54bjjJHydHmw8L3


Simple tips for managing any project.

https://swardley.medium.com/simple-tips-for-managing-any-project-b9fc674b93b1

"Once you’ve identified the users and what they need, you need to ask another two questions." -- #SimonWardley

#api360 #wardleyMaps #projectMgmt


AI and the New Theocracies

https://www.linkedin.com/pulse/ai-new-theocracies-simon-wardley-rdmwe/

"There is far too much AI doom for my liking. However, there is one issue that does concern me. It is not about machines but about people. It is almost never mentioned and mainly arises from attempts to solve the above risks. The thing that gives me concern is the rise of a new Theocracy." -- #SimonWardley

#gen_AI #OpenAI #chatGPT
