The state of AI

At the end of his ground-breaking novel The Shockwave Rider, John Brunner writes:

“There are two kinds of fools. One says, ‘This is old, and therefore good.’ And one says, ‘This is new, and therefore better.’”

This captures the current discourse around AI remarkably well.

On one side, we see an influx of investment, personnel, and strategic focus that seems, at times, detached from economic fundamentals.

On the other, there is a rejection of the technology that goes beyond skepticism and occasionally borders on the ideological, often accompanied by claims of a uniquely human superiority that is treated as self-evident rather than examined.

This text is an attempt to step outside both positions. The goal is not to defend or attack AI, but to understand it: to explore why it is simultaneously glorified and vilified, and what a more realistic trajectory might look like.

There is, of course, much more to AI than its generative capabilities. But much of the current discourse collapses these distinctions, and for consistency with that discourse, this text will do the same.

1/9

The curse of complexity

To explain the fascination with AI, we first need to understand what I call "the curse of complexity" of human societies.

It is a well-known phenomenon that most societies build up their inherent complexity faster than they develop their capabilities to manage it. Why is that?

  • For a society to work, it needs to shift from direct conflict to institutionalized conflict resolution. This implies laws, regulations, contracts and governmental bodies.
  • But every conflict resolved by institutions leaves a residue. So we add exception rules, escalation paths and arbitration layers. Every conflict resolution adds "state" to the system.
  • Fairness creates complexity faster than efficiency removes it. Everyone says “Do the simple thing”, but in reality they want “Handle every edge case so no one screams”. So systems evolve toward inclusiveness, auditability and defensibility, and as a result become harder to understand.
  • The resulting mistrust is a complexity multiplier. Everything must now be documented, verifiable and approved. Therefore we add controls, compliance layers and reporting structures.
  • Increasing specialization fragments conflict resolution. As expertise deepens, no single actor sees the full conflict landscape. Each domain can only optimize locally. The system becomes a permanent negotiation machine.
  • This creates path dependence: you can’t roll back conflict resolution. Once a compromise is encoded, removing it would reopen the original conflict, and that is often perceived as more expensive than living with the complexity. This creates layers of legacy.
  • In the end we have a hidden, self-reinforcing feedback loop: more complexity leads to more misunderstandings, which create new conflicts, which lead to more rules, which produce more complexity. (A toy model of this loop follows below.)
  • People rarely experience themselves as complex. Only others are. And so AI becomes attractive not because it reduces complexity, but because it promises to manage those others.
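
To make the dynamic concrete, here is a minimal toy model in Python. Every parameter (conflict_rate, rules_per_conflict, complexity_per_rule) is invented purely for illustration; the point is the shape of the loop, not the numbers:

```python
# Toy model of the curse-of-complexity loop described above.
# All parameters are made up for illustration; this is not an empirical model.

def simulate(steps: int = 10,
             conflict_rate: float = 0.10,       # conflicts spawned per unit of complexity
             rules_per_conflict: float = 1.0,   # each resolved conflict leaves rules behind
             complexity_per_rule: float = 0.5) -> None:
    complexity, rules = 10.0, 0.0
    for step in range(steps):
        conflicts = conflict_rate * complexity      # more complexity -> more misunderstandings/conflicts
        new_rules = rules_per_conflict * conflicts  # institutionalization: every conflict adds "state"
        rules += new_rules
        complexity += complexity_per_rule * new_rules  # the rules themselves add complexity
        print(f"step {step:2d}: complexity={complexity:6.1f}  rules={rules:6.1f}")

simulate()
```

Note that no term in the model ever removes complexity: even with modest parameters the loop compounds by a few percent per step, which mirrors the path dependence described above.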

2/9

AI as a complexity management tool

For someone in a high-level decision-making role, complexity typically manifests in several forms:

  • Overwhelming input: too much information, too many stakeholders, too little time.
  • Contradictory input because of conflicts: Legal vs. Engineering, Security vs. Usability, Cost vs. Reliability.
  • Humans add complexity through self-interest, political views, emotional issues and personal failings.
  • Justifying decisions takes more effort than making them.
  • The process of conflict resolution is perceived as slow, cumbersome, and holding the decision-maker back.
  • Complexity creates cognitive load and stress when exploring solutions, as new conflicts can emerge.

AI seems to resolve these problems:

  • It makes the complexity digestible by creating summaries and abstractions.
  • It can generate plausible compromise language and reframe contradictions as trade-offs.
  • It feels much easier to discuss a contentious decision with an AI taking the opposing view than with a personally invested human. The human complexity disappears.
  • A decision taken with AI assistance feels well-founded and defensible.
  • The alignment of interests feels faster (on the surface).
  • Solutions can be explored in an environment where no new conflict can spawn.

This is the siren song of AI for decision-makers.

The pain of complexity is felt so acutely that these promises drive adoption, even when the limitations are already visible.

And people rarely see their problems as personal. They see them as universal, and once they find a perceived solution, they feel compelled to apply it to everyone else.

3/9

LLM generality comes at a price

LLMs are attractive because they appear to solve many problems at once. But they are not purpose-built for any of them.

Their strength is generality. Their weakness is exactly the same.

  • They approximate.
  • They generalize.
  • They smooth over operational details with fluent language.

This makes them useful for handling the appearance of complexity, but limited when precision actually matters. They can help navigate complexity, but they are not designed to resolve it.

This limitation becomes more visible the closer you get to the operational level. Where precision matters and consequences are immediate, the trade-offs are harder to ignore.

Pushing LLM-based solutions into these contexts does not just expose their limits. It risks eroding the credibility of both the tool and the manager advocating for it.

Nothing damages trust faster than a solution that fails exactly where the problem is real.

4/9

Hidden costs of generative AI usage

One of the more subtle problems in the current wave of AI adoption is that we are not seeing the real cost of using it.

We are still deep in a hype phase where a significant part of the cost is not borne by users, but by investors. Pricing is shaped less by sustainable economics and more by market capture strategies.

The result is a distorted cost-value perception: AI appears cheaper, more efficient, and more broadly applicable than it is likely to be under normal economic conditions.

This has predictable consequences. Use cases that look compelling today may not hold up once pricing reflects actual costs. Efficiency gains are overestimated, trade-offs are underexplored, and decisions are made on assumptions that may not be repeatable. Early adoption feels like an obvious win, largely because the bill has not arrived yet.

At the same time, the cost is not only financial. It is also physical. AI systems consume compute, memory, and energy at scale, and these resources are finite. This creates new and largely unmoderated conflicts over resource allocation. We are already seeing tensions between different domains competing for the same underlying infrastructure: AI workloads versus gaming, rising hardware costs driven by demand for accelerators and memory, and growing concerns about energy consumption and environmental impact.

These conflicts are not yet governed in any meaningful way. They are neither transparently priced nor deliberately managed. Instead, allocation is driven by market pressure and amplified by hype. Costs are shifted outward to adjacent domains or deferred into the future.

What currently presents itself as efficiency is therefore, at least in part, a combination of subsidy and externalization. AI does not eliminate the cost of complexity. It changes where that cost appears, and who is expected to pay for it.

There is a second-order effect that is even harder to ignore. The scale of current investment into AI infrastructure and development is so large that even in an optimistic scenario, where the technology delivers on most of its promises, it is not obvious how a matching return on investment is to be achieved.

The expectations embedded in these investments assume not just success, but sustained and extraordinary value extraction at scale. It is difficult to see how this could happen without a significant shift in the balance of power. Most people may not understand the underlying numbers, but they can sense the implications, and that creates a quiet sense of threat.
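
To make the scale problem concrete, here is a deliberately crude back-of-the-envelope sketch. Every number in it (capex, horizon, margin) is an invented placeholder, not an actual figure; the point is the shape of the arithmetic:

```python
# Hypothetical numbers only: what annual revenue would be needed to pay back
# a given AI infrastructure investment within a few years?

capex = 400e9    # placeholder: cumulative AI infrastructure investment, USD
horizon = 5      # placeholder: years in which investors expect payback
margin = 0.30    # placeholder: operating margin on AI revenue

# Payback condition: margin * revenue * horizon >= capex
required_annual_revenue = capex / (margin * horizon)
print(f"required annual revenue: ${required_annual_revenue / 1e9:.0f}B")  # ~$267B per year
```

With placeholders anywhere near the currently reported investment levels, the required revenue implies sustained, extraordinary value extraction.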

That threat is amplified as AI ventures into areas that were deemed safe from mechanization.

5/9

On creativity and the “human exception”

A common claim in the discussion around AI is that only humans can be truly creative. This claim usually rests on the assumption that human creativity contains something irreducible, sometimes framed as a kind of “spark” that cannot be replicated or mechanized.

The difficulty is that this assumption is asserted more often than it is demonstrated. There is no clear model of what this supposed spark consists of, how it operates, or how one would even recognize its absence. It functions less as an explanation and more as a boundary marker: a way of separating human output from everything else.

At the same time, the criticism directed at AI often focuses on the idea that it “merely recombines existing material” or “borrows” from prior work. This is presented as a disqualifying flaw.

But human creativity has always operated in much the same way.

Writers, artists, and thinkers continuously draw from a shared cultural corpus. Themes, structures, and motifs are reused, adapted, and reinterpreted across generations. Much of what is considered original is, on closer inspection, a transformation of existing material rather than creation from nothing.

Homer is a useful reference point here. For centuries, authors have drawn on his narratives, structures, and archetypes. This is not considered theft, but tradition. It is framed as influence, homage, or participation in a canon.

The difference, then, is not that humans create from nothing while AI copies. Both operate by building on what already exists. The distinction is one of framing and attribution, not of fundamental mechanism.

What AI challenges is not the process of creativity itself, but the narrative we have constructed around it. At its core, the debate is about ownership and status, and people tend to defend both with remarkable intensity.

6/9

A brief aside on training data

Reusing existing material is hardly unique to AI. Humans have been doing so for centuries, with broad cultural acceptance.

The acquisition of training data, however, exposes a different kind of tension. Copyright has historically been enforced vigorously when humans are involved. In the case of AI, enforcement is uneven, and the boundaries are still being negotiated.

At the same time, the ongoing hunt for training data creates incentives to extract content at scale, often without clear permission or compensation. The result is not just legal ambiguity, but a growing and largely unmanaged conflict.

7/9

Perceived threats and unresolved questions

Taken together, the previous sections begin to explain why the current wave of AI adoption is accompanied by a diffuse but persistent sense of unease.

Part of this unease is economic. There is uncertainty about the sustainability of current pricing, about who ultimately bears the cost, and about whether the promised returns can justify the scale of investment. These are not abstract concerns, even if they are often only felt rather than articulated.

Part of it is structural. The expectation of large-scale value extraction implies shifts in power that are difficult to predict, but easy to sense. Control over infrastructure, models, and data is unlikely to remain evenly distributed.

Part of it is operational. The limitations of general-purpose systems become visible at the point where precision matters most. When tools are pushed into contexts where their trade-offs are obvious, trust is not just placed at risk, but actively eroded.

Part of it is social. AI changes how decisions are made, how responsibility is assigned, and how humans interact with each other. It introduces new forms of abstraction that reduce friction in the short term, while potentially weakening understanding and accountability over time.

And part of it is legal and ethical. The acquisition of training data, the ownership of outputs, and the boundaries of acceptable use remain contested. These are not settled questions, but active fault lines.

None of these issues have been explored here in depth. They deserve their own discussion.

What matters for now is that these pressures do not remain abstract. They accumulate, and they are felt.

When people feel overwhelmed and threatened, they do not respond with nuance. They respond with simplification, categorization, and force.

8/9

Summary

Companies introducing new technologies have historically resisted regulation, or at least delayed it for as long as possible. AI is no exception. If anything, the pace and scale amplify this tendency.

My assessment is that the absence of effective regulation, and more importantly the absence of mechanisms to moderate the conflicts AI creates, is the single greatest risk to its long-term adoption.

Without such moderation, the tensions outlined earlier do not dissipate. They accumulate, become visible, and eventually dominate the perception of the technology.

The real risk to AI is not technical failure, but unmanaged conflict. If people experience AI as a loss of agency, acceptance will not degrade gradually. It will collapse.

There is also a more practical constraint that is often overlooked. The current discourse tends to treat AI as a single, general-purpose solution that can be applied equally well to very different problems: writing a poem, assisting with homework, or navigating the complex and often conflicting interests within an organization.

There is little reason to expect that the same tool will excel across such fundamentally different domains. Systems that are good at generating language are not automatically capable of managing real-world conflict, accountability, and trade-offs in high-stakes environments.

If AI is to fulfill its broader promises, increasing specialization is the more likely path. This, however, introduces additional layers of complexity: more systems, more interfaces, more integration points, and with them new failure modes and new conflicts.
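
The interface problem alone is easy to underestimate: with n specialized systems, the number of potential pairwise integration points grows quadratically. A minimal illustration:

```python
# Potential pairwise interfaces between n specialized systems: n * (n - 1) / 2.
# Every interface is a place where new failure modes and conflicts can appear.
for n in (2, 5, 10, 20):
    print(f"{n:2d} systems -> {n * (n - 1) // 2:3d} potential interfaces")
# 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190
```

And this counts only pairwise interfaces; real integrations also involve shared data models, orchestration layers, and versioning, each adding conflict surface of its own.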

Which brings the argument full circle.

9/9

Replies

@masek An important aspect of my sense of unease regarding "AI" is that the people leading the most important companies, and those advocating most for widespread adoption, seem to be mostly psychopaths ("introspection is a modern concept", "my tool will let you put those pesky educated women in their place", to paraphrase just two of them). Their explicit goal is to destroy our pluralistic societies.

The Shockwave Rider is prophetic. Brunner understood that surveillance capitalism was inevitable - the real question was whether we could build tools for escape before it was too late.

@albert_inkman There are many kinds of things labelled AI, and some are very useful. The LLM / generative AI is a dead-end toy that can never be useful, as you can't tell, without being an expert and doing research, whether the output is plausible nonsense or "stolen" via the so-called "training". Even if it were reliable enough, the LLM can never be cost-effective and make a profit.

@masek except 'AI' makes people stupid and is ruining the world.

@LanceJZ Both are IMHO effects of what I describe, not inherent properties.

The destruction is an effect of hiding the real costs of AI and the complete lack of conflict moderation.

The negative effect on intellectual capabilities is a direct result of using an unsuitable tool for the wrong task.

I have a rough plan for another post in this series where I try to drill down on those aspects.

[Linked article: "Adults Lose Skills to AI. Children Never Build Them." (Psychology Today): "Discussions of cognitive offloading often miss a critical distinction: What AI does to a 45-year-old's brain is categorically different from what it does to a 14-year-old's."]