I admit that, through 2025, I have become an #AI #doomer. Not that I believe the LLM-becoming-sentient-and-killing-humanity self-serving hype bullshit for a second. There are much more concrete impact points that may lead to or greatly accelerate several crises, each of which will actively harm humanity as a whole, and certain groups of people in particular:

1. The massive amounts of (dirty) electricity, water, and raw material being wasted on GenAI **will** accelerate the climate crisis, both through the direct pollution from building new data centers, training, and running the models, and by distracting from other efforts.
"AI" will not solve the **climate crisis**. People applying it make it worse.

2. Misinformation (a.k.a. lies) is being produced at a scale never seen before. Our liberal democracies are not prepared to deal with that, and we have already seen increasing distrust of science, journalism, and the concept of political compromise. GenAI is fantastic for generating emotionalizing, polarizing, targeted bullshit. It - unsurprisingly - remains terrible at producing balanced facts and genuinely novel insight.
"AI" (in the form of LLMs) will not help educate the masses to make better decisions. People applying it exacerbate the **political crisis**.

3. What started in Gaza will continue in other regions. Idiotic war hawks will inevitably connect the output of "AI" to target selection and direct forms of physical violence - all in the name of "efficiency" in the business of killing people.
"AI" will not protect our soldiers. Militaries using it simply kill more people, more quickly.

There are actually wonderful use cases for the diverse set of methods currently summarized under the "AI" umbrella, including for scientific discovery. But the current hype around LLMs leaves me with quite a pessimistic outlook. We really, really need to get past the hype and discuss the good use cases rationally and objectively, while we stop wasting insane amounts of resources on those applications that bring much more harm than benefit.

@rene_mobile It seems like a very defeatist attitude. We should focus on counteracting those bad uses, not on banning the technology.

@mihai I didn't mean to suggest that I advocate banning it outright; I don't. The ML models are not intrinsically the problem. How we use them, however, often is - and that is driven by the hype train.

My main point is: the "AI" **hype** needs to stop so that we can have a rational discussion of what the current generation of models is good for and how to mitigate harm. In the current phase of the hype cycle, that debate seemingly can't gain traction. We need to move to the next phase, and quickly, before the various crises become much worse.

@rene_mobile Trying to stop AI hype seems like a hopeless exercise. And frankly I do not see how stopping AI hype would help stop bad uses of GenAI.

The assumption I'm moving forward with is that bad people will use AI for bad purposes for sure, regardless of how much or little of the AI hype is realized. Thus we need to come up with ways to stop or at least mitigate those bad outcomes.

@mihai
But the hype also drives massive silly/stupid usage, and that causes immense electricity, water, and materials consumption. We are already in the climate crisis and simply can't afford that. Blockchain cryptocurrencies drove massive energy demand, and now GenAI is doing the same. We don't have those resources to spare, so it is ethically wrong to keep fueling the hype before we solve the resource problem.

I'm with you that bad usage will happen in any case, but unfortunately I don't have any good ideas on how to counteract either the erosion of trust in public institutions or the use of AI to wage more and bloodier wars. C2PA is great to have, but doesn't help with the flood of textual misinformation that is destroying trust in fact-finding. And the military answer to the other side employing technology has always been to do more of the same oneself...

We should of course still try to mitigate these bad uses, I just really don't know how to do that at this point without effective international regulation. Market pressure drives hype-funded deployment into all those bad cases.

@rene_mobile Economics has lots of solutions for dealing with externalities without assigning moral value to a transaction. So one can imagine a system of taxing the resource usage of (and thus increasing the costs of and reducing the demand for) LLM inference, regardless of whether a given inference is silly or sensible. Note that resource usage is a completely orthogonal problem to that of malicious LLM applications.
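A back-of-the-envelope sketch of how such a resource tax could play out, assuming a constant-elasticity demand model. Every number here (per-inference price, tax level, volume, elasticity) is purely illustrative, not a real measurement:

```python
# Hypothetical sketch of a Pigouvian resource tax on LLM inference.
# All numbers are illustrative assumptions, not real measurements.

def demand_after_tax(base_price, tax, base_volume, elasticity):
    """Constant-elasticity demand: volume scales with price_ratio ** elasticity.

    elasticity is negative: a higher effective price means lower demand.
    """
    price_ratio = (base_price + tax) / base_price
    return base_volume * price_ratio ** elasticity

# Assumed: $0.002 per inference, a $0.001 tax pricing in energy/water
# externalities, 1e9 inferences/month, price elasticity of -1.5.
volume = demand_after_tax(0.002, 0.001, 1e9, -1.5)
revenue = 0.001 * volume  # tax revenue available to counteract the harm
print(f"volume after tax: {volume:.3e} inferences/month")
print(f"monthly tax revenue: ${revenue:,.0f}")
```

Under these made-up parameters, demand drops to roughly half while the tax simultaneously generates a revenue stream - the two levers the economics argument relies on.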

@mihai Oh, I am not talking about morality, but ethics. But I think we are in agreement on most of the detailed points (if not the outlook). Yes, externalities (often a euphemism for real harm inflicted on others) could be addressed by taxing them accordingly, putting a proper price on such use and mitigating the resulting harm through economic means: discouraging silly, harmful use through cost, and using that revenue stream to directly counteract the harm. We just don't do that at the moment. How do you suggest we use economics to mitigate the insane resource usage we are heading towards, without assuming effective regulation?

And yes, those issues are orthogonal (which is why I list them as distinct points). That means they each need to be addressed through different means if you look at them individually. But taken together as symptoms of the same hype, they point strongly towards the underlying cause...