Don’t Take the Black (Mirror) Pill | Weekstarter 47-2025
https://ahmetasabanci.com/dont-take-the-black-mirror-pill-weekstarter-47-2025/
I've been following the story of the "insect farming" promise for years, on and off. It's the basis of the famous conspiracy-minded carnist talking point. The thing is, it hasn't panned out, just as those who understand the physics and biology expected. There's no free energy machine. And it is also animal farming.
(continues...)
#insectFarming #entomology #BigMeat #agriculture #foodSecurity #technohopium #hype #eatingBugs
RE: https://mas.to/@carnage4life/115554320963141600
“We don’t have a customer that used Milestone and said, ‘Okay, GenAI doesn’t help me, I’m going to revoke all my licenses.’ It’s actually the opposite. They want to try more Gen AI tools.”
In other words: Buy our thermostat; it’s so accurate that nobody has ever looked at it and wanted less air conditioning! #AI #hype
#Quantencomputer: #Revolution or mere #Hype? A short look at one possible future of computing.
Topic series: #Naturwissenschaft and #Technik
Speaker: Prof. Dr. Dr. Thomas Lippert, Modular #Supercomputing and #Quantencomputing, #Goethe_Universität #Frankfurt.

The AI hype is clearly harmful
Today, I think we’d do well to distance ourselves from the AI hype. The slop is real; it has obviously created massive problems, and it is likely to continue to do so. I don’t want to associate myself with something this damaging. Do you?
Shallow ethics
The issues are many.
For one, AI is fundamentally hostile to anyone contributing to the digital commons: AI companies are massively freeloading on published source code, articles, images and any other creative content, without regard for license constraints and without contributing anything back. If AI companies were, for example, funding the open source ecosystems they are DoS’ing, or paying the artists they are copying, the situation might be marginally better. They’re not. The training material these companies are misappropriating should lead to their models being considered ethically tainted.
Next, we have to remember that these models are “grown” on whatever data is fed to them. If that input contains bias, lies, inaccuracies or omissions, then the resulting model will reflect them. Garbage in, garbage out.
And even worse, the resulting model is opaque by design. Any rules, corrections, filters or other efforts to compensate for “weaknesses” are under the full control of the entity growing the model. This puts a massive amount of leverage into their hands; they can color, censor or emphasize any political, social, cultural or even religious agenda they wish! The only choice we have is to accept the models as they are delivered, or to try to polish these turds so that they are a little better for some narrow use-cases. But at the core, it is still a turd.
Lock-in economics
And then there’s the economic aspect. Let’s keep in mind that there are massive investments in AI companies (on the order of hundreds of billions of USD announced), all of which are expected to turn a profit at some point.
We know how expensive it is to train a model, and how error-prone its inferred output is. Even if some of this can be compensated for by spending still more on energy (e.g. by “agentifying” products or adding manual rules to catch the worst output), these expenses WILL ultimately be passed on to the end-user. This is where the Return on Investment is extracted.
How this happens is not a secret: offer the service cheaply or for free, let users and businesses build their workflows around it, and then raise prices once switching away has become painful.
The problem with this picture is that we know the initial investments have been insane, and are scheduled to increase. We know that an enormous share of the costs of these models has been externalized (e.g. in the form of the excessive water and fossil fuels needed to power these systems, the societal damage done when people are replaced by LLMs, the opportunity cost paid by IT students who realize they won’t find work in the field they studied for, the lost business for artists, or the time wasted compensating for and second-guessing output that may contain hallucinations).
We also know who is going to pay in the end – the users and businesses who decide to go “all-in”. At some point, these people will have to ask themselves:
How much am I – or my customers – willing to pay for this slop?
– Random Hapless Rube
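To make the “someone has to pay” point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the total investment, the payback period, the count of paying users, the per-user inference cost) is a hypothetical placeholder chosen only for illustration, not a figure from this post or its sources; the point is simply that the required per-user price scales directly with the investment that has to be recouped.

```python
# Back-of-the-envelope: what end-users would need to pay for the AI
# investments to ever turn a profit. ALL numbers below are hypothetical
# placeholders, not figures cited in this post.

total_investment_usd = 300e9         # assumed: ~300 billion USD of announced investment
payback_period_years = 5             # assumed: investors want their money back within 5 years
paying_users = 100e6                 # assumed: 100 million paying subscribers
annual_inference_cost_per_user = 60  # assumed: ongoing compute/energy cost per user per year

# Revenue needed per user per year just to recoup the up-front investment...
recoup_per_user_year = total_investment_usd / (payback_period_years * paying_users)

# ...plus covering the ongoing inference costs (before any actual profit).
break_even_price_per_user_year = recoup_per_user_year + annual_inference_cost_per_user

print(f"Required revenue per paying user per year: ${break_even_price_per_user_year:,.0f}")
# With these placeholder numbers: 300e9 / (5 * 100e6) + 60 = $660 per user per year
```

Plug in your own assumptions; the arithmetic stays the same, and the conclusion does too: the bigger the investment and the shorter the expected payback, the more each “all-in” user or business eventually has to hand over.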
Hostile rhetoric
AI proponents also tend to use cheap rhetoric to convince others to buy into their message. Why is that necessary? Pushing panic-like FOMO messaging onto unsuspecting techno-optimists is cruel and unnecessary. There’s no need for manipulative language like “Embrace it or get out“. People with good intentions don’t have to resort to hostile language like this!
The AI hype is clearly cruel, irrational and ignorant of the real consequences it creates, and therefore needs to be shut down, or at the very least put on pause.
This particular lemon is NOT worth the squeeze.
If we continue to encourage this insanity, we’re complicit in the waste of resources, attention, life and humanity. THIS IS NOT OKAY.
Favorite resources
Here are some of the resources I’ve used to learn more.
Podcasts
Long form audio or books
Fun & Animation
Further resources
#ThomasKnüwer in #IndiskretionEhrensache:
#SamAltman in court, and what else the #DotcomBlase (dotcom bubble) can teach us about the #AI #Hype
“The language models can no longer be meaningfully improved. Evidence: ChatGPT 5.0 was no longer a big leap; many could not see any progress, and quite a few complained that the new model was even worse than its predecessor.”
#KI #KünstlicheIntelligenz #Technologie #WirtschaftsstandortDeutschland #OpenAI #LLM
https://www.indiskretionehrensache.de/2025/11/ai-blase-bubble/