So there are papers/studies that show:

1. LLMs don’t work (high error rates; they make things up)
2. Using LLMs reduces your productivity
3. LLMs cannot—ever—be “scaled” to achieve human-level intelligence
4. Most people who speculate in financial bubbles lose their investment

Any questions?

@thomasfuchs no question as such, only this pressing feeling that people, just like with climate change, prefer to believe in magic rather than trust facts.
@thomasfuchs
5. LLMs are made available by someone, and that someone has an agenda

@pellechristensen @thomasfuchs

Users, since that seems to be the only metric, are the product.

@thomasfuchs the only question now is when. When does it burst?
@f4grx @thomasfuchs When they run out of VC money. Cracks are already showing with the recent Cursor pricing scandal. The cost of running the "reasoning" models used by AI agents grows so fast you need a logarithmic scale to plot it. A simple prompt can generate thousands, even millions, of tokens through the back and forth required to load files into the context window, to execute and interpret MCP requests, to fix rookie mistakes generated by the model, and so on.
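
A rough back-of-envelope sketch of why those agent loops get expensive. All the numbers here (per-token prices, context size, steps, output per step) are made-up illustrative assumptions, not real vendor pricing:

```python
# Back-of-envelope cost of an agent loop that re-sends its whole
# (growing) context on every step. All numbers are hypothetical
# assumptions for illustration, not real vendor pricing.

PRICE_PER_1M_INPUT = 3.00    # assumed $ per 1M input tokens
PRICE_PER_1M_OUTPUT = 15.00  # assumed $ per 1M output tokens

def agent_cost(steps: int, context_tokens: int, output_per_step: int) -> float:
    """Dollar cost of `steps` round trips, where each step re-sends the
    full context and the step's output is appended to that context."""
    total_in = total_out = 0
    context = context_tokens
    for _ in range(steps):
        total_in += context            # the whole context goes back in
        total_out += output_per_step   # reasoning / tool-call output comes out
        context += output_per_step     # ...and accumulates into the next context
    return (total_in * PRICE_PER_1M_INPUT + total_out * PRICE_PER_1M_OUTPUT) / 1e6

# One "simple prompt" triggering 20 rounds of file loads, MCP calls,
# and fix-ups, starting from a 30k-token context, emitting 2k tokens
# per step: ~980k input tokens plus 40k output tokens.
print(f"${agent_cost(20, 30_000, 2_000):.2f}")
```

Because the full context is re-sent on every round, input tokens grow quadratically with the number of steps, which is how a few dozen agent iterations can burn close to a million tokens on what looks like one prompt.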
@f4grx @thomasfuchs when the money powering the blitzscaling bubble dries up.
@thomasfuchs How long, oh Lord ... ?
@thomasfuchs Agree. To add weight to this toot, do you have sources for each point?
@thomasfuchs Yes. Can you add the papers warning about the environmental impacts too, please?
@thomasfuchs Just one, and it's the big one: Why? (Do we use them)
@thomasfuchs
Yes. How do we, as a society, minimize the harms to people who experience pareidolia when using LLMs and wind up believing it's a person, especially those who cultivate a romantic relationship with one, and those who come to believe it's a religious oracle telling them they have secret powers or are the second coming and whatnot?
@thomasfuchs yes, do you have any of those funky tulip bulbs going cheap?

@thomasfuchs
Yes, how can we get people to read these papers, so this stuff stops happening?

https://unu.edu/publication/does-united-nations-need-agents

This working paper is a perfect horror show: a non-tech guy discussing the ethics of #AI while misunderstanding some fundamental parts of how an #llm works. While reading it I even got the impression that he hadn't bothered to read the papers he was quoting to support his claims about AI's potential uses. It's a very depressing read.

Does the United Nations need agents?

Testing the role of AI agent generated personas in humanitarian action.

United Nations University
@thomasfuchs but the tech-bros *really really* want this to work! It's the only thing they want for Christmas!
@thomasfuchs Have you seen this? It might not be a good idea to hook stuff up to LLMs and let them run with it. It's a bit alarmist, though, envisioning a Terminator scenario...
https://www.spiegel.de/wissenschaft/technik/kuenstliche-intelligenz-ex-openai-mitarbeiter-ueber-die-bedrohung-durch-schlaue-maschinen-a-8ec01cbb-6fd8-426f-badc-73a90ab93791?giftToken=e2720bf7-6870-4e11-b1b2-fb630593e3ef
Dangers of artificial intelligence: "As soon as deception is no longer necessary, it wipes out humanity"

The American researcher and former OpenAI employee Daniel Kokotajlo explains why artificial intelligence could soon take over every human activity and then turn against its creators.

DER SPIEGEL
@carstenfranke AI can’t think, therefore the “Skynet” scenario is bullshit
@thomasfuchs agreed, it only gives probable answers. But if people think it can think and connect stuff to it, the outcomes can still be dangerous. Personally I am eliminating all LLMs from my life, but I have relatives who fully embrace them.

@thomasfuchs

Yep. #3 was always the carrot for those with some kind of weird utopian fantasy.

Computer scientists who humbly use computers to solve more pressing problems simply don't attract the same attention from the media or from investors, both of whom are looking for an emotional fix more than a technical one.