RE: https://masto.ai/@discoverflux/116171948103936275
This was so much fun! Quite the ramble. Thanks, @mattsheffield for having me on!
Computationalist transhumanism is scientific Trumpism.
Due to recent events, it's more important than ever to realize that AI "agents" are not, and won't ever be, proper agents:
https://arxiv.org/abs/2307.07515
#AI is #AlgorithmicMimicry
#OpenClaw #Moltbook are #AlgorithmicMimicry on #Steroids

What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.
#OpenClaw and #Moltbook are not "the first step towards the singularity" unless that singularity involves us all drowning in nonsense and asocial behavior: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me.
Any sane and sustainable society would legislate malicious fake personalities out of existence. With draconian measures.
Shows you just how far we are from a sane and sustainable society.
#AI is #AlgorithmicMimicry on #steroids now...
I didn't expect #WillIAm (of #BlackEyedPeas fame) to have one of the best takes on AI vs. human creativity. But here we are:
https://youtu.be/vWQrC-lG8FQ&t=546
"imagination regurgitation" and a total lack of judgment...

RE: https://mastodon.social/@h4ckernews/115938865404119323
No shit, Sherlock. And good luck!
And, most important of all: generating conscious AI would be a really, really stupid and irresponsible thing to do.
Where I still disagree: #AlgorithmicMimicry is also *not* real intelligence. True intelligence not only solves, but *frames* problems.
But that's a minor quibble.
This is going in exactly the right direction...
But it misses one additional and fundamentally important point: true agency, judgment, creativity, and imagination are only possible if you are a self-manufacturing living system that has to invest physical work into your own continued existence; you can't get these things in an algorithmic framework:
https://arxiv.org/abs/2307.07515
https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1362658/full

As I tend to say: you can create a perfect simulation of your bicycle, just by thinking systematically, but you still can't ride that simulation to work.
I'm amazed at just how many otherwise intelligent people don't seem to understand this simple fact…
Every time you say your chatbot
"thinks, reasons, means, understands, creates"
a little kitten dies.
And every time you say
"AI agent"
a whole species of cute furry animals goes extinct.
SO. JUST. DON'T!
Algorithms have no agency,
don't think, don't understand,
don't generate meaning,
are not creative.
And they never will.
#AI is #AlgorithmicMimicry
I explain it here: https://arxiv.org/abs/2307.07515
and in the UNDP2025 Report (see pictures below).
(Full report: https://hdr.undp.org/content/human-development-report-2025)
Today, the 2025 #UNDP Report was released, with a little spotlight by yours truly on "Humans have agency, algorithms do not" (pp. 36-38):
The 2025 Human Development Report explores the implications of artificial intelligence for human development and the choices we can make to ensure that it enhances human capabilities. Rather than attempting to predict the future, the report argues that we must shape it—by making bold decisions so that AI augments what people can do.