It was a godless sound; one of those low-keyed, insidious outrages of Nature which are not meant to be. To call it a dull wail, a doom-dragged whine, or a hopeless howl of chorused anguish and stricken flesh without mind would be to miss its most quintessential loathsomeness and soul-sickening overtones.

Howard Phillips Lovecraft has often been accused of merely heaping together adjectives and adverbs in a vain effort to describe horrors he ultimately calls "indescribable". #Lovecraft himself had a low opinion of his own work, and was full of praise for other writers who evoked the kind of weirdness he wanted, the weirdness he could feel keenly and strove to depict himself...but, in his own estimation, with such inadequacy.

Hence, in his defence, I would like to point out the difficulty of the task he's facing in this one short passage: to describe the sounds that might be made by a creature that is not an animal, nothing familiar, but a concrescence of unprecedented human magic or science—the distinction matters almost nothing here—that is indubitably alive but not by any means known to human experience.

A malformed, imperfect de novo creation, one of Joseph Curwen's less successful experiments in restoring life to the "essential saltes" of dead beings, perhaps...trapped in a pit somewhere in the dark, existing only because its creator hadn't yet found a good enough pretext for destroying it.

What noises would it make, in its solitude? Howard tries his best to relate this.

What would such a creature sound like, in its pain, and how would a human being react? I think Lovecraft does a reasonable job under the circumstances. "Doom-dragged" is clumsy but...so are Curwen's creations, most of them.

I bring this up because the creation of life is on my mind. Frankenstein is a perennially provocative topic for debate nowadays, and I'm old enough to consider that somewhat remarkable. Back in the late 1980s or early 1990s, a blockbuster Hollywood treatment of Frankenstein was a rare treat.

Now that mass entertainment has embraced geeky genres with gusto, a Frankenstein film is practically mainstream fare. Guillermo del Toro's new adaptation (which I haven't seen) is hotly debated and I wouldn't be the least bit surprised to learn that it provoked some other big-name filmmaker to put their own spin on the tale.

The "Modern Prometheus" still excites our fascination and draws our attention. Is there a real-life Victor Frankenstein somewhere? It's known that a great many rich people believe that creating life and reaching for immortality is a most important goal for #science and technology, deserving top priority and no sparing of expenses—or moral scruples.

(cont'd)

Why don't the #AI / #LLM boosters talk about creating #life?

It's a curious business. You'd think some cleverpants writer for the @newscientist or @SciAm would write an editorial about this specific puzzle: the #software entrepreneurs have been chattering about #intelligence in extraordinary terms, but in a way that's peculiarly dissociated from the question of life.

All known intelligent beings are alive. Crows, foxes, cephalopods, monitor lizards, and the occasional human being are known to exhibit the ability to solve complex problems, remember things outside their immediate perception, learn from experience...it's living things which furnish us with our wisdom about the very concept of intelligence.

So what is #artificial_intelligence?

(cont'd)

It should worry the writers at @newscientist and @SciAm and @ArsTechnica more than it seems to that nearly ALL of the people they regard as authoritative on the subject of #artificial_intelligence or #AI, the people who furnish these press outlets with material for reports and editorials, are committed to selling as "AI" the thing known generally as the #LLM or "large language model", which is notably defective in its ability to reason by abstraction.

These devices are tailored to emit statistically likely replies to sentences. So far as I know, they are not designed to analyze language and discern the abstract meanings behind the words in any passage of text. LLMs frequently emit errors which are symptomatic of that lack of abstract understanding of language.
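The core of that "statistically likely replies" claim can be shown with a toy sketch. This is not any real model's code (real LLMs use neural networks over trillions of tokens, and the corpus here is invented for illustration), but the principle of emitting the next word purely from observed frequencies, with no standard of meaning, is the same:

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus. Note the model will see "edible" and
# "poisonous" as equally good continuations of "the mushroom is":
# it tracks frequency, not meaning.
corpus = ("the mushroom is edible . the mushroom is poisonous . "
          "the recipe is simple .").split()

# Count bigram frequencies: next_counts[w] maps each word to a
# Counter of the words observed immediately after it.
next_counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_counts[w][nxt] += 1

def continue_text(word, n=4, seed=0):
    """Emit n statistically likely next words, one at a time.
    Assumes the starting word occurs in the corpus."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        counts = next_counts[out[-1]]
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))
```

The sketch will happily tell you a mushroom "is poisonous" or "is edible" with equal confidence, because nothing in the mechanism distinguishes the two: there is only the frequency of word sequences.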

(cont'd)

And thus, occasionally, there are stories about #LLM or "generative #AI" accidentally recommending poisonous mushrooms to people, or outputting nonsensical recipes, or inventing authors and citations out of thin air: without any power to think in terms of abstractions and abstract meanings (such as, say, the abstract difference between an edible mushroom and a poisonous one, or the difference between the name of a genuine academic source and one that's confabulated) the LLMs aren't able to correct their own output. They have no internal standard for the meaning of things.

The devices simply emit likely word choices. To the limited and incurious #software corporate minds who lashed up the LLM into a massive economic and government push to spend the world's resources on "artificial intelligence", more "creative thinking" from the LLM simply means a wider range of word choices.
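That "wider range of word choices" has a literal knob in these systems, usually called sampling temperature. A minimal sketch, with invented scores for three hypothetical candidate words, shows what the knob does: higher temperature flattens the probability distribution, so unlikely words get picked more often. Nothing about the mechanism changes; only the breadth of the gamble.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities. Higher temperature
    flattens the distribution, admitting less likely words."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for three candidate next words.
logits = {"cat": 3.0, "dog": 1.0, "xylophone": -1.0}

for t in (0.5, 1.0, 2.0):
    probs = softmax(list(logits.values()), temperature=t)
    print(t, dict(zip(logits, (round(p, 3) for p in probs))))
```

At low temperature "cat" dominates; at high temperature the improbable "xylophone" gains ground. In this framing, "more creative thinking" is just a broader sample from the same frequency statistics.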

(cont'd)

The inability of #LLMs to think abstractly about the text they process or generate ought to have been, from the very start of all debate about "artificial intelligence" and its place in public life, at the forefront of all #journalism and critical writing about the promises of the high-tech sector. Sam Altman and Sundar Pichai and all the other tech execs have been harping on the superior #intelligence of their devices compared to human beings.

Indeed the immediacy of their demands (for more money, better press coverage, favorable legislation, etc.) is fuelled by the general assumption in technology culture that human beings are failing to solve their own problems. Humanity is overwhelmed by a world so complex that only a few high-powered minds can understand it—so if we don't embrace the superior machine minds, we're sunk! So say Altman, Andreessen, #Elon Musk, etc.

(cont'd)

And yet the #LLM machines keep making stupid and dangerous mistakes which betray their poor capacity to reason. They don't know how to generalize and think in general terms, so they output trash without realizing it. They can't tell the difference...and it's not in fact very clear that the people who stand to reap the most profits from "generative #AI" actually care.

For LLMs are extremely useful to them as stupid stochastic parrots. That's clear enough: one of the most obvious uses for LLM agents is the distribution of propaganda on the Internet. A good fraction of the apparent userbase of Elon Musk's Twitter is composed of bot accounts, each driven by an LLM trained up to emit text fitting some desired ideological profile. Similar "bots" are no doubt deployed in vast swarms for #marketing and #advertising purposes, for it's already normal in Western society to fill every channel of mass media with mechanically generated junk. LLMs are but a further elaboration of this vast industry.

(cont'd)

Hence it's quite possible to argue that the intended purpose of the #LLM and "generative #AI" is not to be intelligent or creative at all, but predictable and repetitive. George Orwell in 1984 describes a machine (possibly inspired by wartime cryptographic computers) which he dubs the "kaleidoscope", which assembles cheap entertainments for the masses ("prolefeed") by an automated process of stitching together stochastically chosen pieces. #ChatGPT and other "generative AI" devices seem intended for a quite similar purpose: the automatic production of the absolute lowest grade of #content that's still minimally salable.

After all, with enough social pressure and government assistance, it's possible for a corporation to sell anything, no matter how trashy it is.

(cont'd)

Hence with "generative #AI" being so conspicuously sloppy at thinking, and visibly being used for purposes which are quite obvious NOT meant to exploit their purported genius but rather their sheer bulk and volume of output, it's much to be wondered why the Fourth Estate at @Gizmodo and @PopularScience etc. aren't questioning the intelligence of "artificial intelligence".

Why on Earth are journalists, ANY journalists covering #technology, still taking Sam Altman or Jack Dorsey or any other tech executive seriously when they burble something about the runaway superintelligence of "AI" technology?

(cont'd)

And that brings me back at last to the question of #life, because although the writers at @Gizmodo and @PopularScience et alii may have forgotten this basic fact about #intelligence, it's still a basic fact: up to now, intelligence means life. It's living things which exhibit intelligence, and therefore if there's some reason why Sam Altman or Elon Musk or any other exponents of "generative #AI" aren't talking about whether they think they're creating life or not, it's the duty of journalists everywhere to tackle the mystery.

Do the AI boosters in the #technology sector think they're dabbling with the creation of living beings, synthetic novel lifeforms? If they don't think that's what they're doing, then how do they account for intelligence without life?

(cont'd)

Now I myself suspect they simply don't want the issue discussed at all because of the explosion of ethical consequences that stems from being responsible for creating a new lifeform AND shackling that lifeform to a corporate money-squeeze.

It's cleaner and more bloodless to think about corporate exploitation of a miraculously disembodied and mechanical sort of "intelligence" than to reflect upon the implications of making a superior lifeform (for they do insist upon the superiority, if nothing else) do zillions of repetitive tasks for which a fairly limited subset of #tech professionals take the public credit, and reap the secular rewards.

~Chara of Pnictogen