Once again Ted Chiang has it exactly right. The immediate danger from #AI is not that it will become sentient and do whatever it wants. The danger is that it will do what it’s being designed to do: help rich corporations destroy the working class in pursuit of ever-greater profits and thus concentrate wealth in fewer and fewer hands.

https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

@JamesGleick
It's going to create mass violence like nobody's business. You can't take away people's identities without consequences.
@JamesGleick Chiang is unmatched in the current generation of SF writers and thinkers...thanks for the link
@JamesGleick if the working class is too poor to buy their shit, then there are no more profits

@JamesGleick

I assume that Google, Facebook, and Microsoft have accidentally killed themselves. Since LLaMA was leaked, the cost of creating an LLM has fallen to around $100 (as described in a document leaked from Google). Now there are 8 billion potential competitors to Google, Facebook, and Microsoft (provided they keep their business strategy WRT LLMs).

#dotComStartupsFinallyJumpedTheShark

@Life_is @JamesGleick The code to train the model was always open source. What leaked was the weights, which let you use their exact trained model but do little to help with training new variants. The code for setting up the structure of the model and training it is what really changes the cost of a new model, because at that point the job is dataset assembly rather than R&D.

@danielleigh @JamesGleick

It was open source in principle, but not actually available until it was leaked (and the leak was legal, because it was open source).

Whatever the details: it is as if someone invented the automobile, patented it, and the patent expired unexpectedly after a month.

@Life_is @JamesGleick the code was released in February, and the weights weren't leaked until March. The training dataset remains private to Facebook and the weights are simply their trained version of the model. They intentionally released the code when they announced the model. This is the part that makes it easier for others to train their own models, as the hard part of building neural nets is figuring out the architecture and training strategy.
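The code-versus-weights split described above can be sketched in miniature. This is a toy, framework-free illustration, not anything from LLaMA itself: the one-parameter "model", its numbers, and the file name are all invented for the example.

```python
import json

# The "model code": the architecture/logic, publishable as open source.
def model(x, weights):
    return weights["w"] * x + weights["b"]

# Training is what produces the weights -- the expensive artifact.
trained = {"w": 2.0, "b": 1.0}
with open("weights.json", "w") as f:
    json.dump(trained, f)

# With only the public code, you get untrained behavior...
untrained = {"w": 0.0, "b": 0.0}
print(model(3, untrained))  # 0.0

# ...with the leaked weights file, you reproduce the trained model exactly,
# but you still learn nothing new about how to train a different variant.
with open("weights.json") as f:
    leaked = json.load(f)
print(model(3, leaked))  # 7.0
```

The point of the sketch: the function definition (the "code") and the dict of learned numbers (the "weights") are separate artifacts, and possessing one is not the same as possessing the other.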
@JamesGleick More to the point, it'll do exactly what it was trained to do rather than what the developers thought it would do.
@JamesGleick 100% correct. AI seems like magic. It's important we look behind the curtain.
@JamesGleick
It can also flood the "information space" with utter nonsense, complete with confirming links, making it hard to actually look anything up and find an answer you can trust.
@JamesGleick would have to agree with this conclusion

@JamesGleick
I'm an innovationist, but not an economic or capitalist one.

The Luddites would be up in arms today if they saw this.

@JamesGleick

I'm only in favor of the AI efforts that disassemble and eliminate corporate and billionaire wealth and power. What, you say there are none?

@JamesGleick Yes. Anything that allows companies to reap profits without employing people will do that.

Employing people will simply be "too expensive". (They'll still want you to buy their products, though.)

@JamesGleick I'll admit to a little anticipative schadenfreude at the prospect of McKinsey being driven out of business by AI.

@JamesGleick What they want is to bring back feudalism. They want a society of leaders, donors, and serfs. The serfs need to be unhealthy, uneducated, religious, and the women pregnant. And the worse the planet gets, the more they will need a labor supply to keep their climate-controlled habitats in working order. It's dystopian, but is there anyone out there who doesn't believe the Republicans are dystopian? And the Saudis have already started construction:

https://www.npr.org/2022/07/26/1113670047/saudi-arabia-new-city-the-mirror-line-desert

@JamesGleick Wernher von Braun is credited with saying something like "I simply design the knife, and someone else decides whether they will use it for surgery or murder." He led the early German military rocket programs that produced the V-2s, which were taken over by the Americans at the end of the war and subsequently developed into ballistic missiles. I don't know what the future of AI holds, but its implementation will likely matter far more than its technical capabilities. Just look at today's various learning algorithms, used for everything from advanced scientific research at the better end to Facebook at the worse end.
@JamesGleick
Naive question.
Is it possible for AI systems to cycle themselves into a downward information quality spiral as they consume their own output in a refresh training cycle?
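Not a naive question at all: this failure mode has been studied under the name "model collapse". A deterministic toy sketch of the mechanism, where the frequency-squaring step is an invented stand-in for the mode-seeking bias of real decoders, not any actual training setup:

```python
from collections import Counter

# Each "generation" learns token frequencies from its training data,
# over-samples the frequent tokens (the squaring below), and its output
# becomes the next generation's training data.
def train_and_generate(corpus, n=1000):
    counts = Counter(corpus)
    weighted = {tok: c * c for tok, c in counts.items()}  # mode-seeking bias
    total = sum(weighted.values())
    out = []
    for tok, w in weighted.items():
        out.extend([tok] * round(n * w / total))
    return out

corpus = list("abcdabcaba")            # "human" data: a:4 b:3 c:2 d:1
vocab_sizes = [len(set(corpus))]
for _ in range(10):
    corpus = train_and_generate(corpus)
    vocab_sizes.append(len(set(corpus)))

print(vocab_sizes)  # diversity collapses: rare tokens vanish generation by generation
```

Running this shows the vocabulary shrinking from four tokens down to one: rare tokens get under-sampled, then rounded away entirely, and once gone they never come back. Real systems are vastly more complicated, but the feedback loop the question describes is exactly this shape.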
@JamesGleick It's truly ironic that everything dystopian sci fi predicted when I was a kid is happening now, exactly as described, and nobody cares. It's an example of how we ask permission to care about things, and carefully ignore those that our cultural authorities tell us are impolite to care about.
@RustyRing @JamesGleick I don't think it's that nobody cares; it's that nobody knows what to do, and setting stuff on fire is not most people's preferred alternative.
@JamesGleick Sorry, Ted Chiang has it completely wrong. It would be wonderful if AI led to a huge surge in productivity -- we would not need to worry about inflation for many decades -- but there is little reason to believe that will be the case. But this is the sort of stuff that excites New Yorker readers even if it has no basis in reality.
@DeanBaker13
But isn't the whole goal of this hypothetical surge in productivity that the owners will no longer have to pay any workers?
@JamesGleick
@BrentInMasto @JamesGleick sure, capitalists ALWAYS want to pay their workers as little as possible. That is a given. The question is whether AI is some huge qualitative breakthrough, which will hugely increase productivity growth. I have been hearing this claim literally for decades, and we have not seen it yet. Maybe the techno-optimists will be right this time, but they have a hell of a track record of being wrong.
@BrentInMasto @JamesGleick I'll also add that our period of most rapid productivity growth was 1947-73, which was a period of rapid real wage growth and declining inequality.
@BrentInMasto @JamesGleick There are also tons of things we can easily (logically, not politically) do to affect who benefits from technology, like weakening patent/copyright monopolies. Unfortunately, people who control outlets like the New Yorker and other major media outlets, don't like to see such ideas get attention.
@DeanBaker13
Someone earlier in this thread mentioned that one of the AI LLMs leaked and has gone open source. It is now growing faster than any of the VC backed projects. Here's something on the subject I stumbled across earlier today...
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
@JamesGleick
@BrentInMasto @JamesGleick very cool -- if everyone can get the latest AI, it will be harder for Google, Microsoft, and the rest to make big bucks from it.
@DeanBaker13
Ah, that great post war expansion, strong unions and the GI bill growing the middle class and when CEOs still had a sense of obligation to society. Plenty of other problems then, just as now, but at least the working people were getting ahead.
P.S. - One might also add that the way that growth happened did a lot to bring us to the current climate cusp.
@JamesGleick
@DeanBaker13
Ok, that makes sense. Thanks for the quick reply!
I'm with you on it being pie in the sky at this point. I'd expect fusion power to be commercially viable before we have true AI (which will happen just before Teslas are truly self driving :P )
@JamesGleick
@JamesGleick You’re just NOW realizing that owners are going to automate everything they can?
Asimov, Vonnegut, & Wendy's: I For One Welcome Our Robot Overlords (Culture War Reporters)
@JamesGleick I don't think there's any danger or risk ... like all technological advances, this is exactly what will happen: it will increase the rich-poor gap and redistribute wealth. The only question is the degree to which it will happen, and AI looks like a biggie.
@JamesGleick Computers would have to become intelligent before they become sentient.
Intelligence is an emergent property of some biological systems. ChatGPT and other systems are not intelligent. They are not even close to being intelligent.

@JamesGleick

Exactly so.

It is not AI but data-sorting IT. Whoever controls the Sorting Hat controls Hogwarts.

The inevitable danger is not only corporations, state interests, or other agenda pushers, but also the next generation of freelancers, funded by unfindable finances. Dark-money interests, for example, or ideological cults.

Fortunately those not needing or using technology will not be blinded by nerd and herd 'necessity' …

🏴‍☠️🏴🏳️📵

@JamesGleick I think this was a fear about every new technology, everything that increased production per unit of human effort.
@JamesGleick “Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.” In just a little while this definition will also be true for AI.

@kentbrew @JamesGleick The corporation is the original AI, executing on brains, memoranda and spreadsheets.

Now, with ever-increasing automation moving decisions from brains to silicon, corporations can execute millions of times faster, with thousands of times higher complexity and a fraction of the human controls and oversight. That's the regulatory challenge, and it has been here for decades already, regardless of whether one labels it "AI" or not.

@JamesGleick And the problem with that way of thinking is: if there are fewer people who are able to buy stuff, the profits of big corporations will become very small. The secret behind today's relative wealth is the spread of wealth over a big number of people. But it's a question whether those big, greedy corporations can remind themselves of that…
@JamesGleick
One should really keep both risks in mind. A killer AI is, in my view, a rather small risk, but an important one if it could actually occur. At the moment, though, it is mostly a tech-bro displacement exercise in the face of more realistic doomsday scenarios (climate, etc.).
@JamesGleick I hope politicians are going to consider some serious legislation on AI. For safety, it should be subject to the same laws as humans.

@Threearrows78 @JamesGleick It is not a person. Treating the current machine as a person will be the same mistake as treating the corporation as a person, or a bigger one.

The problem with both is liability. Put more liability and actual consequences on a natural person behind the legal person and you'll see interesting shifts in behavior.

Interesting metaphor in the New Yorker article by Ted Chiang:
'AI is not so much a genie that you can ask many questions, but should be seen as a management-consulting firm, along the lines of McKinsey & Company.' With all the consequences of that.

Thanks for the post.
https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

@JamesGleick

@JamesGleick To add to your point, #AI physically CAN'T become sentient, because AI's meant to *imitate* humans.
@JamesGleick Certainly the definition of "machine learning" has been dumbed down all the way to linear programming.

@JamesGleick

This is why we should all be Luddites. It’s not about hating or fearing technology. It’s about resisting the use of technology to concentrate power and wealth in fewer hands at our expense.

@JamesGleick
China will do it anyway and not cooperate with the rest of the world; Russia will do the same. Single-purpose AI, for control and expansion, is just what they want.
@JamesGleick as tech always does. It might be refreshing and helpful for it to wrest control from our masters.
@JamesGleick a somewhat orthogonal question, but connected in its motivation: why should we not apply the same ethical protocols that we do to animal & human research to 'large' AI research?

@JamesGleick With the recent advances in #opensource #training #techniques, AI will be available to everyone. And if I know human nature at all, it will first be applied by malicious actors, only one of them being capital, as specified here.

The steps society takes to mitigate this should include #verification of humans and verification of accurate reporting.

AI for every kind of social and political mischief imaginable will be available by 2025.

Governments need to act, stop looking at business.

@gimulnautti @JamesGleick @ellent

Verifying that someone is human is surely going to generate a whole lot of business for #biometrics vendors again.

@JamesGleick he's so good at cutting through the noise and finding the heart of the problem.

The comparison to "capital's willing executioner" was particularly apt.