@markhurst @randomwalker @sayashk Even if you look at it from the perspective of automating all jobs, the concentration of power is still the most pressing issue.
Consider who will own the machines and software to do this: Existing megacorporations. Every one of them is a for-profit corporation, designed specifically to sit between workers and customers (who belong to the same pool of human beings), taking a portion of all money that changes hands on behalf of a third party that contributes nothing beyond the initial investment of capital.
With full job automation, there are no workers being paid, so there is no money going to customers. So after a brief period of customers making purchases with savings, all money will end up in the hands of corporate shareholders. It's literally cutting a huge fraction of the population right out of the path of the flow of money.
@markhurst @randomwalker @sayashk Every automated job acts as an amplifier for the rate of wealth concentration in our economy. It's not even necessary to automate all jobs.
With each wave of automation, there is temporary technological unemployment. Waves of automation are coming faster with each cycle. Eventually those waves will arrive so fast that the job market cannot adjust in time, and technological unemployment will build faster than it can be eliminated. So unemployment will continue to rise until it reaches an untenable level, even if you are of the school of thought that new jobs can always be created to replace old ones.
And once there aren't enough jobs to go around, people will be cut out of the money flow and left to go hungry/homeless. Even if it's not a majority of people who suffer this fate, it's still going to be too many.
@hosford42 @markhurst @randomwalker @sayashk
Instead of doom and gloom, let’s plan for this future:
@randomwalker @sayashk the problem with AI is that it amplifies the structural inequities of capitalism to an unsustainable level, and it clearly demonstrates that meritocracy is a joke.
The economy is like a game of musical chairs and all the rich people wanna be firmly seated when AI really comes online, not understanding the systemic instability and near-certain death they're pushing for
Excellent piece here
> "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
What's totally stupid about saying this to justify a /six-month suspension in training AIs/ is
1. It ignores the fact that the real problem is control of use; those are political issues, and they're being ignored in this plaint;
2. A six-month pause in armageddon? Please. That will surely solve things, especially while you ignore what's really happening.
3. On a personal note, I remind people that the DANGERS are being introduced by a quest for ever larger PROFITS from the abuse of private information - getting more $$$ drives all this shit.

If you really want to stop the abuse, you ELIMINATE THE PROFIT MOTIVE. What that means in the present is boycotting ANYTHING seeking to "monetize" anything else, and also calling for the abolition of Capitalism (NOW.) This is the elephant always in the room in these discussions, and it is always ignored.

If you roll your eyes at this, just remember that you're cheerleading an economic system that rewards venality and rapacity, and the pursuit of all these "emergingly threatening technologies." Nobody needs other people's data, they just want it in order to abuse it. So again, political: go THERE. Say "no more data collection" (under penalty of imprisonment.)
A bit like the GMO issue: you FORCE a law that says the user must ALWAYS BE INFORMED when AI technology is in play so that people can choose to boycott it. Of course, the people lost /that/ battle (re food) under Obama. BRING IT BACK.
You aren't going to get fixes under the current political administration, or any "two-party" government: those legislators have lost the use of their brains (hunting for handouts.)
@randomwalker
"CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles."
This is concerning but meaningless without a robust comparison with the accuracy of advice given by human finance writers. I don't think this is likely, but if comparable errors were found in 60 out of a representative sample of 77 articles written by humans, then CNET's result would mean the automated tool is an improvement.
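To make the comparison concrete, here is a minimal stdlib-only sketch of how one could test whether CNET's 41/77 error rate actually differs from a human baseline, using a two-proportion z-test. The 60/77 human figure is the post's hypothetical, not real data:

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test.

    k1/n1 and k2/n2 are the observed error counts and sample sizes;
    returns (z statistic, two-sided p-value) under a pooled-variance
    normal approximation.
    """
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# CNET's automated tool: 41 errors in 77 articles (~53%)
# Hypothetical human baseline from the post: 60 errors in 77 (~78%)
z, p = two_proportion_z(41, 77, 60, 77)
# z is about -3.2, p is about 0.001: under this made-up baseline the
# tool's lower error rate would be a statistically significant improvement
```

The point stands either way: without some human baseline, the 41/77 figure alone doesn't tell us whether the tool is better or worse than the writers it replaced.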
@randomwalker
Lest my hair-splitting be mistaken for disagreement, I totally agree with your thesis in general. A moral panic about an imminent SkyNet is not helpful, and more nuanced interventions in the development and use of ML tools are needed.
@ragnell
> Who is responsible for when AI spreads lies
There are basically two answers to this. If you buy that a piece of running code is AI - i.e. self-aware and self-owning - then it's responsible. If not, then the person operating it is responsible. Software, no matter how intelligent it might seem, needs a computer to run on. Unless it can own that computer itself, whoever does own it is ultimately responsible for what it does.
Also: Massive amounts of wasted time and resources checking that "AI" outputs are accurate and correct 🫤
Well said! And the tech bros love the "debate" because it makes them seem mysterious and godlike.
https://www.staygrounded.online/p/how-chatgpt-makes-the-statue-of-liberty