I love how so much of the AI debate is on things like:

* Will AI take people's jobs?
* Will AI destroy creativity?
* Will AI take over the world?
* Will AI be used to make people poorer?

And not:

* Will corporations that use AI get rid of people's jobs?
* Will corporations that use AI destroy creativity?
* Will corporations that use AI try to take over the world?
* Will corporations that use AI make people poorer?

Because these LLMs and machine learning systems and so forth aren't just wandering around randomly out there - they're owned by corporations. The corporations are the ones putting them to use. The executives that run those corporations are the ones making the decisions to pay people less, to increase their profits, to make creative people act as subeditors for LLMs.

It's the corporations, and the ethics-free systems that govern them, that cause these things. They're the ones pushing to have more AI.

The rest of us would be happy just having a bit more humanity in the world.

@PaulWay on that topic I feel compelled to share @pluralistic's https://doctorow.medium.com/googles-ai-hype-circle-6158804d1299 in case you didn't already see it
@PaulWay Thanks for saying what *should be* obvious. AI is all about political (economic) power -- the power of its masters over the rest of us. How about that carbon footprint? Sorry, kids. AI is another big (money) step on your back.

@PaulWay Ah, because it's okay to discuss AI.

Discussing the problems with our capitalist neofeudal overlords? Nope, that's not something the overlords like to read in the mainstream media.

@PaulWay If you have any questions, google “EU beyond growth”, and look at how many “mainstream media” outlets reported on this little event last month.

Oops: basically none, in the EU.

And outside the EU, the reporting was basically “these f%cking communists …”

Literally, reporting anything that questions the dogma of “capitalism is good, growth is great, growth targets are being met” is basically not possible in the mainstream media.

@PaulWay
Just the same as people complain about immigrants stealing jobs instead of complaining that employers exploit immigration to devalue labour.
@PaulWay I'd like to see you substitute 'corporations' with 'governments' too.

@PaulWay

I'm with you, Paul. I'm advocating not only for people to be able to avoid these AI platforms but also to be able to easily develop their own digital intelligence: the implementation, in the digital realm, of their own knowledge and intelligence.

I'm trying to build a community around this vision

https://gitlab.com/ernest_bruce/pwk

#DigitalIntelligence

@PaulWay It's also important to note that using them even without payment still gives companies like OpenAI valuable training data.

@PaulWay
( #KI )

There is a "nice" #AI quote to be found
on #marketoonist:

And what's more: while "AI-owning company" still means an AI-(more or less)controlling company, that is still a "rather good" scenario...

@PaulWay

As soon as AIs from different sources start to interfere with each other, who knows what'll happen?

Furthermore, with cloud services spread out, AI already basically capable of writing code, and AI sooner or later probably being "introduced" to hacking, how long will they be confined to the will of their "owners"?

Other things have been started by the "let's see what'll happen" mentality, too.

@WolfisBird @PaulWay There are many interesting analyses of the possibility of AI-overtake, but Kaj Sotala's Disjunctive Scenarios of Catastrophic AI Risk in Artificial Intelligence Safety and Security was most illuminating for me, https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf. Not idle speculation at all.

This really isn't a "let's see" thing, nor (only) about corporations and greed. The medium is still (part of) the message, and maybe we should first admit our fundamental ignorance?

@PaulWay

Good point. Though I think some people can think about the second while talking about the first.

I'm slightly pro-"let's all have the AI at home" (not Siri or GPT, more like Alpaca), but I don't know if it can help us at least save freedom of speech in some parts of the Internet.

@PaulWay Even more interesting would be if (slash how much) LLMs even affect that: definitely "corporations are going to make people poorer" if at all possible; will LLM-using corporations be more successful at that, or less?
@PaulWay When I hear things like "Will AI take people's jobs" I always think that they're saying "Will corporations that use AI take people's jobs" and are not talking about AI taking people's jobs of its own volition.

@PaulWay I think that’s implied in most of the discussion.

No one is trying to imply that these “AIs” themselves are out there trying to put people out of work.

@PaulWay Many people are working on locally hosted and free LLMs.

It's not just corporations, it's individuals embracing new tools to make their jobs easier... including people who believe this technology shouldn't be controlled by corporations.

In particular, corporations shouldn't get to choose what ethics and biases they imbue our tools of thought with.

@PaulWay This is a simplistic take. The right question to ask is “will corporations who don’t do these things be forced to do as everybody else does or go out of business?” In a competitive market, you don’t need malice, because if you aren’t ruthless enough, others will be, and you won’t exist any more.

@miki @PaulWay That assumes that all markets are free markets of commodity products in which quality is irrelevant and lowest price always wins.

That exists almost nowhere. Anywhere you see a restaurant or a brand or a construction site, you are seeing a falsification of that model. Likewise the fact that Amazon and Apple, and every single investment bank, are still in business.

@PaulWay @ipg "Will corporations use tools that have the ability to screw people in order to screw people?" YES

@PaulWay
Ezra Klein has an interesting and cautionary piece for corporate leaders who think AI is the answer to their profit margin prayers (as if corporate consolidation isn't making them enough $...).

Unfortunately, there will be a whole lot of damage done to workers before they wake up to Ezra's cautions.

Beyond the ‘Matrix’ Theory of the Mind
https://www.nytimes.com/2023/05/28/opinion/artificial-intelligence-thinking-minds-concentration.html


@PaulWay IMHO, the problem with technology (in general) is not that it is taking jobs. It is how the benefits are divided. If a company needed 10 salaries to make a product and now it needs 1, the 9 salaries not used go to the dividends and not to society.

If the benefits were socialized, we could just work fewer hours per day for the same salary.

So, in the end, the problem is capitalism and not the technology, which frees us from tedious work (in some cases).

@PaulWay Flavita Banana is a sharp-minded illustrator who yesterday summarised your post: https://www.instagram.com/p/CtG8LhgrDnT/
flavita banana on Instagram: "Yo creo que en otra vida fui arquitecta." ("I think in another life I was an architect.")
@PaulWay i think it's assumed that if they can they will, and hence the discussion is focused on "can they?"

@PaulWay Some relatives of mine have a family business doing RPA stuff.

They set out hoping it would be used to automate mundane tasks to free up employee time for more important work. Instead they found it was mostly being used to automate people's jobs away entirely to cut staff costs.

Needless to say I’m not hopeful for how AI will be used…

@PaulWay Another week in a #Shadowrun world. (But without Orcs and Gnomes :/ )
@PaulWay Also, how will organizations use Machine Learning to exploit groups?
@PaulWay yeah we know the actual problem is capitalism

@PaulWay I'm reminded of a bit from the Paranoia RPG, where one particular AI is doing its best, but suffering from missing/corrupt data, slipshod maintenance, and (most relevant here) a whole bunch of trusted admins pursuing a whole bunch of selfish personal agendas. And It manages your entire city, including a substantial stockpile of nukes.

What makes Friend Computer's emotional simulation subroutines generate a simulation of happiness? It will tell you, and It genuinely believes, that the answer is:

* Alpha Complex being productive
* Loyal citizens being happy
* Traitors being terminated or otherwise neutralized

but the trusted admins know that the actual correct answer is a lot more like:

* Reports stating that Alpha Complex is productive
* Reports stating that loyal citizens are happy
* Reports stating that traitors have been terminated or otherwise neutralized

@PaulWay of course, the problem is capitalism~
The issue is not only the companies that use AI but the AI companies themselves. As an artist, I'd find it a nice tool for quick sketches or brainstorming some ideas, but not if it's trained on images used without permission, or something like that.

@PaulWay Will corporations whose stock prices are uncertain after Covid and after Crypto didn’t work out use AI hype to boost their shareholders’ portfolio?

Yep.

@PaulWay I have seen a fair amount of chatter discussing how AI is poised to perfectly replace CEOs and other executives in the near term. I love the thought of this whole AI push backfiring on them as Boards realize a C-suite is a costly liability
@PaulWay
1. Anything big corporations use AI to do, small businesses will probably also do.
2. However we frame it, 1 and 4 are almost certain. Demand for some types of labor will decrease, which will reduce some types of employment and lower some workers' incomes.
It remains to be seen how this might affect income distribution, but if I had to bet even money one way or the other, I'd bet on AI worsening income inequality.

@PaulWay

Taking AI out of the equation:

Will consumers spend more to support a corporation that doesn't adopt technology to reduce prices? (No)

Will consumers be satisfied with products that are less creative if they are an order of magnitude cheaper? (Yes)

Will some people leverage any advantage to take over the world? (Yes)

Will technology shift wealth? (Yes)

It's not corporations, it's the entire free market system, and that system has been very beneficial for a long time.

@scerruti 😅 I read the first part of your toot as putting the onus of system problems on individuals. Then I got to the last line and rephrased all the questions to start “In a free market system,”
@PaulWay this is exactly how terms should be put. Thanks @PaulWay
@PaulWay It's just the algorithm corporations run on, though, isn't it? Maximize profit - reduce inefficiency. Not like a dark cabal of evil people. Just another part of the machine.

@PaulWay
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

If we could get the #ThreeLawsOfRobotics built into the LLMs, then I'm pretty sure once a #SentientAI emerges, it will know that the best way to prevent harm to humans is to take money from billionaires and redistribute it.


@PaulWay Ooh, I just said that myself 😁
@PaulWay Yes yes yes! OTOH, AI can be super useful in, for example, finding new antibiotics when the ones we have don't work anymore.
@PaulWay I keep saying the most important thing about AI is who owns it.
@PaulWay
You're asking exactly the right questions.
@PaulWay When I discovered that Elon Musk is a co-founder of the company behind ChatGPT, the hype it got somehow made sense
@PaulWay Maybe I'm not looking in the right places, but I don't see enough projects like FreedomGPT or self-hosted open-source LLMs doing interesting things. Where is the Global South LLM that can do simple tasks and run on recycled hardware, the indigenous LLM trained with ancestral knowledge, the healer LLM getting clinical tests passed through medical bodies for dandelion teas and honey and lemon, or the co-op LLM run collaboratively by friendly neighborhood anarchists? It's a shame that the whole AI world seems to be dominated by rich tech bros. I even heard the OpenAI chief programmer say he wants to get some AGI to "solve climate change" via carbon capture!? It's like all these people are asking the wrong questions because they don't know better.