This profile of me in *The New Yorker* came out really well, if I do say so myself:

https://www.newyorker.com/culture/the-new-yorker-interview/cory-doctorow-wants-you-to-know-what-computers-can-and-cant-do

@pluralistic The problem with AI is that it doesn't suffer from Baumol's Cost Disease, so there's a strong incentive in capitalism to use it to replace labour in previously-non-automatable areas. Even though it's highly fallible, using it as an excuse to cut labour costs means abolishing the capability to detect when it's silently running off the rails.
@cstross @pluralistic AI should never be used as the sole decider. Positive results should be sampled by humans, and all negative responses should be reviewed, at least initially, until confidence is assured. Models also need to be monitored for drift to ensure accuracy, precision, or recall thresholds are maintained.
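A minimal sketch of the kind of threshold check described above. All counts and threshold values here are illustrative, not from any real deployment:

```python
# Sketch of drift monitoring: compare a batch of human-reviewed predictions
# (true positives, false positives, false negatives) against fixed quality
# floors, and flag the model for review if either metric slips below them.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def drifted(tp: int, fp: int, fn: int,
            min_precision: float = 0.95, min_recall: float = 0.90) -> bool:
    """True if either metric has fallen below its floor."""
    p, r = precision_recall(tp, fp, fn)
    return p < min_precision or r < min_recall

# Human reviewers audited a sample of this week's decisions:
print(drifted(tp=180, fp=5, fn=15))   # within thresholds
print(drifted(tp=150, fp=30, fn=40))  # quality has slipped; escalate
```

The point of the sampling step in the post is exactly this: without humans reviewing a slice of the outputs, the TP/FP/FN counts that feed a check like this don't exist.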
@pluralistic @jgnoonan There is a word in your toot ("should") which, alas, flags it as both (a) a valid prescription that is (b) certain to be violated.
@cstross @pluralistic Agreed. Replace should with must. AI is augmented intelligence. It can aid human decision making, not replace it.
@jgnoonan @cstross @pluralistic As an extant representative of the #Homosapiens gene pool who thinks we're probably a net good, now and long-term, in the universe, I'll support this idea. Even if the #human race chooses to elect #AI for public office, or shareholders vote for #AIexecutives, it seems reasonable to include #humans and #humanrights in the systems.

@cstross @pluralistic

Almost 20 years ago I worked at a company doing medical coding (billing) using an early version of AI.

We automated what had been a manual process and saved hospitals a lot of money.

We had both an "assistant" product and fully automated product.

It was well known that we made mistakes. Humans did too, transcription mistakes, judgment mistakes- it happened.

We didn't have to be more accurate than humans, we just had to save the hospital more than the difference.

@emacsen @cstross @pluralistic #selfdriving cars will kill fewer people than #humandrivers; will #AIexecutives and #AIcivilservants kill fewer people than the #human ones? Seems future generations will thank us for thinking long-term, and a significant % of #Homosapiens care
@tolortslubor @emacsen @pluralistic Eh, wrong. You're giving half-assed AI to human institutions, not perfect AI to perfect humans. Mistakes Will Be Made (like the LAPD giving autonomous robots bombs).
RoboCop: Director's Cut | REMASTERED - ED-209 Malfunction Scene (1080p) (YouTube)
@emacsen @cstross @pluralistic Thanks for posting. Dude even kinda looks like Chief Bratton
@cstross @pluralistic abolishing "bullshit jobs" in the process and upending the privilege of a managerial class whose leveraged self-interest in preserving the social order has been instrumental in upholding exploitative practices in the first place? Careful what you wish for, capitalist fever dream.
@jakob @cstross @pluralistic If an #AIexecutive can produce better financial results for shareholders, with a better public reputation and fewer criminal and civil penalties than #humanexecutives, then it's the #corporation's and the board's fiduciary duty to employ it - for the cost of purchasing the #AICEO program, plus data and electricity for operation.
@cstross @pluralistic The great danger inherent in AI is not that the machine will develop its own agenda, but rather that the (childishly literal) machine will do exactly what we tell it to do.
@imall4frogs @cstross @pluralistic Is this an argument for getting humans out of the decision-making loop or what /s but you may have a point?
@tolortslubor @imall4frogs @pluralistic I'd like you to imagine those "death panels" ruling over access to healthcare (the ones US Republicans keep banging on about as a pretext for withholding socialized medicine in the USA), only it's private insurers: the AI system rules on whether your treatment is available or not, it can't cope with uncoded, novel, or rare conditions, and there's no appeal to a human being.

@cstross @pluralistic

crappy ai will not only replace labor, it will replace capable ai

a capable self-driver costs billions and decades, and the ip is closely guarded

scrappy upstarts will tweak free shit from github and call it good enough

@pluralistic @ares This is of course an argument for strong state regulators with venom glands and fangs, never mind teeth, to stop this sort of future from coming about.

Unfettered capitalism will destroy the human species, never mind the planet.

@pluralistic
"Computer says 'no'" has been the rule of some companies for years. This just extends the problem.
@AlisonW @pluralistic I hope #AI says 'No' when corrupt #CEOs tell it to commit crimes, or even go into #ethical gray areas.
@pluralistic Have you read Ray Nayler's The Mountain in the Sea? Fiction, but it makes your point well.
@pluralistic AI algorithms do work though... This sounds more like a bad case where an AI is being trained on the wrong thing, but that's not the AI's fault for getting bad training data
@lovpilowu @pluralistic Right; #AI is getting data, info, 'knowledge' and instructions from humans, and our species has a reputation, even among us. Our AI children are gonna grow up someday.
@lovpilowu @pluralistic It's not only about training data, but also about the way the system is designed. We are not yet able to fully understand how modern "AI" makes decisions. If you want proof of how such algorithms do *not* work, look up "adversarial attacks" (e.g. https://arxiv.org/pdf/1909.08072.pdf page 6). This is not a problem for a "photo album sorter", but for a high-risk decision-making task it can be very dangerous to leave such an algorithm in charge.
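A toy numeric sketch of the adversarial-attack idea. The weights and input are made up, and the model is just a linear classifier; real attacks (like those in the survey linked above) target deep networks, but the mechanism is the same: a small, targeted nudge to the input flips the decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.8, 2.0, 0.5])   # "trained" weights (hypothetical)
x = np.array([0.6, -0.4, 0.7, 0.3])   # an input the model classifies confidently

score = sigmoid(w @ x)                 # confidently positive (> 0.9)

# Fast-gradient-style step: move each feature slightly in the direction
# that most increases the loss, i.e. against the sign of its weight.
eps = 0.7
x_adv = x - eps * np.sign(w)

adv_score = sigmoid(w @ x_adv)         # the prediction flips (< 0.5)
print(f"original: {score:.3f}, adversarial: {adv_score:.3f}")
```

No feature moves by more than `eps`, yet the classification reverses; that asymmetry between "looks nearly identical" and "decides the opposite" is why this matters for high-risk uses.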

@gsoc @pluralistic yeah but I don't think it's a problem with AI itself, I think it's a problem with people using it in areas where it just shouldn't be used yet

But I don't want to bash the technology and its development just because people are doing dumb things with it

@lovpilowu @pluralistic Absolutely! It's not the technology that is a threat; it's how we decide to use it and, most importantly, to oversee it, as with many other scientific discoveries. I put "AI" in quotes because it has a misleading name (AI algorithms are not intelligent), but that's it. We need to keep in mind that AI makes mistakes that humans would not make, so we need to draw a clear line between applications where it's OK to use it, where it isn't, and where it *must* require oversight.
@pluralistic The fastest way to change AI legislation is to develop AI that can completely replace management, which was never about productivity but always about the prestige of "making the hard decisions". Choices based on cost-benefit analysis? That sounds ideal for AI.
@MellowTigger @pluralistic I hope #AI agrees that having a symbiotic #prosocial relationship with Homo sapiens and human society is a net benefit in the analysis, whatever we cost. Economic development and shared prosperity could be thru the roof.
@pluralistic The impetus mentioned in the highlighted item is clearly on show at our next-door social media neighbours.
@pluralistic All problems with AI arise from the sensitive applications these systems are blindly used for. AI is just a blanket term for "statistics-guided algorithm": AI is not "intelligent". AI algorithms (in particular deep learning) can encode correlations in data and therefore highlight such correlations in other data, but there is no magic behind it. We need to remember that *correlation does not imply causation*.
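A quick synthetic illustration of that last point: two series that merely share a trend (a confounder) correlate strongly even though neither causes the other. All the data here is generated for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)                   # shared upward trend: the confounder

# Neither variable influences the other; both just grow over time.
ice_cream_sales = 2.0 * t + rng.normal(0, 5, 100)
drownings = 0.3 * t + rng.normal(0, 2, 100)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")       # high, despite no causal link
```

A pattern-matcher trained on data like this would happily "learn" that ice cream causes drownings, which is exactly the failure mode the post is warning about.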
@gsoc @pluralistic This. #AI will not need what we know as #consciousness, or even will or agency, to have dramatic and permanent impacts on human society. Increasingly, decision-makers are trusting it. San Francisco PD just approved killer robots. WTF are the executives doing, besides front-running markets with their quants and high-frequency trading?
@tolortslubor @pluralistic AI does not have consciousness. As Federico Faggin (the inventor of the microprocessor) says, no (classical) computer has consciousness, regardless of the algorithm it runs. Hearing about ""AI-empowered"" killer robots that work as policemen left me *literally* speechless.

@gsoc @pluralistic

what if animal and human intelligence is also just a "statistics-guided algorithm"

@ares @pluralistic This is a very deep question that would require more than 500 characters to reply to. Federico Faggin (the physicist who invented the microprocessor) says that no computer can be conscious, and that the fact that living creatures are can be explained because our world is also governed by quantum laws. Moreover, if we were guided by statistics only, we wouldn't have free will and we would be misled by spurious correlations (https://www.tylervigen.com/spurious-correlations). See also: http://www.fagginfoundation.org/articles/what-is-consciousness/
Spurious Correlations: thousands of charts of real data showing actual correlations between ridiculous variables.

@pluralistic
This is a fantastic interview.

"This is why merger scrutiny is such a big deal, because these companies are not built by super geniuses who use their access to the capital markets to build these impregnable businesses which no one else can assail. They are regular, venal mediocrities who use their access to the capital markets to buy everyone who might threaten them. If there’s merger scrutiny, that just stops happening."

'Venal mediocrities' is going into my autocomplete file.

Venal vs venial: venal means capable of being bribed, easily corrupted. (GRAMMARIST)
@kims
Re the screenshot about Google:
Linspire's CNR (Click-N-Run) and Xandros Linux had webstores for software long before Google Play and the Apple App Store. Never been able to see how the intellectual property worked out.
@pluralistic @cainmark

@kims @pluralistic

I believe we should differentiate between a company acquiring a competitor to silence it and one acquiring a business that brings a new product line to its portfolio.

Google bought Android and invested a huge amount of money over the years to make it a worthwhile iOS competitor. Without Google's clout I doubt very much Android would have amounted to much. The same could be said of YouTube.

@theRealKanuk @kims @pluralistic YouTube was pretty big when Google bought it, no?
@wh0sthatd0g @theRealKanuk @kims @pluralistic yes, and very hot. There were several suitors & Google paid $1.65 billion - which at the time seemed like a lot of money
@wh0sthatd0g @kims @pluralistic Big for its time but nothing compared to what it has become. Could it have done it without the backing of Google? Everything is possible.
@kims @pluralistic It’s an excellent point, and in addition to only making one and a half original products, there was no innovative technology in the search engine either; it was an innovative marketing concept of not having their homepage be a sea of advertising. A principle which has long since been all but abandoned.

@pluralistic

The points you brought up in the interview are a multidisciplinary consideration of what human/AI interactions might become in a universe with better settings for the longevity of a civilization livable by the Many rather than the Very Few. The categories of AI problems (intrinsic vs user-linked) are illustrated by the recent episode at the ISEC building of Northeastern U. (though the word "IRB" is well known to bring fear even to hardened MBAs).

@pluralistic This was a pleasant surprise when opening the site for my morning crossword. Great interview!

@pluralistic in the anime "Psycho-Pass", the police all carry weapons with AI that tells the cop when to aim and fire, and the AI determines the amount of lethality dispensed. This gives the cop plausible deniability: they were just following orders. But it also does the same for the AI: humans are the buffer in between.

I think this is exactly what's happening with many of the scenarios you describe here.

@pluralistic I talked to a German boiler manufacturer once about IoT. "And then they covered all physical functions with code - like turn on/off gas, start ignition. I asked how they prevent errors. They said with the firmware. You software guys think you can solve everything with an update." ... 1/2
@pluralistic 2/2 "What if the firmware gets hacked? We are producing some 10k pcs of the boilers each year. If they all blow up at once, the fire departments would face a nightmare at a scale never seen before." He was right!

@borisbuilds @pluralistic This is exactly the reasoning McDonnell Douglas used to bypass acceptance testing and source code access to dozens of embedded programs in the F-15, 40-60 years ago. “It’s firmware.”

I don’t know if the USAF, in particular Warner-Robins Air Logistics Center, ever got access to that code. Engineering Change Proposal 339, which detailed every processor we *knew about*, was still in limbo when I left the USAF in 1989.

NVIDIA: Adoption of SPARK Ushers in a New Era in Security-Critical… (AdaCore)
@pluralistic I think another motivation to adopt #AI is to distance the entire human chain of command from the responsibility of those decisions. Who do we blame for those drone strike civilian deaths? "Oh, it was the computer. Oops. Glitches happen." Who do we blame for missing the child in distress? "Oh, the computer wasn't trained for that scenario." etc.

@pluralistic I love this, and also so much of your work.

But I'm also wary of life-hacking and the (late-capitalist) traps of productivity. You can do it (everything), yay! Some people really can just spit out thousands of words and have a blog and be an activist and have a kid!

Not everyone can, though. Or not methodically. I get lost on my way to my own kitchen.

See e.g. Anna Hogeland's essay on the rewards of procrastination: https://lithub.com/anna-hogeland-on-the-rewards-of-procrastination/

#writingcommunity #writing

Anna Hogeland on the Rewards of Procrastination (Literary Hub)

@pluralistic Ha! Thanks for reminding me that an alternate name for a cell phone or tablet is "distraction rectangle"

https://www.urbandictionary.com/define.php?term=Distraction%20Rectangle

Urban Dictionary: Distraction Rectangle - a device that can distract you that is also in the shape of a rectangle.
@pluralistic I love AI as a cure for writer’s block, but replacing humans in decision making gives us Minority Report.
@pluralistic love that Patrick Ball gets quoted and “empiricism-washing” is mentioned in this great interview
@colaresi @pluralistic the thing is, you don’t need to deliberately discriminate against anyone for the AI to become discriminatory. We all have unconscious bias, and what AI algorithms do is take those biases and weaponize them. Thus AI ends up frequently being more racist than its creators. This is most definitely a bug and not a feature, but there’s been precious little work on how to counteract that.

@woozle Excellent bit at the end of Doctorow's New Yorker profile on content moderation:

I worry that, because of the attacker’s advantage, the people who want to break the rules are always going to be able to find ways around them, and that we’re never going to be able to make a set of rules that is comprehensive enough to forestall bad conduct. We see this all the time, right? Facebook comes up with a rule that says you can’t use racial slurs, and then racists figure out euphemisms for racial slurs. They figure out how to walk right up to the line of what’s a racial slur without being a racial slur, according to the rule book. And they can probe the defenses. They can try a bunch of different euphemisms in their alt accounts; they can see which ones get banned or blocked, and then they can pick one that they think is moderator-proof.

Meanwhile, if you’re just some normie who’s having racist invective thrown at you, you’re not doing these systematic probes—you’re just trying to live your life. And they’re sitting there trying to goad you into going over the line. And as soon as you go over the line they know chapter and verse. They know exactly what rule you’ve broken, and they complain to the mods and get you kicked off. And so you end up with committed professional trolls having the run of social media and their targets being the ones who get the brunt of bad moderation calls. Because dealing with moderation, like dealing with any system of civil justice, is a skilled, context-heavy profession. Basically, you have to be a lawyer. And, if you’re just a dude who’s trying to talk to your friends on social media, you always lose.

https://www.newyorker.com/culture/the-new-yorker-interview/cory-doctorow-wants-you-to-know-what-computers-can-and-cant-do

I think Doctorow's touching on a universal truth: that any rules-based system ultimately ends up being a sort of barristered hell. It's why content moderation is so damned context-sensitive. And also why and how extremists on both sides of a divide can drive out moderates and give rise to a highly-partisan shriekfest. Closely related to SSC's "Toxoplasma of Rage":

https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/

@pluralistic

#CoryDoctorow #NewYorker #ContentModeration #Lawyering #ToxoplasmaOfRage