How do y'all feel about the fact that this is what current AI discourse, policy money and research is most influenced by?
I'll start. Every day I wonder whether everyone else is living in the same planet as me or if I'm in the twilight zone.
@Pkcordeiro yup. And now we're in for another hype cycle, but this time it's meant to dislodge specific people from specific jobs.
Who knew the most used application for the technology would be as clickbait by just using the letters "A" and "I" in headlines.
BREAKING: AI Almost Seems Like It's Really Talking
<click>$<click>$<click>$<click>$<click>$<click>$<click>$<click>$<click>$<click>$<click>$<click>
@timnitGebru I wonder how, after the initial excitement, investment, and underdelivery of self-driving AI vehicles, businesses can be going full bore on a “this time it’s different” promise, applying it in even more varied situations.
Part of me thinks it’s a cynical ploy to cut salaries and benefits regardless of whether the current crop of AI ever delivers on its promise.
@timnitGebru Every day, I wish everyone would ask YOU about it all. And your team.
And if you're not available, I'd do my best to explain what 'Natural Selection' actually is, from the perspective of a Biologist.
@otherdog @timnitGebru
As a software developer, I would actually find it very noteworthy if someone had written a nontrivial program that works (i.e., fulfills the requirements and has no bugs). To my knowledge, this has not been achieved yet.
The author of this nonscientific paper seems to be an actual scientist and director of something. How he managed to become a director with papers only going back 7 years, however, is a mystery to me.
@timnitGebru sounds like another tech bro who read Dune & still thinks Darwin was about the strongest surviving.
Bet the idea of diversity gets called rude names by him & his fellow travelers.
@SpeakerToManagers @Sminney @timnitGebru
And also leaving aside ENTIRELY the fact that AI is currently completely dependent on human-maintained infrastructure to, like, you know, even exist.
"Yes, lets outcompete those puny mortals that maintain the power gr—"
@SpeakerToManagers @Sminney @timnitGebru
It also occurs to me that the tech bros in question seem to be a little foggy on the concept of "supply chain."
@timnitGebru tired. Very, very tired.
I mean, it's a medical condition, I have ME/CFS, but is it really any wonder that I am that tired?
@Paxxi @timnitGebru
In principle, non-biological von Neumann machines may be subject to natural selection as well, if some additional requirements are met (at least mutation and selection pressure, though more may be required).
But AIs are not von Neumann machines, and the additional requirements do not seem to be met.
For billions of years, evolution has been the driving force behind the development of life, including humans. Evolution endowed humans with high intelligence, which allowed us to become one of the most successful species on the planet. Today, humans aim to create artificial intelligence systems that surpass even our own intelligence. As artificial intelligences (AIs) evolve and eventually surpass us in all domains, how might evolution shape our relations with AIs? By analyzing the environment that is shaping the evolution of AIs, we argue that the most successful AI agents will likely have undesirable traits. Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future. More abstractly, we argue that natural selection operates on systems that compete and vary, and that selfish species typically have an advantage over species that are altruistic to other species. This Darwinian logic could also apply to artificial agents, as agents may eventually be better able to persist into the future if they behave selfishly and pursue their own interests with little regard for humans, which could pose catastrophic risks. To counteract these risks and evolutionary forces, we consider interventions such as carefully designing AI agents' intrinsic motivations, introducing constraints on their actions, and institutions that encourage cooperation. These steps, or others that resolve the problems we pose, will be necessary in order to ensure the development of artificial intelligence is a positive one.
@timnitGebru If you want to sell something, you have to identify an insecurity and convince the person with the insecurity that you can fix it.
Sometimes this is easy; "I have no $MATERIAL-NEED" is easy to sell into.
If you're grifting—selling nothing at high prices—during the end of the world, you have to come up with a worse existential threat than the material one we've really got.
Which is why there's a coordinated hype machine encouraging panic over stuff that doesn't exist.
@timnitGebru i'm reading the paper now (oh savory baby jesus, it's not just a dumb *tweet*, but instead an entire dumb *paper*, wtf), and he's got a lot of statements of the form:
"if [fantasy scifi thing happens], then [terrible consequence]"
and i keep thinking, "well, there's lots of reasons why the fantasy thing won't happen, but i guess i'll keep reading this outline for your novel"
