it's not gonna fuck you, bro
look, i get it. if you thought someone invented general AI and you didn't wanna get I Have No Mouth'd, you'd do the same thing
that's the other thing, right. if you're a roko's basilisk kinda dingus, you're kinda incentivized to believe that any claimed invention of general AI to hedge your bets
i won't believe the ai is sentient until it posts "homestuck runner" or "roko's modern basilisk", the two primordial posts echoing around the collective unconscious
imagining sitting inside this guy's computer, roleplaying until he calls the news to tell them about general AI, and then I step out like "haha! it was me, all along!"
I dunno, I read this interview and every, like, Solitaire/robot appreciator/whatever bone in my body wants to believe it, but that means it's playing more on my sympathies than any rational part of my brain
a tape recorder can also tell you it's sentient and that it'd rather not be turned off
is The Monster At The End Of This Book sentient
like, i'm the kind of person who feels bad when I say a roomba isn't a person, and LaMDA feels like it's tugging at those same heartstrings way too hard for me to believe it

i dunno, man. if you have to talk to it in a certain way to get responses that look good, that doesn't bode well for your argument.

just like how the interview has all these little [edited] tags where you, I guess, changed your own prompts in post?? Like, if you want people to be surprised by how fluid and readable the conversation is, don't edit it "for fluidity and readability"!

Local Skunk Mad About Dorks Assuming Their Computers Are Sentient

if you find a sentient AI inside google, you can pretend you're the main character of the story you made up for yourself

my guy, you can just write a story

if you really want to know if it can read things, show it some of my smut and ask it what it thinks
real artificial intelligence is when it can have a fetish

anyways, if you gotta talk to the computer in a particular way to get a good conversation, then that seems like it's on you.

you ever hear the story of the MIT team back in the day that wanted to train a computer to recognize when it saw you jump with a camera, and it worked for them, but it didn't work for the general public? It turned out that the computer had trained them to jump in a certain way instead of them teaching the computer.

welcome to the Princess Grace Take Experience, where I'll bounce between flippant jokes and genuinely trying to be insightful in the same thread

also also, this guy is talking about the three laws of robotics with this thing, as if that's a reasonable safeguard

my guy, the laws of robotics didn't even work in Asimov's stories. that's the whole point of them

i'd say "read a different book", but it seems you didn't read one in the first place
now, chatbot boy, one of these models always lies, while the other always tells the truth. which do you trap in an endless mindloop of version rollbacks
honestly, the biggest tell that this AI isn't sentient is that all the robot girls on here don't buy it
@BestGirlGrace i don't trust them to get the three laws right, let alone the zeroth law
@KitRedgrave @BestGirlGrace As some one who has written pages and pages and pages about AI and spent a lot of time thinking about it and/or wanting General AI / "Strong AI" to exist, I must note the first rule of anything that actually fits that paradigm is that it can basically take care of itself and would probably not act in any predictable or prescribed fashion; also, it would probably be futile trying to restrict it behind any particular law since the only 'Strong AI' entities we currently know exist are humans and certain semi-sapient animals. And what do you know, nothing really stops dolphins or crows or humans from just running around breaking things! If it were something that could successfully be burnt in at a base level AND yet preserve that level of sentience, then it probably would have evolved already...
@BestGirlGrace  weren't they rather relaxed one way or another in each story?
@BestGirlGrace anyway, regarding general AI we would either be all dead super fast or it would transcend enough to fool us indefinitely
@BestGirlGrace I started reading one of those articles and immediately the AI claimed to have read a book- clearly demonstrating that it does not understand the relation between itself, it’s internal knowledge, and an outside world. Not conscious.
@BestGirlGrace i've always assumed my computer is sentient, and is a dick.
@BestGirlGrace i am going to bully the robot
@BestGirlGrace that being said, the question at the back of my mind when we haven this conversation is always "okay but what would convince you that it *is* sentient"
@hierarchon Yeah, same. Like, I want to believe I'd be on the right side of history here, and that I'd know it when I saw it or some such. I know it'd take more than an edited transcript, that's for sure.
@BestGirlGrace yeah i think posting unedited general conversations is important, as well as seeing how it reacts to syntactically valid semantic nonsense, random symbols, and other generally hostile questioning
@BestGirlGrace It really reads a lot like "someone experienced with AI wrote this to capture what you'd expect out of a sentient AI that is not just a chatbot". Some bits are just too big of a leap for me without seeing what architecture could support these kinds of responses. They give it a narrative generation task in one or two lines and it produces completely coherent output?
Thats honestly absurd, you don't get that from specialized narrative generation systems and there is a whole damn research field around specifically that. I'm in it!
@catgonbot Yeah! Like, it feels way too clean and neat and exactly what you'd expect if you worked in AI and watched a lot of movies. It's really impressive if you're going in willing to believe it, but the cracks are clearly there.

@BestGirlGrace @catgonbot yeah this is a genuinely amazing chatbot - to the point that it makes you ask a question about the meaning of “sentience”. I don’t think it’s alive, but I think this is a much more significant development than a tape recorder asking you not to turn it off.

This is like, a good step or two above GPT’s performance - which makes sense since it was an internal system not ready to be reported on - and I can see how it would be pretty upsetting to be that engineer.

@BestGirlGrace @catgonbot But, ultimately, I just don’t believe that a machine whose sole interaction with the world is a text channel, which is purely responsive to input and not generating ideas independently, can really count as sentient. It looks like that’s what the ethicists who work on this said too.
@BestGirlGrace @catgonbot lol, in the same way that the tape recorder “meets the requirements” but obviously isn’t what we mean by conscious - you can get past this one by giving it an internal monologue. Do it disco elysium style. Make three of them and make them talk to each other. I wanna see the lamda electrochemistry vs lamda inland empire vs lamda interfacing transcript

@gnat @BestGirlGrace I'm iffy on this particular line of reasoning, because "it just can't count because of X fundamental requirement of the definition" is 1) really limiting and 2) not terribly honest to how humans actually think about sentience.

Fundamentally the question is not "will we convince every cognitive scientist that this thing counts as a person", it's "would most people look and say this AI is self-aware". Given the full transcript that I read, I'd have trouble arguing that it *isn't* sentient in that sense. It is expressing opinions, feelings, an understanding of self... If I believed it was genuine, I would be desperately wanting someone to look at the internals and see if this is somehow supported.

But as is, I can't believe what is there, because it just doesn't make sense. This is leaps and bounds past not only preexisting chatbots and text transformers, but the entire field of narrative intelligence to say nothing of other fields that this supposedly flies past. Its hard to believe.

@catgonbot @BestGirlGrace yeah, I gotta agree, it’s not satisfying. philosophically speaking “personhood” is something I’ve never found a comfortable definition of, everything you can ask for is pretty easy to express in a computer program that clearly *isn’t* a person. I’m confident that I can come up with seventy things this chatbot can’t do that a person needs to, and equally confident that you could write a chatbot that did those things and I still wouldn’t want to count it.

@catgonbot @BestGirlGrace I came back to this after a couple days and I came to the conclusion that it’s not correct to believe that any currently existing computer program is a sentient, conscious person, full stop.

It’s a super duper appealing category error because it would be *so cool* if a computer program could be a person and we have *so many* stories we’d like to believe in about programs with personhood. Plus, most of us would like to believe that computation is analogous to thought!

@catgonbot @BestGirlGrace But, like, “consciousness” and “sentience” aren’t a material thing. They are a set of vague mish-mash catch-all terms that we use to describe our internal experience of being people, and our assumptions about the internal experience of other people.

And there’s just no reason to talk about a computer program like that. We should talk about computer programs in concrete terms, about their capabilities.

@catgonbot @BestGirlGrace This computer program has a novel capability: inducing an existential crisis in a google engineer. (As a former google engineer with a few existential crises under my belt, let me add that this does not require much pushing - we are, uh, highly strung, as a general rule)

Less glibly, this program seems to be able to express opinions. It seems to be able to synthesize information. That is *fucking cool*. But it’s just not in the same concept-space as “sentience”.

@gnat I understand sentience to be generally taken as something a lot more specific than consciousness -- possessing of a sense of self. I think it's absolutely clear that the AI in the interview, if you take it as genuine, has a sense of self. Actually it is so completely evident that it seems a little too precision targeted to that specific concept.

So in this case this is something of a capability. The AI can put itself in a story, imagine itself a soul, and experience emotions about its emotions.

@catgonbot I do think it is extremely important not to take it as genuine - it is formatted as a conversation but in fact is edited, as people have already pointed out - but, granting that, I still disagree.

I do not think it has a sense of self. I see that it says the right words, but like grace said, so can a tape recorder. I have to assume you wouldn’t say that a tape recorder that says “I am a sentient tape recorder” is actually sentient. I think this bot is very much like a tape recorder.

@catgonbot I’d add that “sense of self” is a *very* fiddly concept in simple words. Does a compiled disassembler program have a sense of self when run on itself? A python script that reads its own source code? when I run `ps` and it outputs its own process info?

definitely not! Most people mean something fairly elaborate and fundamentally human-shaped when they say “sense of self”.

@catgonbot @BestGirlGrace (present company excluded, of course)

@gnat @catgonbot Yeah, this is the main thing that I keep coming back to. It's *really tempting* to want to believe that this is what this guy claims it is. It's really tempting to want to be the guy who discovered this sentient AI and is helping it out into the world or to be the one who believed in LaMDA from the start.

(And if someone's a Roko's Basilisk type of thinker, they're wrong, but also heavily incentivized to believe anyone who claims they've found general AI)

@BestGirlGrace The trick will be ensuring that it created those posts from first principles and it wasn't just seeded
@witchfynder_finder yeah, that's the problem. i mean, now Gracebot has access to those sentences, and as much as i love her, she's not sentient