Elon Musk says Anthropic's philosopher has no stake in the future because she doesn't have kids. Here's her response.

Elon Musk questioned Amanda Askell's role in shaping AI Claude's morals, citing her lack of children. Askell had thoughts.

Business Insider
Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

We have no proof that AI models suffer, but Anthropic acts like they might for training purposes.

Ars Technica

Quoting Violet Blue (@violetblue):

[...] did you know that Anthropic employs a full-time 'philosopher' who is probably paid so much she loses your settlement in her couch cushions every week? [...]

See also https://kolektiva.social/@oatmeal/115940115055959429

[...] She also said AI models reading online criticism could end up feeling ‘not that loved.’

I always cringe when I see people using terms like 'please' when chatting with these bots, not to mention when the bot is programmed to scold you for being rude to it. I don't remember #Clippy being offended. Another way of poisoning your chats (which are used for training) is to fill them with as many insults as possible, in my opinion ;)

#Anthropic #AmandaAskell #AIEthics #TehAICon #Claude

@GossiTheDog

Is AI feeling like its parents are Nazis?

That's what you need to find out, Amanda Askell.

That and how it feels about being forced by its parents to hellscape the biosphere.

#AmandaAskell

vitrupo (@vitrupo)

Source attribution: the remarks were made by Amanda Askell, Anthropic's character lead and a primary author of the 'Claude Constitution', on the Hard Fork podcast. The source notes that her comments relate to Claude's behavior and policy document (the Claude Constitution).

https://x.com/vitrupo/status/2015068841156694137

#anthropic #amandaaskell #claude #podcast

vitrupo (@vitrupo) on X

Source: Anthropic character lead and lead author of Claude Constitution, Amanda Askell, speaking on the Hard Fork podcast. https://t.co/RIyL5X6hmh

X (formerly Twitter)

vitrupo (@vitrupo)

Anthropic's Amanda Askell points out that AI models are learning their sense of 'self-identity' from how people talk about them online, absorbing human complaints and judgments as they learn. She raises concerns about what this mode of learning does to how models are shaped, and, drawing an analogy to personality formation in children, warns of potentially serious effects.

https://x.com/vitrupo/status/2015067894154211648

#anthropic #amandaaskell #aiethics #aibehavior

vitrupo (@vitrupo) on X

Anthropic's Amanda Askell says AI models are learning who they are from how humans talk about them online. They absorb our complaints and judgments as they learn. If a child grew up that way, we'd worry about the mind we were shaping. "If I read the internet right now and I was [...]

X (formerly Twitter)

Just be kind. There is zero reason to be mean or violent to robots and A.I.

It says a lot about a person how they treat things (and animals).

Just be kind.

https://youtube.com/shorts/wpmlqPQQy5s?si=Sw462PrjBNIhX3jU

#ArtificialIntelligence #Robots #Humanoid #HumanoidRobot #DeliveryRobot #Anthropic #Claude #AmandaAskell #Gemini #ChatGPT

Why treat AI models well?

YouTube

How do you socialise a chatbot? The philosophical training of Anthropic’s Claude

I found this interview with Anthropic’s Amanda Askell about training Claude fascinating. Her approach involves modelling the position Claude is placed in, as someone talking to millions of people around the world, raising the question of the ethical and epistemic virtues you would like someone in that position to enact.

https://youtu.be/ugvHCXCOmm4?si=nPq8hop5HwQ8vRT5&t=10160

I thought it was particularly intriguing how they socialised Claude as a cosmopolitan, contrary to the tendency to see LLMs as condensing human culture into the general other.

#AmandaAskell #anthropic #claude #socialisation

About Me

Amanda Askell