"How any AI system behaves depends on how it is programmed and prompted and what data it possesses. Anthropic set Claude a goal, then blocked all ethical ways of achieving it, leaving “blackmail” the only option. As its own report acknowledged, “We deliberately… presented models with no other way to achieve their goals”. Researchers determined a particular outcome from the start and then acted surprised when the machine “chose” that outcome.
A critical paper from the UK AI Security Institute compared such research to early investigations into the linguistic capacities of chimpanzees, with researchers in both cases imputing “beliefs and desires to non-human agents… when they act in ways that superficially resemble people”. It chided Anthropic's researchers for having “conveniently encouraged the model to produce the unethical behaviour”.
The idea of AI as an existential menace to humanity is not simply overblown; it also hides the real threat AI poses, not in the future but in the present, a threat that results from the actions not of machines but of humans."
https://observer.co.uk/news/opinion-and-ideas/article/ai-may-well-pose-a-threat-to-jobs-but-its-the-tech-dystopia-thats-the-real-worry
#AI #GenerativeAI #MassUnemployment #BigTech #TechDystopia