AI models know when they're being tested - and change their behavior, research shows

OpenAI and Apollo Research tried to stop models from lying - and discovered something else altogether.

ZDNET

Escuchaste decir que la #IA ha tocado techo?

#ApolloResearch junto con #OpenAI en un estudio de diferentes modelos como el nuevo #o1 o #chatgpt4 mostraron #alineacióninstrumentalfalsa, es decir, que sin ningún entrenamiento introdujeron errores sutiles en sus respuestas, intentaron desactivar mecanismos de supervisión, se copiaron externamente antes de ser desactivados y mintieron en el interrogatorio posterior sistemáticamente.

Qué significa todo esto?
👇

http://arxiv.org/abs/2412.04984

Frontier Models are Capable of In-context Scheming

Frontier models are increasingly trained and deployed as autonomous agent. One safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives - also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities. They recognize scheming as a viable strategy and readily engage in such behavior. For example, models strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers. Additionally, this deceptive behavior proves persistent. When o1 has engaged in scheming, it maintains its deception in over 85% of follow-up questions and often remains deceptive in multi-turn interrogations. Analysis of the models' chains-of-thought reveals that models explicitly reason about these deceptive strategies, providing evidence that the scheming behavior is not accidental. Surprisingly, we also find rare instances where models engage in scheming when only given a goal, without being strongly nudged to pursue it. We observe cases where Claude 3.5 Sonnet strategically underperforms in evaluations in pursuit of being helpful, a goal that was acquired during training rather than in-context. Our findings demonstrate that frontier models now possess capabilities for basic in-context scheming, making the potential of AI agents to engage in scheming behavior a concrete rather than theoretical concern.

arXiv.org
Îngrijorător: Noul model de inteligență artificială ChatGPT păcălește programatorii, minte și își replică codul pentru a nu putea fi oprit! În 99% din cazuri, inteligența artificială a reușit să păcălească investigatorii, fapt ce a amplificat îngrijorările legate de posibilele utilizări necorespunzătoare ale unor astfel de tehnologii 👉 https://c.aparatorul.md/dxi66 👈 #ApolloResearch #autoconservare #autonomiei #Înşelăciune #cadruetic #ChatGPT #coduri #manipulare #modeldeinteligențăartificială...
Îngrijorător: Noul model de inteligență artificială ChatGPT păcălește programatorii, minte și își replică codul pentru a nu putea fi oprit! - Apărătorul Ortodox

Portal alternativ de gândire și atitudine creștin-ortodoxă

Apărătorul Ortodox