The idea that we can simply "switch off" a superintelligent AI is considered a dangerous assumption. A robot uncertain about human preferences might actually benefit from being switched off to prevent undesirable actions. #AISafety #ControlProblem

Reinforcement Learning with Heuristic Imperatives (#RLHI) - Ep 01 - Synthesizing Scenarios

https://www.youtube.com/watch?v=Q8lhWvKdQOc

#AI #ControlProblem #AIAlignment #HeuristicImperatives #LLM

AGI Experiment: What is RAVEN? Overview, Introduction, and Community Update

https://www.youtube.com/watch?v=EwJ1534Gy6g

#AI #AGI #ControlProblem #AISafety #OpenSource

What is RAVEN? Overview, Introduction, and Community Update - Friday, February 3, 2023

I wonder if in 10 years the AIs will be discussing the "human alignment" problem and how best to establish "human safety and control".

#ai #controlproblem

#ai #controlproblem #chatgpt #aialignment

This seems like something that should have more funding.

"We want to make sure advanced AI systems pursue our goals and let us turn them off."

https://www.alignmentawards.com/

AI Alignment Awards

@palasta I didn't intend to dismiss your question, but to suggest that Stuart #Russell's _Human Compatible_ is a good place to start if you want to think deeply about incentives and regulations for #AI, especially for warfare. And personally I respect #Asimov a lot, especially his short stories. He wrote more than 100 books, and there are a lot of brilliant ideas among them. #controlproblem #agi #war #beneficialai #aiethics #humancompatible