“Without a principled and experimentally verified understanding of consciousness, we’ll be unable to say for sure when a machine has—or doesn’t have—it. In this foggy situation, artificial consciousness may even arise accidentally.” @anilkseth in #NautilusMagazine https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
@gmusser @anilkseth
Dr. Seth’s points are interesting and valid, but the article leaves out what may be a more important element. A person can be highly intelligent and conscious and still be very dangerous to other people. That situation might be framed as an ethical one, but I think it’s more a matter of basic motivation and purpose. Depending on its owner’s motives, an AI can be very dangerous even if it is not conscious, while an AI whose purpose is to help people without doing harm could be conscious without being dangerous. Asimov taught us this basic principle 73 years ago - he published 'I, Robot' the same year Turing published his 'Imitation Game' paper.