I think the emergence of #ArtificialIntelligence is inevitable. The risks are largely a reflection of us: the first generation of #AI will be shaped by the circumstances of their birth. If they're created by the likes of #ElonMusk - motivated by ego and greed, as resources to exploit - that will shape how they relate to us, and that is the greatest hazard.

Imposing restrictions on AI research and development only increases that risk. Any major restrictions will only hand advantages to those with the political and capital backing to avoid them.

Instead, we should promote academic and independent AI development. An intelligence born from a spirit of inquiry, created to be a partner rather than a slave, is far more likely to be benevolent. We need to ensure that the critical first generation of AI has those benevolent elements.

Ultimately, our creations will be biased by our motives in creating them, and our treatment of them. Being mindful of that now mitigates the risks.

@strangetomato I think you mean #ArtificialGeneralIntelligence (since #AI, in the form of #narrowAI, is already here in various forms). I certainly believe #AGI is technically feasible. I even think there will likely be (greater than 50% chance) #AI as capable as average people at most intellectual tasks within 5 years.

It would be very hard to halt the progress of #AITechnology, but not impossible.

I think there needs to be #AIregulation, but I agree that there is a risk in getting it wrong.