
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
Sam Harris's TED talk "Can we build AI without losing control over it?" examines the risk that superintelligent AI will outpace human oversight. It speaks directly to the theme of The Ethical Frontiers of Artificial Intelligence by stressing that AI must be aligned with human values if we are to avert existential threats.
"We are building machines that will be smarter than we are, and once we build machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an 'intelligence explosion.'"
Discuss: What safeguards should be implemented to ensure AI remains under human control as it advances?