
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk "Can we build AI without losing control over it?", Sam Harris examines the ethical challenges of developing superintelligent AI, emphasizing the risk of losing control and the need to align such systems with human values. The talk speaks directly to the trending topic of The Ethical Frontiers of Artificial Intelligence, highlighting concerns about safety, unintended consequences, and moral implications as AI capabilities advance rapidly.
"We have to admit that we're in the process of building some kind of god. Now would be a good time to make sure it's a god we can live with."
Discuss: What steps should society take to ensure AI remains under human control while exploring its ethical boundaries?