
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk "Can we build AI without losing control over it?", Sam Harris explores the ethical frontier of artificial intelligence, warning that a superintelligent AI could surpass human control and stressing that alignment with human values is needed to avert an existential threat.
"The concern is that we will one day build a machine that is so much more competent than we are that there will be no reason to suppose it will remain docile in the face of conflicting goals."
Discuss: How can we ensure that AI development prioritizes ethical considerations to maintain human oversight?