
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
Sam Harris's TED talk examines the risk of superintelligent AI outpacing human control. He highlights ethical dilemmas such as unintended consequences and value misalignment, and urges proactive safeguards against existential threats, tying directly into the trending conversation about AI's ethical implications.
"The development of full artificial intelligence could spell the end of the human race." (Stephen Hawking)
Discuss: What steps should society take to align AI goals with human values and prevent loss of control?