{"id":1576,"date":"2024-08-24T12:36:52","date_gmt":"2024-08-24T12:36:52","guid":{"rendered":"https:\/\/ideaplus.ai\/?p=1576"},"modified":"2024-09-06T07:38:37","modified_gmt":"2024-09-06T07:38:37","slug":"never-summon-a-power-you-cant-control-yuval-noah-harari-on-how-ai-could-threaten-democracy-and-divide-the-world","status":"publish","type":"post","link":"https:\/\/ideaplus.ai\/index.php\/2024\/08\/24\/never-summon-a-power-you-cant-control-yuval-noah-harari-on-how-ai-could-threaten-democracy-and-divide-the-world\/","title":{"rendered":"\u2018Never summon a power you can\u2019t control\u2019 &#8211; Yuval Noah Harari on how AI could threaten democracy and divide the world"},"content":{"rendered":"<p>This is a <a href=\"https:\/\/www.theguardian.com\/technology\/article\/2024\/aug\/24\/yuval-noah-harari-ai-book-extract-nexus\" target=\"_blank\" rel=\"noopener\">fascinating, long, in-depth article<\/a> which is worth making into a PDF (and\/or buying the book) and reading several times to absorb the many interesting points made by Mr Harari.<\/p>\n<p>Here are some excerpts that might not leave you alone:<\/p>\n<p>AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs. AI isn\u2019t a tool \u2013 it\u2019s an agent. 
The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don\u2019t fully understand or control.<\/p>\n<p>AI and automation&#8230; pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more. Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin. According to the global accounting firm PricewaterhouseCoopers, AI is expected to add $15.7tn (\u00a312.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America \u2013 the two leading AI superpowers \u2013 will together take home 70% of that money.<\/p>\n<p>Mustafa Suleyman is a world expert on the subject of AI. 
He is the co-founder and former head of DeepMind, one of the world\u2019s most important AI enterprises, responsible for developing the AlphaGo program, among other achievements. AlphaGo was designed to play Go, a strategy board game in which two players try to defeat each other by surrounding and capturing territory. Invented in ancient China, the game is far more complex than chess. Consequently, even after computers defeated human world chess champions, experts still believed that computers would never better humanity in Go.<\/p>\n<p>That\u2019s why both Go professionals and computer experts were stunned in March 2016 when AlphaGo defeated the South Korean Go champion Lee Sedol. In his 2023 book The Coming Wave, Suleyman describes one of the most important moments in their match \u2013 a moment that redefined AI and is recognised in many academic and governmental circles as a crucial turning point in history. It happened during the second game in the match, on 10 March 2016.<\/p>\n<p>\u201cThen \u2026 came move number 37,\u201d writes Suleyman. \u201cIt made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a \u2018very strange move\u2019 and thought it was \u2018a mistake\u2019. It was so unusual that Sedol took 15 minutes to respond and even got up from the board to take a walk. As we watched from our control room, the tension was unreal. Yet as the endgame approached, that \u2018mistaken\u2019 move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. 
Our AI had uncovered ideas that hadn\u2019t occurred to the most brilliant players in thousands of years.\u201d<\/p>\n<p><b>Feel free to comment!<\/b><\/p>\n<p><b>&lt;<a href=\"https:\/\/www.theguardian.com\/technology\/article\/2024\/aug\/24\/yuval-noah-harari-ai-book-extract-nexus\" target=\"_blank\" rel=\"noopener\">The Guardian, 24th August 2024<\/a><\/b><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This is a fascinating, long, in-depth article which is worth making into a PDF (and\/or buying the book) and reading several times to absorb the many interesting points made by Mr Harari. Here are some excerpts that might not leave you alone: AI is an unprecedented threat to humanity because it is the first technology [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[73],"tags":[61,62],"class_list":["post-1576","post","type-post","status-publish","format-standard","hentry","category-threat","tag-dangerous","tag-writing"],"_links":{"self":[{"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/posts\/1576","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/comments?post=1576"}],"version-history":[{"count":6,"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/posts\/1576\/revisions"}],"predecessor-version":[{"id":1618,"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/posts\/1576\/revisions\/1618"}],"wp:attachment":[{"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/media?parent=1576"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https
:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/categories?post=1576"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ideaplus.ai\/index.php\/wp-json\/wp\/v2\/tags?post=1576"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}