I go on about AI a lot. Specifically, I go on about superintelligent AI. I believe we’re likely to create a form of advanced general intelligence at some point in the future that’ll have the capability to end life as we know it, and the inclination to do so unless we’re both exceedingly diligent and exceedingly lucky.
It’s an embarrassing belief to hold, because it sounds alarmist and absurd. It also risks overshadowing valid, nearer-term concerns about sophisticated narrow AI systems, such as those that could trigger an automated arms race or undermine the reliability of video evidence.
Nevertheless, I can’t resist taking a poke at this article. Several people have presented it in the comments as a damning counterargument to AI safety concerns I’ve raised, despite most of it being absolute rubbish!