I go on about AI a lot. Specifically, I go on about superintelligent AI. I believe we’re likely to create a form of advanced general intelligence at some point in the future that’ll have the capability to end life as we know it, and the inclination to do so unless we’re both exceedingly diligent and exceedingly lucky.
It’s an embarrassing belief to hold, because it sounds alarmist and absurd. It also threatens to overshadow valid short-term concerns about sophisticated narrow AI systems, such as those triggering an automated arms race or undermining the reliability of video evidence.
Nevertheless, I can’t resist taking a poke at this article. Several people have presented it in the comments as a damning counterargument to the AI safety concerns I’ve raised, despite most of it being absolute rubbish!