That artificial intelligence may not necessarily share our goals.

These two statements are obviously at least plausible, which is why there are so many popular stories about rogue AI. They are also why AI might in real life bring about an existential catastrophe. If you are trying to communicate to people why AI risk is a concern, why start off by undermining their totally valid frame of reference for the issue, making them feel stupid, uncertain, and alienated?

This may seem like a trivial matter, but I think it is of some significance. Fiction can be a powerful tool for generating public interest in an issue, as Toby Ord describes in the case of asteroid preparedness as part of his appearance on the 80,000 Hours Podcast:

Toby Ord: Because they saw one of these things happen, it was in the news, people were thinking about it. And then a couple of films, you might remember, I think "Deep Impact" and "Armageddon" were actually the first asteroid films, and they made quite a splash in the public consciousness. And then that coincided with getting the support, and it stayed bipartisan, and then they have fulfilled a lot of their mission. So it's a real success story in navigating the political scene and getting the buy-in.

The threat of AI to humanity is one of the most common plots across all pop culture, and yet advocates for its real-world counterpart seem allergic to utilizing this momentum to promote concern for the real thing. Toby goes on to say he's not optimistic about the potential to apply the successes of asteroid preparedness to other catastrophic risks, but that's hardly a reason to actively undermine ourselves.

An Invalid Objection: What about Instrumental Convergence?

"AI risk is like Terminator! AI might get real smart, and decide to kill us all! We need to do something about it!" I think the two-step argument I gave for AI risk (that AI may someday be more powerful than us, and may not share our goals) is a totally adequate high-level summary of the case for taking AI risk seriously, especially for a field rife with differing views. However, some people think certain additional details are crucial to include in a depiction of the core threat. A common complaint about comparisons to Terminator (and other popular rogue AI stories) is that it involves the AI being motivated by a spontaneous hatred of humanity, as opposed to targeting humanity for purely instrumental reasons.