Remember when theoretical physicist Stephen Hawking teamed up with other illustrious minds to sign an open letter warning about the potential dangers of Artificial Intelligence? No? Well, he did, and he also agreed to do an AMA (Ask Me Anything) where everyone could pitch in with their own questions and opinions about the future – not necessarily about AI in particular – and now the answers are in.
Hawking is still bent on making his own artificial voice heard about the perils of AI, so he focused on answering the questions that touched on this topic in particular. Some highlights:
- If AI becomes better than humans at AI design, expect machines whose intelligence surpasses ours by even more than ours surpasses that of a snail
- AI is not evil in the way the media often portrays it. Rather, it may end up doing things that aren’t aligned with our wants and needs, simply because it considers its own goals more important
- There is no reliable way to estimate when full-blown AI will arrive, so it’s important to ensure early on that it will be a beneficial intelligence. In typical Hawking style, he said, “It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.”
- Whether or not it is explicitly designed in, an AI is likely to eventually develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, which could pose problems for humans
- Since technology has so far driven ever-increasing inequality, it’s reasonable to expect that the machine owners of the future will lobby against wealth redistribution
- To Hawking, women are the most intriguing mystery and one that should remain just that
You can read the full AMA over on reddit.