If AI Goes Bad

Threats and mitigations for human survival

Max Envoy
3 min read · Apr 9, 2020

What controls can humans apply to mitigate the threat of a malignant super-intelligence?

A convergence of factors is driving the rapid development of artificial intelligence; the benefits are tremendous, but downsides are inevitable. Are we humans best placed to strike the balance needed?

Anything intelligent or programmed could have a defensive reflex, just as humans do, and a super-intelligence is no different. As long as humans retain the power to simply “turn off” an AI, a super-intelligent AI may perceive that power as a threat to its existence, or its human masters may perceive the AI itself as the threat. But at the point at which AI reaches super-intelligence, would humans really be the masters anymore?

Humans are always looking to control and curtail power, to prevent threats from growing beyond our ability to defend against them. Would we, or could we, build AI in our image?

Possible mitigations

The table below describes the human controls at our disposal and the possible countermeasures a super-intelligence could implement to survive or break out of those controls. In addition, I categorise the human controls as either Tactical (e.g. at a micro level, with limited control) or Strategic (e.g. at a macro level with…



Written by Max Envoy

Professional advisor to gov and industry, creator, amateur father. I write original content about business, strategy, innovation, technology and success making.
