An article in The Economist titled Rise of the Machines (May 9th, 2015) reported that “Nick Bostrom, a philosopher at the University of Oxford who helped develop the notion of ‘existential risks’ – those that threaten humanity in general – counts advanced artificial intelligence as one such, alongside giant asteroid strikes and all-out nuclear war.” In a speech at MIT in October 2014, Elon Musk described artificial intelligence (AI) as “summoning the demon.”

Should Artificial Intelligence Be Regulated?

Elon Musk, the billionaire founder of Tesla, SpaceX, and other high-tech companies, and a co-founder of PayPal, has begun to lobby for the proactive regulation of artificial intelligence because he believes it poses a “fundamental risk to the existence of civilization.”

According to an article in The Guardian (Olivia Solon, July 25th, 2017), “Musk, who has been issuing warnings like these for years now, is concerned that humans will become second-class citizens in a future dominated by artificial intelligence – or that we’ll face a Terminator-style robot uprising.”

A number of organizations are working on AI ‘agents’ that communicate with each other to complete tasks. That may not sound threatening by itself, but consider this: AI agents created at Facebook developed their own language, one that is difficult or even potentially impossible for humans to understand. The researchers there pulled the plug on the experiment amid fears they could lose control of the system.

More Products Are Using Artificial Intelligence

In another Guardian article by Samuel Gibbs (July 17th, 2017), Musk is quoted speaking at the US National Governors Association summer meeting in Providence, Rhode Island: “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.” He went on: “AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late.”

Consider some of the products AI is beginning to show up in: vehicles, medical diagnosis, home automation, and industrial controls. Clearly, if AI were to pose a threat, it would be in a position to take actions that could be life threatening. Can AI reach a level of intelligence where it competes effectively with humans? Back in 2011 – a long time ago given the rate of advances in AI – IBM’s Watson computer crushed two human champions on Jeopardy, so it is plausible.


Can Humans Compete with Artificial Intelligence?

So, how can we humans compete effectively with the AI of the future?

Speaking at the World Government Summit in Dubai, Elon Musk said humans must become cyborgs if they are to stay relevant in a future dominated by artificial intelligence. “There will be fewer and fewer jobs that a robot can’t do better,” he told the summit. The basic idea is that people will interact with computers directly through computer-brain interfaces, reports Olivia Solon in The Guardian (Feb 15th, 2017), which “cut out sluggish communication middlemen such as typing and talking in favour of direct, lag-free interactions between our brains and external devices.”

By creating “neuro-prosthetics” that allow us to communicate complex ideas telepathically, we could augment our capabilities. What kind of augmentation is possible?

There are cognitive enhancements, like the ability to do advanced math or access vast amounts of memory. There are physical enhancements, like night or multi-spectrum vision (think infrared or ultraviolet) or improved movement, letting us move faster or lift greater amounts of weight.

The movie “Elysium” demonstrates one potential vision of a human cyborg: Matt Damon’s character is fitted with a suit, controlled via a computer-brain interface, that gives him superhuman capabilities.

What To Do About Artificial Intelligence

What can you do to avert an apocalyptic future like those portrayed in the movies “The Terminator” or “The Matrix”?

You can help shape the future and reduce the risk posed by AI by joining Elon Musk in lobbying our elected representatives to regulate it. Without legal boundaries, it is conceivable that AI could indeed grow to pose an existential threat.