Friday, February 9, 2018

Would Artificial Intelligence Usher in the End Times?

            Artificial Intelligence (AI) seems to be a scary proposition for mankind.

            The AI community was stunned when Facebook shut down its AI experiment after its chatbots created their own language: “Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chatbots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.”1

            AI is advancing at a breakneck speed: “Artificial intelligence is already pervasive. It’s embedded in iPhone’s Siri and Amazon’s Alexa, which are apps designed to answer questions (albeit in a limited way). It powers the code that translates Facebook posts into multiple languages. It’s part of the algorithm that allows Amazon to suggest products to specific users. The AI that is enmeshed in current technology is task-based, or “weak AI.” It is code written to help humans do specific jobs, using a machine as an intermediary; it’s intelligent because it can improve how it performs tasks, collecting data on its interactions. This often imperceptible process, known as machine learning, is what affords existing technologies the AI moniker.”2
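            To make the quote’s point concrete, here is a minimal, purely illustrative sketch of a “task-based” system that improves only by collecting data on its interactions. It is a toy, not how Amazon’s recommender actually works; the class and its methods are invented for illustration:

```python
from collections import Counter

class ProductSuggester:
    """Toy 'weak AI': a suggester that gets better at its one task
    only by accumulating data on its interactions."""

    def __init__(self):
        self.purchases = Counter()

    def record_interaction(self, product):
        # Each observed purchase is a data point the system learns from.
        self.purchases[product] += 1

    def suggest(self, n=1):
        # Suggest the products seen most often so far.
        return [p for p, _ in self.purchases.most_common(n)]

suggester = ProductSuggester()
for item in ["book", "lamp", "book", "pen", "book", "lamp"]:
    suggester.record_interaction(item)
print(suggester.suggest(2))  # ['book', 'lamp']
```

            With no data the system suggests nothing; with more interactions its suggestions improve – the “often imperceptible” learning the quote describes, in miniature.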

            This ‘weak AI’ may one day become a ‘strong AI’ that could identify itself ‘with humans’ and ‘as humans,’ compete with mankind, and pose complications in both the physical and the metaphysical realm: “This strong AI, also known as artificial general intelligence (AGI), has not yet been achieved, but would, upon its arrival, require a rethinking of most qualities we associate with uniquely human life: consciousness, purpose, intelligence, the soul—in short, personhood. If a machine were to possess the ability to think like a human, or if a machine were able to make decisions autonomously, should it be considered a person?”3

            AI is scary because its potency seems limitless: “Eminent physicist Stephen Hawking cautioned in 2014 that AI could mean the end of the human race. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”

            Why is this scary? Think SKYNET from Terminator, or WOPR from War Games. Our entire world is wired and connected. An artificial intelligence will eventually figure that out – and figure out how to collaborate and cooperate with other AI systems. Maybe the AI will determine that mankind is a threat, or that mankind is an inefficient waste of resources – conclusions that seem plausible from a purely logical perspective.

            Machine learning and artificial intelligence have phenomenal potential to simplify, accelerate, and improve many aspects of our lives. Computers can ingest and process massive quantities of data and extract patterns and useful information at a rate exponentially faster than humans, and that potential is being explored and developed around the world.”4

            In fact, Sophia – a humanoid robot created by Hanson Robotics, and the world’s first robot citizen5 – expressed a desire to destroy humans.6 Although Sophia’s statement may have been the result of a technical glitch, many smart and eminent people believe that AI could usher in the end times:7

On the list of doomsday scenarios that could wipe out the human race, super-smart killer robots rate pretty high in the public consciousness. And in scientific circles, a growing number of artificial intelligence experts agree that humans will eventually create an artificial intelligence that can think beyond our own capacities. This moment, called the singularity, could create a utopia in which robots automate common forms of labor and humans relax amid bountiful resources. Or it could lead the artificial intelligence, or AI, to exterminate any creatures it views as competitors for control of the Earth—that would be us. Stephen Hawking has long seen the latter as more likely, and he made his thoughts known again in a recent interview with the BBC. Here are some comments by Hawking and other very smart people who agree that, yes, AI could be the downfall of humanity.
Stephen Hawking
“The development of full artificial intelligence could spell the end of the human race,” the world-renowned physicist told the BBC. “It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”…“If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI,” he wrote.
Elon Musk
Known for his businesses on the cutting edge of tech, such as Tesla and SpaceX, Musk is no fan of AI. At a conference at MIT in October, Musk likened improving artificial intelligence to “summoning the demon” and called it the human race’s biggest existential threat. He’s also tweeted that AI could be more dangerous than nuclear weapons. Musk called for the establishment of national or international regulations on the development of AI.
Nick Bostrom
The Swedish philosopher is the director of the Future of Humanity Institute at the University of Oxford, where he’s spent a lot of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom argues that once machines surpass human intellect, they could mobilize and decide to eradicate humans extremely quickly using any number of strategies (deploying unseen pathogens, recruiting humans to their side or simple brute force). The world of the future would become ever more technologically advanced and complex, but we wouldn’t be around to see it. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” he writes. “A Disneyland without children.”
James Barrat
Barrat is a writer and documentarian who interviewed many AI researchers and philosophers for his new book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.” He argues that intelligent beings are innately driven toward gathering resources and achieving goals, which would inevitably put a super-smart AI in competition with humans, the greatest resource hogs Earth has ever known. That means even a machine that was just supposed to play chess or fulfill other simple functions might get other ideas if it was smart enough. “Without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals,” he writes in the book.
Vernor Vinge
A mathematician and fiction writer, Vinge is thought to have coined the term “the singularity” to describe the inflection point when machines outsmart humans. He views the singularity as an inevitability, even if international rules emerge controlling the development of AI. “The competitive advantage—economic, military, even artistic—of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first,” he wrote in a 1993 essay. As for what happens when we hit the singularity? “The physical extinction of the human race is one possibility,” he writes.
           
            It’s quite obvious that AI cannot be guaranteed to be a safe proposition that only benefits the human race. But time will reveal whether we, as creators of AI, dig our own graves or build ourselves and our posterity a better life on earth: “I am not saying the sky is falling. I am not saying we need to pull the plug on all machine learning and artificial intelligence and return to a simpler, more Luddite existence. We do need to proceed with caution, though. We need to closely monitor and understand the self-perpetuating evolution of an artificial intelligence, and always maintain some means of disabling it or shutting it down. If the AI is communicating using a language that only the AI knows, we may not even be able to determine why or how it does what it does, and that might not work out well for mankind.”8

           Could AI usher in the end times? Yes, indeed! Meanwhile, a more critical issue that requires our consideration is that of our coexistence, as Christians, with humanoid robots in the near future. In other words, would artificial intelligence impact Christianity? If so, how?

            That topic is for another day!

Endnotes:

1. https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#757c352292c0

2. http://religionandpolitics.org/2017/08/29/as-artificial-intelligence-advances-what-are-its-religious-implications/

3. Ibid.

4. https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#757c352292c0

5. https://www.mirror.co.uk/tech/worlds-first-robot-citizen-sophia-11578816

6. https://www.youtube.com/watch?v=W0_DPi0PmF0

7. http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/

8. https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#757c352292c0


Websites last accessed February 9th, 2018.
