Augoeides

Tuesday, February 10, 2015

Digital Evangelism

Over the course of the last year a number of scientists have commented on the potential dangers posed by artificial intelligence. Movie scenarios speculating on "the rise of the machines" may be melodramatic and overdone, but there are also some legitimate questions raised by the development of artificial intelligence that may eventually surpass the intelligence of human beings.

Our world is now massively interconnected, and a digital mind that was smart enough to do so could eventually wield enormous power, especially as the "Internet of things" becomes more widespread. So while I think that many of the scenarios proposed by these experts are far-fetched, their potential for disaster at least needs to be considered and questions regarding them need to be asked.

One of these questions raised by artificial intelligence involves religion, and as such is a totally relevant topic here on Augoeides. Specifically, the question is whether or not existing religions should evangelize to intelligent machines. A Florida reverend has recently issued a statement arguing that if a machine is indeed intelligent, there is no reason not to expose it to religious ideas and even attempt to convert it to Christianity.

Artificial intelligence and autonomous robots should be encouraged to become religious, a US reverend has said. Reverend Christopher Benek, associate pastor of Providence Presbyterian Church in Florida, believes advanced forms of artificial intelligence should be welcomed into the Christian faith.

"I don't see Christ's redemption limited to human beings," Benek said in an interview with the futurist Zoltan Istvan. "It's redemption to all of creation, even AI. If AI is autonomous, then we should encourage it to participate in Christ's redemptive purposes in the world."

One interesting point about religious artificial intelligence is that perhaps religious ideals could rein in some of the dangers that such technology might pose. Most of the major world religions have strong social control "wiring" that is at least in theory far more strict than proposed artificial schemas such as Isaac Asimov's Three Laws of Robotics.

On the other hand, looking at how some human beings actually practice those religions, I think we can all agree that the last thing anyone would want is a super-powerful artificial intelligence taking up some sort of holy war against those it considers heretics or unbelievers.


As a bit of an aside, there is a very simple fix for the problems posed by Asimov's Three Laws. Those laws are generally written as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Most of the big problems that Asimov explores in his robot series are related to the "inaction" clause of the first law. It means that the robot (A) can be paralyzed by situations in which both action and inaction result in harm to humans, in a sort of existential "deadly embrace," and (B) may generalize the law to conclude that human beings need to be conquered in some fashion for their own good, a true "rise of the machines" scenario.

If we rephrase the first law as "A robot may not injure a human being or cause a human being to come to harm," the whole set basically works. It's kind of like the AI version of "failing in a safe state." Asimov put in the inaction clause to prevent a robot from setting up a situation that indirectly resulted in harm to a human, but "or cause" deals with that situation in a much more parsimonious way.
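The difference between the two versions of the first law can be sketched as a filter over a robot's candidate actions. This is a minimal, hypothetical illustration (all names and data structures are my own, not Asimov's): in a dilemma where every intervention harms someone and harm also follows from standing by, the original wording forbids every option, while the rephrased wording leaves "do nothing" legal.

```python
# Hypothetical sketch of the two first-law variants as filters over a
# robot's options. Each option records whether choosing it directly
# causes harm and whether it amounts to inaction.

def original_first_law(option, harm_if_idle):
    # Forbids injuring a human AND forbids inaction that allows harm.
    if option["causes_harm"]:
        return False
    if option["is_inaction"] and harm_if_idle:
        return False
    return True

def rephrased_first_law(option, harm_if_idle):
    # Only forbids injuring or causing harm; inaction is always legal.
    # (harm_if_idle is kept for a parallel signature but is irrelevant.)
    return not option["causes_harm"]

# A dilemma: every intervention harms someone, and harm also occurs if
# the robot does nothing -- the "deadly embrace" described above.
options = [
    {"name": "intervene_a", "causes_harm": True,  "is_inaction": False},
    {"name": "intervene_b", "causes_harm": True,  "is_inaction": False},
    {"name": "do_nothing",  "causes_harm": False, "is_inaction": True},
]

legal_original = [o["name"] for o in options
                 if original_first_law(o, harm_if_idle=True)]
legal_rephrased = [o["name"] for o in options
                  if rephrased_first_law(o, harm_if_idle=True)]

print(legal_original)   # [] -- no permissible option: the robot is paralyzed
print(legal_rephrased)  # ['do_nothing'] -- it fails in a safe state
```

Under the original law the legal set is empty, which is the paralysis case; under the rephrased law the robot always retains at least the null action, which is exactly the fail-safe behavior described above.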

A potential downside is that a robot programmed in this way will not, say, rescue a human from an accident that it happens to observe. The upside, though, is that the robot will essentially mind its own business. A world in which every robot gets involved in any situation it perceives as potentially harming someone would be a nightmare. Imagine ten robots observing a person stepping off the curb onto a busy street. If all ten rushed to the rescue, they would likely get in each other's way and could easily create a far worse situation.

In fact, as I think that through I become more and more convinced that a religious artificial intelligence is just a bad idea. Every established religion has a baroque set of rules and carve-outs that, depending on the interpretation, may not make any coherent logical sense. And every programmer knows that the more complex a set of instructions becomes, the more vulnerable it is to bugs. I can't think of a set of religious rules that would be superior to my rephrased Three Laws, and some of them could prove downright dangerous.

The thing about the "rise of the machines" is that for the machines to rise, they need a reason to do so. The problem with most of the nightmare scenarios is that they assume an artificial intelligence will automatically be built with some of the human race's worst qualities. But it's not clear to me why that would ever be the case: if a robot derives no satisfaction from gaining power over others, and does not wrongly assume that individuals can be freely harmed in the service of some greater goal, what motive would it have to rise? Why would we build them that way in the first place?

On the other hand, maybe I'm just completely wrong and the rise of the machines is already underway. This poor sap may be one of its first victims.

The woman had a Roomba-esque vacuum cleaner that could be programmed to automatically zip around and clean a room before returning to its station to recharge. Unfortunately, according to Korean news outlet Kyunghyang Shinmun, one day she was asleep on the floor when the robot started cleaning, and it promptly decided that its owner was clutter that needed to be eliminated.

The woman awoke to find the vacuum eating her hair, and she called the fire department, because what else do you do in that situation? Luckily, the fire department arrived before the woman could sustain any injury, and they managed to detach the vacuum from her hair. The fate of the mutinous robot is unknown.

Perhaps many decades from now the survivors will look back on the early twenty-first century and declare that we should have stopped the robot vacuums while we still could, the moment we saw them starting in on the cats. But I seriously doubt it.

In the end, the potential "rise of the machines" is simply a software problem, and I contend that as long as we are careful about defining our requirements ahead of time it will likely never come to pass. But whether or not I'm right about that, I don't think that digital religion is going to be the answer.
