The Fourth Law Of Robotics
Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that - pretensions and layers of philosophizing aside - we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.
The series of James Bond movies constitutes a decades-spanning gallery of human paranoia. Villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond always finds himself confronted with hideous, vicious, malicious machines and automata.
It was precisely to counter this wave of unease, even terror - irrational but all-pervasive - that Isaac Asimov, the late science fiction writer (and scientist), invented the Three Laws of Robotics:
#A robot may not injure a human being or, through inaction, allow a human being to come to harm.
#A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Many have noticed the lack of consistency and the virtual inapplicability of these laws when put together. First, they are not derived from any coherent worldview or background. To be properly implemented - and to avoid interpreting them in a potentially dangerous manner - the robots in which they are embedded must be equipped with a reasonably full model of the physical and human spheres of existence.
Devoid of such context, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov's robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots are. Gödel pointed at one such self-destructive paradox in the "Principia Mathematica", ostensibly a comprehensive and self-consistent logical system. It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.
Some will argue against this and say that robots need not be automata in the classical, Church-Turing sense - that they could act according to heuristic, probabilistic rules of decision making. There are many other types of functions (non-recursive) that can be incorporated in a robot, they will say. True, but then how can one guarantee the robot's fully predictable behaviour? How can one be certain that the robot will fully and always implement the three laws? Only recursive systems are predictable in principle (though, at times, their complexity makes prediction practically impossible).
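To make the point concrete, here is a minimal sketch (in Python; the policy and its numbers are purely illustrative) of such a probabilistic decision rule. Because the policy samples its action at random, no finite amount of testing can guarantee that it will always choose the compliant branch:

import random

# A hypothetical stochastic policy: the robot picks an action according
# to learned probabilities rather than by a fixed recursive rule.
def stochastic_policy(actions, weights):
    # Sample one action; compliance is merely probable, never guaranteed.
    return random.choices(actions, weights=weights, k=1)[0]

actions = ["obey_order", "refuse_order"]
weights = [0.999, 0.001]  # even a tiny refusal probability voids any guarantee

outcomes = {a: 0 for a in actions}
for _ in range(100_000):
    outcomes[stochastic_policy(actions, weights)] += 1
print(outcomes)  # e.g. {'obey_order': 99897, 'refuse_order': 103}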
This article deals with some commonsense, basic problems immediately discernible upon close inspection of the Laws. The next article in this series will analyse the Laws from a few vantage points: philosophy, artificial intelligence and some systems theories.
An immediate question springs to mind: HOW will a robot identify a human being? Surely, in a future of perfect androids, constructed of organic materials, no superficial, outer scanning will suffice. Structure and composition will not be sufficient factors of differentiation.
There are two ways to settle this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test; the other is to somehow "barcode" all the robots by implanting a signalling device inside them. Both present additional difficulties.
The second solution will prevent the robot from positively identifying humans. It will surely be able to identify robots - and only robots.
This is ignoring, for discussion's sake, defects in manufacturing or the loss of the implanted identification tags. (Should a robot get rid of its tag, it will presumably be classified as a "defect in manufacturing".) The robot will be forced to make a binary selection: it will classify one type of physical entity as robots, and all the others it will group into "non-robots". Will non-robots include monkeys and parrots?
Yes, unless the manufacturers equip the robots with digital or optical or molecular equivalents of the human figure (masculine and feminine) in varying positions (standing, sitting, lying down). But this is a cumbersome solution, and not a very effective one: there will always be the odd position which the robot will find hard to locate in its library. A human discus thrower or swimmer may easily be passed over as "non-human" by a robot. So will certain types of amputees.
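A minimal sketch of the tag-based scheme (in Python; every name here is hypothetical) makes the flaw concrete - the only positive identification available is "robot", and everything else collapses into one undifferentiated class:

# Hypothetical tag-based identification: assumes every robot carries an
# implanted, scannable ID tag. Names and signatures are illustrative only.
class Entity:
    def __init__(self, tag=None):
        self.tag = tag

def scan_for_tag(entity):
    # Return the entity's tag if present and readable, else None.
    return entity.tag

def classify(entity):
    if scan_for_tag(entity) is not None:
        return "robot"
    # Humans, monkeys, parrots, and robots with lost tags all land here.
    return "non-robot"

print(classify(Entity(tag="RBT-001")))  # robot
print(classify(Entity()))               # non-robot: a human? a parrot? a defective robot?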
The first solution is even more seriously flawed. It is possible to design a test which the robot will apply to distinguish a robot from a human. But it will have to be non-intrusive, and devoid of communication or with very limited communication. The alternative is a prolonged teletype session, with the human concealed behind a curtain, after which the robot will issue its verdict: the respondent is a human or a robot. This is ridiculous. Moreover, the application of such a test will make the robot human in many important respects. A human knows other humans for what they are because he is human. A robot will have to be human to recognize another - it takes one to know one, as the saying (rightly) goes.
Let us assume that by some miraculous way the problem is overcome and robots unfailingly identify humans. The next question pertains to the notion of "injury" (still in the First Law). Is it limited only to physical injury (the disturbance of the physical continuity of human tissues or of the normal functioning of the human body)? Should it encompass the no less serious mental, verbal and social injuries (after all, they are all known to have physical side effects which are, at times, no less severe than direct physical "injuries")? Is an insult an injury? What about being grossly impolite, or psychologically abusive? Or offending religious sensitivities, being politically incorrect - are these injuries?
The bulk of human (and, therefore, inhuman) actions actually offend one human being or another, have the potential to do so, or seem to be doing so. Consider surgery, driving a car, or investing money in the stock exchange. These "innocuous" acts may end in a coma, an accident, or a stock exchange crash, respectively. Should a robot refuse to obey human instructions which embody a potential to injure the instruction-givers? Consider a mountain climber - should a robot refuse to hand him his equipment lest he fall off the mountain in an unsuccessful bid to reach the peak? Should a robot abstain from obeying human commands pertaining to crossing busy roads or driving sports cars? Which level of risk should trigger the refusal program? At which stage of a collaboration should it be activated? Should a robot refuse to bring a stool to a person who intends to commit suicide by hanging himself (that's an easy one)?
Should it ignore an instruction to push someone off a cliff (definitely), help him climb the cliff (less assuredly so), get to the cliff (maybe so), or get to his car in order to drive him to the cliff... Where do the responsibility and obedience bucks stop?
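Read as an engineering problem, these questions all reduce to one missing parameter: the risk threshold at which refusal kicks in, which the Laws nowhere supply. A minimal sketch (in Python; the risk estimates and the cutoff are arbitrary assumptions of mine):

# Hypothetical refusal rule: the scalar "risk of injury" and the threshold
# are assumptions the Laws themselves never provide.
RISK_THRESHOLD = 0.05  # why 0.05? The Laws give no principled answer.

def estimate_injury_risk(instruction):
    # Stand-in for a (highly non-trivial) model of injury probability.
    risk_table = {
        "hand over climbing gear": 0.03,
        "drive me to the cliff": 0.10,
        "bring me a stool": 0.90,  # context-dependent: suicide, or a high shelf?
    }
    return risk_table.get(instruction, 0.0)

def should_refuse(instruction):
    return estimate_injury_risk(instruction) > RISK_THRESHOLD

for order in ["hand over climbing gear", "drive me to the cliff", "bring me a stool"]:
    print(order, "->", "refuse" if should_refuse(order) else "obey")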
Whatever the answer, one thing is clear: such a robot must be equipped with more than a rudimentary sense of judgement, with the ability to appraise and analyse complex situations, to predict the future and to base its decisions on very fuzzy algorithms (no programmer can foresee all possible circumstances). To me, such a "robot" sounds much more dangerous than any recursive automaton which does NOT include the famous Three Laws.
Moreover, what, exactly, constitutes "inaction"? How can we set apart inaction from a failed action or, worse, from an action which failed by design, intentionally? If a human is in danger, and the robot tries to save him and fails - how will we be able to determine to what extent it exerted itself and did everything it could?
How much of the responsibility for an inaction, a partial action, or a failed action should be attributed to the manufacturer - and how much imputed to the robot itself? When a robot finally decides to ignore its own programming - how are we to gain information regarding this momentous event? Outward appearances can hardly be expected to help us distinguish a rebellious robot from a lackadaisical one.
The situation gets much more complicated when we consider states of conflict. Imagine that a robot is obliged to hurt one human in order to prevent him from hurting another. The Laws are absolutely inadequate in this case. The robot should either establish an empirical hierarchy of injuries - or an empirical hierarchy of humans. Should we, as humans, rely on robots or on their manufacturers (however wise, moral and compassionate) to make this selection for us? Should we abide by their judgement - which injury is the more serious and warrants an intervention?
A summary of the Asimov Laws would give us the following "truth table":
A robot must obey human orders, with the following two exceptions:
a) that obeying them will cause injury to a human through an action; or
b) that obeying them will let a human be injured.
A robot must protect its own existence, with three exceptions:
a) that such protection will be injurious to a human;
b) that such protection entails inaction in the face of potential injury to a human;
c) that such protection will bring about robot insubordination (not obeying human instructions).
Here is an exercise:
Imagine a situation (consider the example below, or one you make up) and then create a truth table based on these five conditions. In such a truth table, "T" stands for compliance with a rule and "F" for non-compliance. There is no better way to demonstrate the problematic nature of Asimov's idealized and highly impractical world. (A code sketch of such a table follows the example below.)
An example of a situation:
A radioactivity-monitoring robot malfunctions. If it self-destructs, its human operator might be injured. If it does not, its malfunction will EQUALLY SERIOUSLY injure a patient who depends on its performance.
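Here is a minimal sketch of that exercise in Python (the encoding of the scenario into consequences is my own assumption, and for brevity it checks the three Laws directly rather than all five exception clauses). "T" marks compliance, "F" non-compliance; neither available action comes out First-Law compliant:

# Hypothetical encoding of the monitoring-robot dilemma.
SCENARIO = {
    "self-destruct": {
        "injures_human_by_action": True,     # the operator might be hurt
        "allows_injury_by_inaction": False,  # the patient is saved
        "preserves_robot": False,
    },
    "keep operating": {
        "injures_human_by_action": False,
        "allows_injury_by_inaction": True,   # the patient is hurt
        "preserves_robot": True,
    },
}

def evaluate(consequences):
    first_law = (not consequences["injures_human_by_action"]
                 and not consequences["allows_injury_by_inaction"])
    second_law = True  # no explicit order is in force here (an assumption)
    third_law = consequences["preserves_robot"]
    return {"First": first_law, "Second": second_law, "Third": third_law}

for action, consequences in SCENARIO.items():
    row = evaluate(consequences)
    flags = "  ".join(f"{law}: {'T' if ok else 'F'}" for law, ok in row.items())
    print(f"{action:15s} {flags}")
# self-destruct   First: F  Second: T  Third: F
# keep operating  First: F  Second: T  Third: T
# Neither row yields First: T - the Laws offer no compliant choice.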
One of the possible solutions is, of course, to introduce gradations - a probability calculus, or a utility calculus. As phrased by Asimov, the rules and conditions are of a threshold, yes-or-no, take-it-or-leave-it nature. But if the robots were instructed to maximize overall utility, many borderline cases would be resolved. Still, even the introduction of heuristics, probability, and utility would not resolve the above dilemma. Life is about inventing new rules on the fly, as we go, as we encounter new challenges in a kaleidoscopically metamorphosing world. Robots with rigid instruction sets are ill suited to cope with that.
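That gradated alternative can be sketched as well (in Python; the utility function, probabilities, and harm scores are my own illustrative assumptions, not Asimov's). It resolves an ordinary borderline case cleanly, yet on the monitoring-robot dilemma, where the harms are stipulated to be equal, the calculus simply ties:

# Hypothetical expected-harm rule: choose the action minimizing expected harm.
def expected_harm(outcomes):
    # outcomes: list of (probability, harm) pairs for one action
    return sum(p * harm for p, harm in outcomes)

ordinary = {
    "hand over climbing gear": [(0.97, 0.0), (0.03, 8.0)],  # small chance of a fall
    "refuse the climber":      [(1.00, 1.0)],               # certain, mild harm
}
print(min(ordinary, key=lambda a: expected_harm(ordinary[a])))  # hand over climbing gear

dilemma = {
    "self-destruct":  [(1.0, 5.0)],  # operator injured
    "keep operating": [(1.0, 5.0)],  # patient injured, EQUALLY seriously
}
print({a: expected_harm(o) for a, o in dilemma.items()})
# equal expected harm on both sides: the calculus cannot choose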
Sam Vaknin is the author of Malignant Self Love - Narcissism Revisited and After the Rain - How the West Lost the East. He is a columnist for Central Europe Review, United Press International (UPI) and eBookWeb and the editor of mental health and Central East Europe categories in The Open Directory, Suite101 and searcheurope.com.
Visit Sam's Web site at http://samvak.tripod.com