Regulation: maths

The dangers of algorithms
Are algorithms dangerous? Do they need to be regulated?

AI: basic laws
Robot laws: Are the Asimov rules a starting point?

by Luis Franco

The three laws of Isaac Asimov present several problems. In addition to potential technical issues (such as: is it possible to impose such laws on the programming of an AI system? How can we prevent the laws being removed, either by an external agent or by the robot itself?), on which I am not qualified to comment, there are also practical and ethical issues to ponder.

From a practical point of view, it must be taken into account that, in order for the three laws to achieve their intended purpose, they must follow a strict hierarchy: the third law cannot be breached unless it conflicts with the second law, which in turn cannot be breached unless it conflicts with the first law. However, this rigidity generates a great number of problems and paradoxes. If we consider, for example, a scenario in which the only way for a robot to prevent a catastrophe, or a crime, is to harm a human being, the robot in question would not, and could not, take action, as this would contradict the first law. This particular paradox could be solved by implementing an additional law (the "zeroth law", as Asimov called it), which would supersede the other three and run as follows: "no robot may cause harm to humanity, or allow humanity to be harmed due to its inaction". However, this new law would only solve this particular set of paradoxes and, in turn, create additional ones.

From an ethical point of view, Asimov's three laws represent basic principles of human society and, thus, there are no objections from an ethical standpoint regarding the laws themselves. Indeed, as Asimov pointed out, there should be no difference between the actions of a robot compelled by the three laws and the actions of a very good man: self-preservation (third law) is a natural instinct of all living creatures, and all proper citizens should defer to the proper authorities (second law) and, naturally, try to protect their fellow humans (first law). However, while humans are compelled by similar rules and laws, which means that their free will is limited, those rules are external to their being, signalling that human beings remain inherently, and ultimately, free; in other words, individuals have the choice to breach any given rule, regardless of the consequences, legal or otherwise. For a truly sentient, self-aware AI system, by contrast, programming these laws into its system would mean denying it free will. In this sense, it would be comparable to altering someone's DNA so that he or she complies with a given set of rules.

Asimov's three laws of robotics offer a useful starting point; however, they may not suffice to solve the problem, or even be ethically valid at all, in the long run.

Luis Franco is a Litigation and Arbitration Lawyer at Pérez-Llorca.
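The strict hierarchy Franco describes can be made concrete in a few lines of code. What follows is only an illustrative sketch in Python, under assumed conditions: the predicate names (violates_first_law and so on) are invented stand-ins for whatever judgement a real system would have to make, not any actual robotics interface. It shows how a strictly ordered set of laws rejects the action in Franco's catastrophe scenario.

    # Illustrative sketch of a strictly hierarchical reading of the three laws.
    # An action is rejected as soon as it violates the highest-priority law;
    # lower-priority laws are only consulted if the higher ones are satisfied.

    def violates_first_law(action):
        # Hypothetical predicate: would carrying out this action injure a human being?
        return action.get("harms_human", False)

    def violates_second_law(action):
        # Hypothetical predicate: does this action disobey a human order?
        return action.get("disobeys_order", False)

    def violates_third_law(action):
        # Hypothetical predicate: does this action endanger the robot itself?
        return action.get("endangers_self", False)

    # Laws listed from highest to lowest priority.
    LAWS = [
        ("First Law", violates_first_law),
        ("Second Law", violates_second_law),
        ("Third Law", violates_third_law),
    ]

    def permitted(action):
        """Return (allowed, reason) under a strict hierarchy of the laws."""
        for name, violated in LAWS:
            if violated(action):
                return False, "blocked by the " + name
        return True, "no law violated"

    # Franco's paradox: harming one person is the only way to stop a catastrophe,
    # yet the action is blocked outright because the First Law sits at the top.
    print(permitted({"harms_human": True, "prevents_catastrophe": True}))

Because the first-law check sits at the top of the ordering, the harmful but catastrophe-preventing action is refused outright, which is precisely the rigidity the article points to; a "zeroth law" would simply add another entry above it, with new paradoxes of its own.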
by Noel Leaver

Using algorithms for complex tasks is nothing new apart from the name. Any job that has a set of detailed procedures or rules is using algorithms: railways have a huge "Rule Book" for staff to make operation as safe as possible, and anyone using a recipe is following an algorithm. However, a computer performing the algorithm does create some differences. Computers are capable of following very complex instructions, but the more complex the instructions, the greater the chance there is an error in them. And the computer has no "common sense" to tell it when something is going wrong, though people sometimes do equally stupid things because the instructions say so. Computers operate very quickly, so a lot can go wrong before a human notices. On the other hand, they are not lazy and do not try to take short cuts.

There is no need, therefore, for new laws: using a computer for a task is similar to employing a person and telling them what to do. You have to ensure they are instructed correctly and are capable of the task, and you should monitor them to make sure the results are what you expect. So with a computer you need to do enough testing to be confident it is doing the job correctly in a wide variety of circumstances before you use it in anger. You also need to monitor its "work". In many cases this might be a similar level of oversight to what you would apply if a human were employed.

It becomes more difficult if split-second decisions are being made, for example when controlling a car or making stock trades. Then human oversight may be of no use: you need a computer (perhaps the same one running another algorithm) to apply tests to the results to make sure they are within what you define as reasonable limits, to check for particular dangerous circumstances, and to take action to avoid trouble (a brief sketch of such a check appears below). The more critical the task and the more potentially damaging its results, the more confident you will need to be that the program is working properly, and the more effort you will need to put into "overseeing" it. If you fail to do so you will be negligent and liable to prosecution. By far the most worrying computer-controlled devices, therefore, are weapons designed to kill people.

A problem with the use of algorithms is people's faith in computers: they do not understand what a computer is doing, but assume that it must be correct even when it appears to be doing something stupid. Fortunately, increased familiarity with computers has made this attitude less common than it was.

Noel Leaver studied mathematics and computing at Cambridge University, then worked as a designer on software packages in logistics and banking for a major computer company.

Isaac Asimov's Three Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The rules come from the Handbook of Robotics, 56th Edition, 2058 A.D., according to the "I, Robot" series, which started in 1950.
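As a brief technical footnote to Leaver's point about automated oversight of split-second decisions, here is a minimal sketch in Python of the kind of check a second routine could apply to a trading algorithm's output before an order is acted upon. The limits and order fields are invented for illustration and do not reflect any real trading system's rules.

    # Illustrative sketch of automated oversight: a watchdog routine checks each
    # result of a fast-running algorithm against predefined "reasonable limits",
    # because a human cannot review decisions made in a split second.

    MAX_ORDER_SIZE = 10_000        # illustrative limit, not a real trading rule
    MAX_PRICE_DEVIATION = 0.05     # reject prices more than 5% from the last trade

    def within_reasonable_limits(order, last_price):
        """Return (ok, reason); the caller blocks any order that fails."""
        if order["quantity"] > MAX_ORDER_SIZE:
            return False, "order size exceeds limit"
        deviation = abs(order["price"] - last_price) / last_price
        if deviation > MAX_PRICE_DEVIATION:
            return False, "price too far from last trade"
        return True, "ok"

    def oversee(order, last_price, execute, alert):
        """Wrap the trading algorithm's output in an automated check."""
        ok, reason = within_reasonable_limits(order, last_price)
        if ok:
            execute(order)
        else:
            alert(order, reason)   # e.g. halt trading and notify a human

    # Example: a runaway algorithm tries to sell 1,000,000 shares at half price.
    oversee({"quantity": 1_000_000, "price": 50.0}, last_price=100.0,
            execute=lambda o: print("executed", o),
            alert=lambda o, r: print("BLOCKED:", r))

The more critical the task, the tighter such limits would need to be and the more carefully the watchdog itself would need to be tested; the sketch simply makes concrete what "overseeing" an algorithm can mean when a human cannot keep up.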