
Artificial Intelligence: why there is a pressing need for adequate national (and international) legislation

Apr 28, 2017

By Salvatore Scevola

Introduction

Technology has assisted humanity from the development of the first stone tools, which helped our early ancestors better skin their prey, right through to our ability to observe our world and the wider universe from the International Space Station. Every advancement has been driven by our need to find new ways to use tools and machines that assist humanity, not replace it.

This paper will unpack some of the major ethical/legal concerns many have expressed about the advancements in technology that have propelled humanity, particularly robotics with artificial intelligence (AI), down a path [it seems] seeking to do precisely that: replacing humanity. I will rebut and critique a number of successful [young] tech-savvy entrepreneurs who are trying to convince the public:

“Policymakers should stop sticking their heads in the sand and ignoring the fact that "there are going to be a massive amount of jobs destroyed" from the digital revolution, says Mike Cannon-Brookes, co-founder of Australia's most successful tech company, Atlassian.”[1]

Cannon-Brookes, Elon Musk, even Bill Gates speak the same language: technology, technology, technology, perhaps something about tax, and virtually zero discussion of the ultimate driver of this ‘technology’: profit. Profit is only spoken about in reference to the fact that these people have virtually become ‘billionaires overnight’, and this is supposedly something for young people to ‘emulate’. Never mind that we have not yet fully understood human intelligence; apparently we ought instead to be worrying about the millions of jobs that will be lost ‘to robots’, if one believes their [empty] rhetoric. Another missing factor is ‘legal responsibility’: who should ultimately carry the liability for errors and failures, even fatal ones?

I for one am not convinced of the need for, or emergency attached to, this pace of development. I will address part of my own views about the future in my conclusion, along with the pressing need for law and regulation to keep up with this pace rather than merely react, as happened with the Human Genome Project and cloning, where the ethical/legal questions were only addressed some 18 years after the technology was first developed.[2] The time is right to ensure that these designers, computer programmers, entrepreneurs and governments know the firm boundaries within which they must operate. I find myself in total agreement with a recent paper on the subject, which correctly identified:

“There is an increasing need for norms to be embedded in technology as the widespread deployment of applications such as autonomous driving, warfare and big data analysis for crime fighting and counter-terrorism becomes ever closer.”[3]

I posit that we will need to develop a legal framework similar to that of international air travel (the Warsaw, now Montreal, Conventions),[4] enshrined in our own domestic law,[5] that will compensate people and families caught up in safety failures of robotic AI that have fatal consequences. I also think such a framework should impose criminal sanctions for the negligent human conduct that causes them.

Robots, their history and AI

While the debate about exactly what constitutes a ‘robot’ remains unsettled, robotics and artificial intelligence are not new. Robots have worked well in and for society in areas as diverse as vending machines all the way through to autopilots. I will trace the development and use of the autopilot and contrast it with the most tested AI in the private travel sphere today, namely self-driving cars.

A recent paper on ‘Law and Regulation of Emerging Robotics and Automation Technologies’ noted that what we now invoke as the ‘laws of robotics’ has its genesis in none other than science fiction: Asimov’s Three Laws of Robotics (1940):

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings, except when such orders conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[6]

Although Asimov sought to imagine possible futures that contained reasonable and beneficial robots, many of his stories demonstrate how the rules lead to complications and contradictions – including ‘Runaround’, the very story in which he introduced these laws. As Arthur C Clarke noted, Asimov wrote many stories demonstrating that these rules are insufficient, and that at least one more important law should be introduced: the Zeroth Law.

Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.[7]
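The four laws form a strict priority ordering: a lower law yields whenever a higher one is engaged. Purely as an illustration (the scenario, field names and scoring scheme below are my own invention, not anything from Asimov or from robotics practice), the hierarchy can be sketched as a lexicographic preference over candidate actions:

```python
# Illustrative sketch only: Asimov's laws as a lexicographic preference.
# Each candidate action is scored as a tuple of law violations, ordered
# Zeroth > First > Second > Third. Python's tuple comparison then ranks
# actions so a lower-priority violation (e.g. self-destruction) is always
# preferred over a higher-priority one (e.g. harming a human).

def law_violations(action):
    """Score an action: True in a slot means that law is violated."""
    return (
        action.get("harms_humanity", False),   # Zeroth Law
        action.get("harms_human", False),      # First Law
        not action.get("obeys_order", True),   # Second Law
        action.get("destroys_self", False),    # Third Law
    )

def choose(actions):
    """Pick the candidate that violates only the lowest-priority laws."""
    return min(actions, key=law_violations)

# The robot is ordered into a situation where obeying would harm a human:
candidates = [
    {"name": "obey",      "obeys_order": True,  "harms_human": True},
    {"name": "refuse",    "obeys_order": False, "harms_human": False},
    {"name": "sacrifice", "obeys_order": True,  "destroys_self": True},
]
print(choose(candidates)["name"])  # "sacrifice": obeys, harms no human
```

Even this toy model reproduces Asimov's point that self-preservation (the Third Law) must yield: the robot prefers destroying itself over either disobeying or causing harm. What the model cannot capture – which is precisely the problem his stories dramatize – is how a real system would ever reliably determine the truth of predicates like "harms a human".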

The autopilot system, developed in the early 1900s,[8] is used to control the direction and altitude of an aircraft without constant ‘hands-on’ control by a human operator. Although it requires human oversight, it assists pilots in controlling the aircraft, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems. When it was first designed and used, it was hailed as a “remarkable gyro-electric mechanism which holds aircraft … on its course for three hours without human aid”.[9]

Nowadays the autopilot system is much more sophisticated: stability augmentation systems, computer software to control the aircraft, GPS, instrument-aided landings – the list goes on. But it retains one essential element: a human (in fact two people on almost all commercial flights) who is ultimately in control and able to disarm and override the autopilot system at any time.

This can be contrasted with the road testing under way (predominantly in the US) with self-driving cars. These cars are being developed in such a way that there is no human control, and hence no adequate system to override the AI. The sixty-eighth session of the UN Economic Commission for Europe Inland Transport Committee Working Party on Road Traffic Safety (Geneva, 24-26 March 2014) resolved that all ‘vehicle systems which influence the way vehicles are driven’ must conform with paragraph 5 of that Article and with paragraph 1 of Article 13, specifically by being able to be “overridden or switched off by the driver”:

“Vehicle systems which influence the way vehicles are driven and are not in conformity with the aforementioned conditions of construction, fitting and utilization, shall be deemed to be in conformity with paragraph 5 of this Article and with paragraph 1 of Article 13, when such systems can be overridden or switched off by the driver.”[10]
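In engineering terms, the requirement that automation can be “overridden or switched off by the driver” demands an unconditional disengage path: no mode or state of the system may be allowed to ignore driver input. A minimal sketch of that idea (the class, threshold and field names are my own illustration, not any real vehicle's control code):

```python
# Illustrative sketch: an automated steering controller with the kind of
# unconditional driver-override path the UNECE provision demands.
# The torque threshold and all names here are invented for illustration.

OVERRIDE_TORQUE_NM = 3.0  # driver input above this disengages automation

class DrivingAutomation:
    def __init__(self):
        self.engaged = True

    def switch_off(self):
        """Explicit driver switch-off: always honoured, no preconditions."""
        self.engaged = False

    def steering_command(self, planned_angle, driver_torque):
        """Return the steering angle applied this control cycle."""
        # Any significant driver input on the wheel overrides the automation.
        if abs(driver_torque) > OVERRIDE_TORQUE_NM:
            self.engaged = False
        if not self.engaged:
            return None  # hand full authority back to the driver
        return planned_angle

auto = DrivingAutomation()
print(auto.steering_command(planned_angle=2.0, driver_torque=0.5))  # 2.0
print(auto.steering_command(planned_angle=2.0, driver_torque=5.0))  # None
print(auto.engaged)  # False: automation stays disengaged afterwards
```

The design point is that the override check sits before, and independent of, every automated decision: the machine cannot reason its way around it. A system designed with no such path – as the author argues some self-driving trials are – simply cannot satisfy the provision.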

This important, nuanced provision of the regulation stands in stark contrast with the current US trials of self-driving cars. One of the most recent US reviews of the topic concludes that it is only ‘probable’ that self-driving cars are ‘legal’:

“The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal. The long answer, contained in the article, provides a foundation for tailoring regulations and understanding liability issues related to these vehicles.

The article's largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.”[11]

This is clearly an unacceptable situation considering the damage that can be done when working off the premise that everything is permitted unless prohibited. It leads me into the next section on the issues of safety and errors.

Safety and Errors

Firstly, let us never forget that all this robot technology is created by computer scientists who are themselves ‘fallible human beings’.[12] Within the millions of lines of code that teams of programmers have written to allow the most sophisticated of these robots to work almost autonomously, vulnerabilities and errors may (and almost always do) exist. If not, how else could ‘hackers’ penetrate and disable some of the most widely used hardware and software in society?

“YAHOO has confirmed a massive data breach first reported last month, with 500 MILLION users account exposed to hackers”[13]

When even the slightest flaw in machinery or software could result in fatal errors, we must not be complacent. In August 2010 a US military helicopter drone on a test flight ‘wandered’ for more than 30 minutes, veering 23 miles off course into Washington DC restricted airspace and causing deep concern within government circles, as this is supposed to be some of the most restricted airspace in the world.[14]

In another, more devastating example, in October 2007 a semi-autonomous robotic cannon being used by the South African Army malfunctioned, killing 9 soldiers and wounding a further 14.[15] The most disconcerting aspect of this carnage is the most pressing question of all: robots and the military. Will these programmers ever – indeed, can they ever – create software capable of discerning combatants from non-combatants?

The next question that arises is: who is ultimately responsible for such failings and carnage? The programmers? The builders of the robots? The chain of command that authorized their use? Herein lie the legal questions of culpability and liability. These issues, in my view, have not been adequately thought out or defined.

Harvard Law School is just starting to grapple with the laws or regulations that could play a part here. Robotic and automation technologies appear poised, in the near-to-medium term, to enter at least some important ordinary human social settings with little or no regulation. Self-driving cars joining human-driven cars on the roadways are one example that, in my view, has not been adequately thought through. These emerging technologies can be loosely characterized as robotic machines possessing (greater or lesser) artificial intelligence to enable automated-to-autonomous decision-making by the machine:

“"embodiment" in the human social world, particularly some capability of physical movement and/or mobility; and sensor technologies to provide input to the mechanism to allow it to situate itself in the social world."[16]

These are just some of the issues, and the public rightly has good reason to distrust assurances that all will be well in the ‘world of the future’. A 2010 op-ed in The New York Times, concerning risks with the Toyota Highlander, argues that the irrational public response to major accidents distorts the real risks at hand. Not only is it cold and utilitarian to focus on ‘the real risks’ and dismiss individual (personal) tragedies; the authors posit that the sheer frequency of automobile accidents (in contrast to, say, airplane accidents) could see marked improvement if we focused on improving “the relevant risks while being mindful of safety – and not paranoid”.[17]

It is difficult to see how the Zeroth Law is being adhered to if we allow ‘some’ incidents simply to fall by the wayside. It is people who currently do much of the ‘work’ that these futurists believe ‘machines’ could be doing better. I for one am not convinced, and this flows into my next section.

3D Printing and AI ‘Replacing Jobs/People’?

I recently looked into a Facebook post about a video of a ‘3D printer’ supposedly able to “build a three bedroom house in a matter of days with a little over $10,000 in cost”. I watched the video very carefully: while the walls were being ‘laid’ (for want of a better word) by an automated boom arm pouring continuous concrete, many other tasks still involved considerable human input, such as the plumbing, the steel reinforcement shell, windows, doors, the roof and all the painting and decoration. While futurists would celebrate this as ‘an amazing advance’ in technology and AI, I cannot help seeing the flaws in this hype.

I am not against technology and the advancement of the human race per se; I, like most, am in favor of advancing us as a ‘species’ and of better managing our own ‘eco-system’ for the prosperity of humanity. But we must not do this by creating robots with AI capabilities that ultimately seek to ‘replicate and replace’ the human race. If we are not careful, and do not ask the deep ethical questions surrounding this technology, we could be acting at our peril.

We would be allowing people to create something either ‘human-like’ or ‘humanoid’, with all the attendant realities that may emanate from such an experiment. We may end up in a situation where the science fiction of ‘The Terminator’ becomes science fact. It is bad enough that we are defining this phenomenon in terms that emanate from science fiction itself; we should not be so unintelligent as to allow it to proliferate without adequate oversight and regulation.

What is needed right now are laws and regulations that render unlawful the development of any such technology capable of harming humans. At present there are very few prohibitions on the various robot prototypes, particularly in the military complex, that are currently being tested and which could indeed harm and kill.

A very recent article, “FEAR NOT: The optimist’s guide to the robot apocalypse”,[18] deals with many of the pros and cons of the subject and finds arguments for further change. The article cites past examples going back as far as Elizabeth I, who, out of concern for laborers, refused to register a ‘knitting patent’ for fear of the job losses that could ensue. Such people are dismissed as paranoid about what changes in AI could do.

I am not so much paranoid as annoyed that nothing is being done to ensure that industry and government work within defined, privacy-respecting, lawful lines of engagement. It is hard to find breaches when there are no defined boundaries.

In scientific computing and in realistic graphic animation, simulation – that is, step-by-step calculation of the complete trajectory of a physical system – is one of the most common and important modes of calculation. A recent paper traces the scope and limits of the use of simulation with respect to AI tasks that involve high-level physical reasoning. In most cases, simulation can play at best a limited role. Simulation is most effective when the task is prediction, when complete information is available, when a reasonably high-quality theory is available, and when the range of scales involved, both temporal and spatial, is not extreme. When these conditions do not hold, simulation is less effective or entirely inappropriate. The paper discusses twelve features of physical reasoning problems that pose challenges for simulation-based reasoning.[19] Yet, it goes without saying, for all that this technology purports to do, it cannot and will never mimic the most basic skill of a baby learning to walk.
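To make concrete what “step-by-step calculation of the complete trajectory” means, here is a deliberately tiny example (my own toy, not taken from the cited paper): a ball dropped from 80 m, with the state (height, velocity) advanced in small fixed time steps. It illustrates why simulation excels at prediction under complete information, and also its sensitivity to the step size and to the fidelity of the model (here gravity only, no air resistance):

```python
# Toy step-by-step simulation: a ball dropped from 80 m under gravity.
# Each step advances the complete state (height, velocity) by DT seconds.
# Constants and the scenario are illustrative only.

G = 9.81   # gravitational acceleration, m/s^2
DT = 0.01  # time step, s

def simulate_fall(height_m):
    """Advance the state until the ball reaches the ground; return the time."""
    h, v, t = height_m, 0.0, 0.0
    while h > 0.0:
        v += G * DT   # update velocity from acceleration
        h -= v * DT   # update height from velocity
        t += DT
    return t

t = simulate_fall(80.0)
# The analytic answer is sqrt(2*80/9.81) ≈ 4.04 s; the simulation agrees
# to within the step size.
print(round(t, 2))
```

Shrinking `DT` improves accuracy at the cost of more steps, which is exactly the temporal-scale trade-off the cited paper identifies: when relevant time scales differ by many orders of magnitude, no single step size is adequate.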

Although the original vision for artificial intelligence was the simulation of (implicitly human) intelligence, research has gradually shifted to autonomous systems that compete with people. A recent paper proposes a constructive alternative: the development of collaborative intelligence[20] which I think has some merit.

Robotic nurses: in many technologically advanced societies, people are not only living longer but also having fewer children. This trend has led to a disproportionately large growth rate of the elderly population relative to the labor force. Since many people are living to old age and not enough children are born to make up the difference, there are fewer and fewer resources to take care of older generations. The futurists’ answer to an ageing population, and to the question of who should care for them? Robots.[21]

Personally, I reject such an automatic preference for robots. Why is this a first-world problem with only a first-world solution? Why can’t the richer nations take a greater proportion of people from very poor countries under humanitarian programs to look after these ageing populations? Why is it necessary to mine and deplete more fossil fuels producing these robots when there are humans ready, willing and able to perform these tasks? Again, I think ‘profit’ is the main motivator and nothing more.

Conclusion

Robotics and artificial intelligence hold enormous promise but raise significant ethical and legal concerns, including but not limited to privacy. Robotics and artificial intelligence invade privacy in at least three ways. First, they increase our capacity for surveillance. Second, they introduce new points of access to historically private spaces such as the home. Third, they trigger hardwired social responses that can threaten several of the values privacy protects. Responding to the privacy implications of robotics and artificial intelligence is likely to require a combination of design, law (regulation) and education.

Whilst we live in a highly technology-dependent society, this should never lead us to replace our most basic skill: intuition. Intuition and emotional intelligence are things that artificial intelligence (thankfully) will never be able to replicate, and so humanity should not be ‘conned’ into substituting machines for ourselves.

The Tesla corporation’s Elon Musk is the most famous and outspoken proponent of ‘futurism’, and his views have dominated much of the debate about future energy, intelligence and technology. Whilst these advances are in and of themselves good things for humanity, we must be on guard to prioritize our resources where they best fit society as a whole and serve ‘the common good’. It was robots that proved unable to assist humanity in the most pressing problem affecting the planet today, the Fukushima nuclear meltdown, wherein a “60cm-long Toshiba robot, equipped with a pair of cameras and sensors to gauge radiation levels was left to its fate last month, the plant’s operator, Tokyo Electric Power (Tepco), attempted to play down the failure of yet another reconnaissance mission to determine the exact location and condition of the melted fuel.”[22]

To this end I have advocated in this paper a legal framework that should be adopted to protect the human rights and freedoms that could otherwise be sidelined in a never-ending pursuit of growth and profit. Thankfully, there are movements encouraging more and more people to repair items that we are nowadays encouraged simply to replace. Outfits such as ifixit.com are resurrecting our basic human skill of ‘fixing’ our appliances and technology devices, and they are doing so via social media and video demonstration.

Humans are still discussing and debating moral questions because they are not set in stone. They change and take on new meaning as each advance in human technology propels us beyond the realms of our own world into different situations and circumstances. In other words, morality is continuously evolving; but we must always be in control of what we create, and not vice versa.

POSTSCRIPT

[PROPHETICALLY] since publishing this article, I read a news item in the Sydney Morning Herald of 12/5/2017. It relates to a Qantas Airbus A330-300 (QF72), which is at times completely controlled by computer systems without a human override. It malfunctioned, nearly killing all 303 passengers on board – a horribly scary event.

In another important development, scientists working with AI have joined the likes of Stephen Hawking and Elon Musk in calling for governmental regulation of AI as soon as possible.

REFERENCES

[1] http://www.smh.com.au/business/innovation/dire-warning-for-those-who-drive-cars-for-a-living-20170309-guuk1b.html accessed 9/3/2017

[2] Patrick Lin, Keith Abney and George Bekey, ‘Robot ethics: Mapping the issues for a mechanized world’, Artificial Intelligence.

[3] Bench-Capon, T. & Modgil, S. Artif Intell Law (2017). doi:10.1007/s10506-017-9194-9 & https://link.springer.com/article/10.1007/s10506-017-9194-9#CR82 accessed 9/3/2017

[4] The Montreal Convention is an international agreement which updates laws relating to carriers' liability. It is designed to replace the complicated and outdated ‘Warsaw System’ of carriers' liability.

[5] Aviation Legislation Amendment (Liability and Insurance) Act 2012 (the Act) Air carriers' liability and insurance requirements effective 31 March 2013 include the following: the domestic passenger liability cap and mandatory insurance requirements from $500,000 to $725,000 per passenger under the CACL Act

[6] https://cs.stanford.edu/people/eroberts/cs181/projects/2010-11/ComputersMakingDecisions/regulation/index.html Accessed 9/3/2017

[7] Ibid.

[8] The first aircraft autopilot was developed by Sperry Corporation in 1912. The autopilot connected a gyroscopic heading indicator and attitude indicator to hydraulically operated elevators and rudder. https://en.wikipedia.org/wiki/Autopilot Accessed 5/3/2017

[9] Popular Science Monthly, February 1930, p. 22.

[10] http://www.unece.org/fileadmin/DAM/trans/doc/2014/wp1/ECE-TRANS-WP1-145e.pdf Accessed 11/3/2017

[11] http://cyberlaw.stanford.edu/publications/automated-vehicles-are-probably-legal-united-states Accessed 10/3/2017

[12] Bench-Capon, T. & Modgil, S. Artif Intell Law (2017). doi:10.1007/s10506-017-9194-9 & https://link.springer.com/article/10.1007/s10506-017-9194-9#CR82 page 945 Accessed 9/3/2017

[13] http://www.express.co.uk/life-style/science-technology/713216/Yahoo-Confirm-MASSIVE-Data-Breach-200-Accounts-Hack Accessed 12/3/2017

[14] Elisabeth Bumiller, The New York Times: Navy Drone Violated Washington Airspace (Aug. 25, 2010).

[15] Noah Shachtman, ‘Robot Cannon Kills 9, Wounds 14’, Wired (18 October 2007). http://blog.wired.com/defense/2007/10/robot-cannonki.html Accessed 12/9/2010

[16] https://cyber.harvard.edu/getinvolved/studygroups/robots Accessed 9/3/2017

[17] https://opinionator.blogs.nytimes.com/2010/03/09/toyotas-are-safe-enough/?_r=0 Accessed 10/3/2017

[18] https://qz.com/904285/the-optimists-guide-to-the-robot-apocalypse/ Accessed 8/3/2017

[19] http://www.sciencedirect.com/science/article/pii/S0004370215001794 Accessed 5/3/2017

[20] http://www.sciencedirect.com/science/article/pii/S0004370214001568 Accessed 5/3/2017

[21] https://cs.stanford.edu/people/eroberts/cs181/projects/2010-11/ComputersMakingDecisions/robotic-nurses/index.html Accessed 2/3/2017

[22] https://www.theguardian.com/world/2017/mar/09/fukushima-nuclear-cleanup-falters-six-years-after-tsunami?CMP=fb_gu Accessed 9/3/2017

 
 
 
