
Book Summary: Moral Machines

In this post I will review and discuss some aspects of ‘Moral Machines’ by Wendell Wallach and Colin Allen (2008).  It is a book about the ethics of machines making decisions for us.

Introduction

“Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight.”

Pretty dire!  I can’t think of an example of this actually happening, which surprises me, because it certainly sounds plausible.  The vast range of systems making decisions that could impact humans must surely include interactions that were never anticipated.  Perhaps the safeguards are good enough?

“ethical subroutines”

It is an interesting idea, but what does it mean?  We can’t reasonably insert a generic piece of morality code wherever a system needs to make a decision.

“Many philosophers of technology have warned about humans abdicating responsibility to machines.”

This reminds me of the recent United Airlines situation.  I wasn’t there so I have no idea, but an article pointed to the fact that the business processes had failed.  An algorithm decided that a particular passenger was low enough on the value spectrum, and he was booted.  But, as we know, he refused because he had work to do the next day.  The first failed process was the crew’s fallback of treating the passenger as non-compliant – which, I assume, in these post-9/11 days is a big deal on airlines.  The next failed process was the reaction of the police, who acted as if the passenger were a threat.

I think in this case people did abdicate responsibility to the decision of a machine.  However, everything that flowed from the machine’s decision was a human process; machines do not operate in a vacuum.

“even three simple rules (later four) [Asimov’s] can give rise to many ethical dilemmas”

Algorithms themselves are like this.  The complexity is not in the code itself; it is in what emerges from the thousands of iterations the models make (for example, in the training of a deep neural network) to adjust how connections strengthen or weaken.  Reading the algorithm cannot easily explain why those particular connections were made.

Emergence of complexity is going to be the key difficulty with systems that rely on neural networks for decision-making.
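To make that concrete (this is my own toy illustration, not from the book), here is a minimal sketch of a tiny network trained on XOR.  The update rule is a few lines of plain arithmetic, yet after thousands of iterations the learned weights tell you ‘what’ but not ‘why’ – you cannot read the reason for any particular connection strength off the code.

```python
# Toy illustration only: a tiny two-layer network trained on XOR with
# plain gradient descent.  The code is simple; the trained weights are not
# self-explanatory.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden connections
W2 = rng.normal(size=(8, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20_000):
    # Forward pass: the "connections" are just these weight matrices.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass: nudge each connection to strengthen or weaken slightly.
    err_out = (output - y) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ err_out
    W1 -= lr * X.T @ err_hid

print(np.round(output, 2))  # typically close to [0, 1, 1, 0]
print(W1)  # why these particular numbers?  The code above cannot tell you.
```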

“machine morality has the potential to stimulate new lines of enquiry in ethics”

This reminds me of the AlphaGo story.  The human opponent saw moves that he would not have made, or would not – in isolation – have thought were good moves.  But the outcome was that he learnt some new ways of playing the game, because the machine could search a much larger decision space than any human could ever hope to.

So, by extension, the ethical issues that AI throws up may allow us to think in new ways about ethics in general.  We may stumble across interesting insights about how decisions are made, and how we might avoid our own decision-making errors.

Chapter 1 – Why Machine Morality?

“There is a long tradition in ethics of regarding actions as being more blameworthy than inactions.”

This is quite interesting.  Logically an action or inaction should be treated the same, but we have a bias in how we feel about the decision if a person is active rather than passive in the situation.  This makes me think of loss aversion – we dislike losses more than we like gains, even if the magnitude is the same.  I’m not sure how these ideas are connected exactly – but they seem related!

“Where humans were involved [in dealing with a power blackout], they sometimes compounded the problems because of inadequate information or lack of effective communication.”

This is where machines can shine – they do things quickly, efficiently, and don’t argue or care about personality.  The concern, of course, is to ensure that whatever instructions the machine is following are safe and effective; but once they do act, they are decisive.

“the real potential for harm is most likely to emerge from an unanticipated combination of events.”

Remind anyone of a Flash Crash?!  The connectivity of things that each have decision-making capabilities has an almost infinite complexity.  It is hard to think through the implications of this.  But it is rightly a significant problem.  We almost need supercomputers to model the connectivity of the systems we are creating.  It would be a fascinating experiment to look at how small changes in one part of the system could ripple out through a web of decision-making systems.  If this could be visualised it would be mesmerising.
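As a very crude sketch of that experiment (entirely hypothetical – the node count, threshold, and wiring below are made up for illustration), a ripple simulation could look something like this: a web of simple decision-making systems, each watching a few others, with one small perturbation introduced at the start.

```python
# Hypothetical ripple experiment: 50 interconnected decision-making systems,
# each watching 3 others.  One system changes its decision; how far does it spread?
import random

random.seed(42)
N = 50
THRESHOLD = 1  # how many flipped inputs it takes to flip a system

# Random "who listens to whom" wiring.
inputs = {i: random.sample([j for j in range(N) if j != i], 3) for i in range(N)}

state = {i: 0 for i in range(N)}  # 0 = normal decision, 1 = changed decision
state[0] = 1                      # a small change in one part of the system

for step in range(10):
    state = {
        i: 1 if state[i] == 1 or sum(state[j] for j in inputs[i]) >= THRESHOLD else 0
        for i in range(N)
    }
    print(f"step {step}: {sum(state.values())} of {N} systems affected")
```

With THRESHOLD = 1 the single change tends to sweep through the whole web within a few steps; set it to 2 and the cascade usually dies out immediately.  Even this toy version hints at how sensitive the overall behaviour is to small design choices – and plotting the spread over time would be a start on the visualisation.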


So far the authors are setting the scene – machines are making more, and more complex, decisions; this can be bad; and we need to think about it.

Sure, we do need to think about it – A LOT!  But I think a considerable problem will always be: how do we distill our human values and decisions into a machine format?  There is no question that things will go wrong as systems are connected to each other in ever more complex ways, with incomprehensible consequences.  But how will we feel about it?

Emotion – which is completely removed from the AI equation so far (affective computing notwithstanding) – will probably be a major factor throughout the transition to machines making decisions.

I find it amusing that the more we want machines to automate and augment our existence, the more important the all-too-human elements – emotion, communication, relationships, and storytelling – become.

Perhaps the result of machines taking over so much of our day-to-day drudgery is that we will have the time and energy to start focussing on each other again.  Maybe our industrial age has actually been an aberration.  It artificially occupied our time and energy on meaningless crap just to make a living.  And all the while we neglected ‘the other’.

I recommend that anyone reading this check out the 2017 TED talks.  The whole focus is on humans as a collective.

Chapter 2 – Engineering Morality

‘engineering ethics’ – “over the past twenty-five years [one of the major accomplishments] has been the raising of engineers’ awareness of the way their own values influence the design process and their sensitivity to the values of others during it.”

Ok, so this is good news.  The concern about how to consider the ethics of decision-making algorithms has already been addressed in other mission-critical engineering applications.  If we can engineer a rocket to land on the Moon, or a rover to reliably traverse Mars, then we already have the conceptual tools (if not the programmatic tools) to work through the issues of deep neural networks and their opaque processing.

In the context of a medical information application of an algorithm – “… the practitioners must decide whether or not to follow its recommendations.”

This is interesting because it highlights the issue of agency in the decision-making process.  The algorithm produces a result which the human ‘in the loop’ is still left to act on or not.  Will we reduce the agency of the practitioner and force the human to obey the algorithm, or will the final decision still reside with the human?  If it is the latter – which is the likely case at the moment – then what we are talking about is augmented decision-making.  The last leg of action will remain with a human – whether a GP or a surgeon.  Until an algorithm can decide on an action and then automatically implement that decision, we are still firmly in the realm of the standard code of ethics a practitioner already follows.
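A hypothetical sketch of that augmented decision-making loop (the function names and fields below are my own, not from the book or any real system): the algorithm only recommends, and the last leg of action stays with the practitioner.

```python
# Hypothetical sketch of augmented decision-making: the algorithm recommends,
# but the human 'in the loop' takes the final action.
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    confidence: float

def algorithm_recommend(patient_record: dict) -> Recommendation:
    # Stand-in for the medical information algorithm's output.
    return Recommendation(treatment="treatment A", confidence=0.87)

def practitioner_decides(rec: Recommendation) -> str:
    # The practitioner is free to accept or override the recommendation.
    print(f"Algorithm suggests {rec.treatment} (confidence {rec.confidence:.0%})")
    accept = input("Accept recommendation? [y/n] ").strip().lower().startswith("y")
    return rec.treatment if accept else "practitioner's own plan"

action = practitioner_decides(algorithm_recommend({"id": "patient-123"}))
print(f"Action taken: {action}")
```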

“Designers of autonomous systems who choose to ignore the broader consequences of the decisions made by their machines are implicitly embedding a particular set of values into these systems.”

Indeed.  A reason to consult widely and ensure diverse opinions are incorporated in the discussion before these autonomous systems are brought on-line.

“The additional complexity will make the systems even less accessible to direct human monitoring, and this in turn will require a new level of self-monitoring.”

This is where it gets a bit murky.  Work is ongoing to formally verify neural networks (e.g. just recently – https://arxiv.org/pdf/1705.01320.pdf), but it appears the applications are still limited and not totally satisfactory.

I am pretty sure this will be resolved.  However, I do hope for some visual ways to explain the inner workings of these networks.

Any system that involves lethal force (law enforcement or military applications) is going to be a good case study for the ability to operate safely.