Data | Ethics | Governance

AI ethics is not just about the algorithm

So, I have been busy reading.  There really is a lot to catch up on!

The sociology of ethical machines

One theme I am picking up amongst all the various books, papers and articles is that when it comes to the ethics of decision-making algorithms, the issue doesn’t simply lie with understanding how the model produces its output.  The larger issue is how these algorithms fit within a decision-making pipeline – who produces the data?  Where does the data come from? Who designs and tests the model?  And then – most importantly – who takes action on the output?

I think it is when action is taken that we must ask the most important questions.  It is only once a person uses an algorithm's output to influence a decision that affects another person, or acts solely on that output, that there is a tangible consequence of using the algorithm.  All the other upstream elements have their effects, but it is the action itself that, I think, carries the most fundamental ethical implication.

Related to this, though, is the idea that people tend to assume that computers – and by extension, algorithms – are infallible.  There is a tendency to believe what the system produces without question.

Some other concepts I am exploring: agency, power and data integrity.

Here is a great summary of this broader context of algorithms I am thinking about (from Yeung 2017):

“… social scientists typically use the term [algorithms] as an adjective to describe the sociotechnical assemblage which includes, not just algorithms, but also the computational networks in which they function, the people who design and operate them, the data (and users) on which they act, and the institutions that provide these services, all connected to a broader social endeavour and constituting part of a family of authoritative systems for knowledge production.” (italics in original)

Yep, that pretty much sums it up.  Data exists within an ecosystem.  It passes through many hands and (probably) many systems on its way to a final destination within an algorithm.  This journey raises the question of consent.  I have read about an idea for ‘dynamic consent’ mediated through blockchain.  This is a cool idea.

Giving up control to the machines

In a podcast last night I heard the idea that decision-making algorithms – such as those in self-driving cars and ‘fly-by-wire’ aircraft – have the effect of hiding small errors in our actions, but leave open the possibility of unique and challenging failures when the system cannot handle the situation.  This is a problem when we give up control to a machine that is responsible for our life – or many lives.

Sometimes I wonder if it is really such a big deal to pursue this ethics of ‘big data’ or of decision-making algorithms.  But then I read another article by yet another analyst who has ‘been around for a long time’ and says that ‘this time is different’ – and by different they mean that the change taking place due to machine learning and automation is a big deal!  Just how big remains to be seen; however, the changes are happening and it is pretty fascinating.

More mental health applications

I was alerted to a new app (Sunrise) that was presented at the recent Techcrunch Disrupt event in New York.  It uses natural language processing to determine the context and sentiment of chat dialogue within a mental health app.  The presentation was pretty convincing, but I am a bit sceptical about the ability of the algorithm to understand the context of the chat.  Sentiment is pretty uncontroversial, but context … I was under the impression that still had some time to go.  But hey, if they have cracked it – cool!
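To illustrate why I find sentiment the less controversial claim: even a trivial lexicon-based scorer can produce a sentiment label from word counts alone, yet it has no notion of the surrounding conversation.  A minimal sketch (the word lists and example messages here are my own invention, not anything from the Sunrise app):

```python
# A toy lexicon-based sentiment scorer: counts positive vs negative words.
# It can label sentiment, but clearly cannot understand conversational context.
# Word lists are illustrative only.

POSITIVE = {"good", "great", "better", "happy", "hopeful"}
NEGATIVE = {"bad", "worse", "sad", "hopeless", "tired"}

def sentiment_score(message: str) -> int:
    """Return (# positive words) - (# negative words) for a message."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I feel hopeful and a bit better today"))  # positive score
print(sentiment_score("everything feels hopeless"))              # negative score
```

Real systems use far richer models than this, of course, but the gap is the same in kind: scoring words is easy; knowing what the conversation is actually about is the hard part.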

Website and branding

I now have a logo for The Principled Algorithm.  I’m pretty happy with it, but I am still working out how to implement it.  Not to mention the branding required for ‘serious’ publications.