Data | Ethics | Governance

Article Summary – Discussion around decision-making algorithms (Pt. 2)

2013, aeon.co: Slaves to the algorithm

  • “if self-driving cars and speech-policing systems are going to make hard moral decisions for us, we have a serious stake in knowing exactly how they are programmed to do it”
  • “we need to create a class of ‘algorithmic auditors’ — trusted representatives of the public who can peer into the code to see what kinds of implicit political and ethical judgments are buried there, and report their findings back to us”
  • “such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible”
  • “would it then be acceptable to deny people their freedom on such an algorithmic basis?”
  • “the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they still might be regrettable”
  • “long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views”

2014, Nieman Lab: Interviewing the algorithm: How reporting and reverse engineering could build a beat to understand the code that influences us

  • “the optimal solution isn’t always necessarily the best solution”
  • “how do you integrate algorithms into reporting or storytelling?”
  • “became interested in how other journalists were getting to the core questions of algorithm building”
  • “algorithmic power is about autonomous decision making. What are the atomic units of decisions that algorithms make?”
  • “it’s called automation bias. People tend to trust technology rather than not trust technology”
  • Pasquale told journalists to pay attention as laws about accessing technological information develop
  • “some of our algorithmic systems have become so large, they’re more usefully thought of as resembling organisms rather than machines”
  • “systems are also constantly under A/B testing” … “as a result, on major websites, there can be literally millions of different permutations in use at any given moment”
  • “maybe we should try to expose as much as possible, so that the people involved will build more robust classifiers”
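
The "literally millions of different permutations" claim follows from simple combinatorics: each concurrently running A/B experiment multiplies the number of distinct page variants a visitor might be served, so the count grows exponentially. A minimal sketch with illustrative numbers (the experiment and variant counts are assumptions, not figures from the article):

```python
# Hypothetical example: 20 independent A/B experiments running at once,
# each with 2 variants. A visitor's page is one combination of all of them.
experiments = 20
variants_per_experiment = 2

# Independent experiments multiply: total permutations = k ** n.
permutations = variants_per_experiment ** experiments
print(permutations)  # 1048576 -> already over a million with just 20 tests
```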

2017, ACLU: Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case

  • “one of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people’s lives”
  • “when we asked them how the dollar amounts were arrived at, the Medicaid program came back and said, ‘we can’t tell you that, it’s a trade secret’”
  • “it’s just a blatant due process violation to tell people you’re going to reduce their health care services by $20,000 in a year for some secret reason”
  • “they had to throw out two-thirds of the records they had before they came up with the formula because of data entry errors and data that didn’t make sense”
  • “it’s just this bias we all have for computerized results—we don’t question them”
  • “I don’t think anybody at the Medicaid program really thought about how this was working”
  • “one of the time-honored horrors of bureaucracies: the fragmentation of intelligence that (as I have discussed) allows hundreds or thousands of intelligent, ethical individuals to behave in ways that are collectively stupid and/or unethical”
  • “nobody understands [‘these computerized systems’], they think that somebody else does—but in the end we trust them”
  • “as our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people’s lives”

2017, Columbia Journalism Review: How to report on algorithms even if you’re not a data whiz

  • “algorithms are ripe for journalistic investigation”
  • “in order to reduce the barrier of entry into reporting on algorithms, we created algorithmtips.org, a database of government algorithms that currently provides more than 150 ledes, and methodological resources for getting started”
  • “government doesn’t always write algorithms itself; often it licenses the code from outside contractors. This creates the need for good old-fashioned government accountability reporting”
  • “the public is subject to two black boxes: the first, the algorithm itself; the second, the secrecy of the private companies”

2017, Science: Q&A: Should artificial intelligence be legally required to explain itself?

  • “a commentary published today in Science Robotics discusses regulatory efforts to make AI more transparent, explainable, and accountable”
  • “regulators around the world are discussing and addressing these issues but sometimes they must satisfy competing interests”
  • “the EU is more inclined [than the US] to create hard laws that are enforceable”
  • “Article 22, for example, grants individuals the right to contest a completely automated decision if it has legal or other significant effects on them”

2016, Tektonika: I, algorithm: Can data-driven decision-making lead to dumb results?

  • “data-driven decision-making is … now impacting your chance at scoring a new job, finding a date, or signing a lease on an apartment”
  • “I look for passion and hustle, and there’s no data algorithm that could ever get to the bottom of that. It’s an intuition, gut feel, chemistry”
  • “biometric data collection on employees is one area with hotly contested ethics”
  • “IT managers need to recognize that technology is only as smart as its human creator, and misapplications of artificial intelligence can lead to some really dumb decisions”

2015, The Atlantic: When Discrimination Is Baked Into Algorithms

  • “even a seemingly neutral price model could potentially lead to inadvertent bias—bias that’s hard for consumers to detect and even harder to challenge or prove”
  • “so how will the courts address algorithmic bias?”
  • “what about when big data is used to determine a person’s credit score, ability to get hired, or even the length of a prison sentence?”
  • “expanding disparate impact theory to challenge discriminatory data-mining in court ‘will be difficult technically, difficult legally, and difficult politically'”
  • “some of their fellow organizers [of FAT/ML] are also developing tools they hope companies and government agencies could use to test whether their algorithms yield discriminatory results and to fix them when necessary”
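
One widely used statistical screen for the kind of discriminatory results the FAT/ML organizers' tools target (the article does not name a specific test) is the "four-fifths rule" from US employment-discrimination analysis: a selection rate for one group below 80 percent of the highest group's rate is flagged as possible disparate impact. A minimal sketch with made-up numbers:

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's.
    Values below 0.8 fail the 'four-fifths' screen used in
    US employment-discrimination analysis."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical hiring-algorithm outcomes:
# group A: 30 of 100 applicants selected; group B: 50 of 100.
ratio = disparate_impact_ratio(30, 100, 50, 100)
print(ratio)         # 0.6
print(ratio >= 0.8)  # False -> flagged for review
```

Passing the screen does not prove an algorithm is fair, and failing it does not prove discrimination; it is only a first-pass signal that the underlying model deserves the kind of scrutiny the article describes.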

2017, Futurism: AI Won’t Just Replace Workers. It’ll Also Help Them.

  • “the disconnect between understanding what algorithms do, how they work, and how we should be shepherding their use and our ideas about AI are artificially and unreasonably detached”
  • “38 percent of the respondents predicted that the benefits of algorithms will outweigh the detriments for both individuals and society in general, while 37 percent felt the opposite way”
  • “almost all respondents agreed that algorithms are essentially invisible to the public, and that their influence will increase exponentially over the next decade”
  • “advances in algorithms and big data sets will mean corporations and governments hold all of the cards and set all of the parameters”
  • “potential of access to algorithmically-aided living to deepen already existing cultural and political divides”
  • “many respondents do see the age of the algorithm as the age of mass unemployment”
  • “many respondents advocated for public algorithmic literacy education — the computer literacy of the 21st century — and for a system of accountability for those who create and evolve algorithms”

2017, The AU Review: Nine things we learned at the Marketing To The Machines Vivid Ideas panel; an interesting & frightening look at the future

  • “no denying that algorithms are becoming a pervasive part of our lives”
  • “[the use of algorithms] changes our brain function so we tend to have lower rates of recall for information itself and enhanced recall of where to find information”
  • the piece also lists other takeaways, such as the breadth of what Google knows about users and the usefulness of algorithms, while raising concerns about both

2017, Pacific Standard: Alexander Peysakhovich’s Theory on Artificial Intelligence

  • “he builds tools that help people make better choices, and machines that can turn data into, as he puts it, ‘not just correlations but actual causal relationships'”
  • not too much of note in the article; refer to his page linked above

2017, Fast Co Design: The Web Is Basically One Giant Targeted Ad Now

  • “on the internet today, ads aren’t just part of the content or interface, they are the content and the interface”
  • “that means the computer is essentially making the decision for the person–we’re just clicking the buy button”
  • Meeker points out that it’s not just that the internet has been gamified into one big store that we can’t help but click on, but that we’re actually obsessed with our own identities as players in this gaming environment

2017, Lexology: Robotics in the workplace – from a North American and European perspective

  • “a Buddhist temple in China has welcomed a robot monk into its order in an effort to attract new practitioners”
  • “an Audi plant in Germany recently began using a glove with an embedded barcode scanner with its logistics employees to make work more ergonomic and efficient”
  • “Amazon is developing cashier-free grocery stores using IoT-enabled devices, allowing customers to purchase items using their smartphones”
  • “when introducing a new technology that may impact employment or working conditions in France, employers must consult the Works Council and the Health and Safety Committee”
  • “as companies invest in computers and robotics to modernize their processes, organizations should consider the range of employment and labor laws that may be implicated and how courts and regulators are likely to apply them”

2017, HBR.org: 4 Models for Using AI to Make Decisions

  • “empowering algorithms is now as organizationally important as empowering people”
  • “the most painful board conversations that I hear about machine learning revolve around how much power and authority super-smart software should have”
  • “‘handoffs’ and transitions prove to be significant operational problems” … “the resentment and resistance were palpable”
  • “We are constantly updating the models and learning, adding more data, and tweaking how we’re going to make predictions. It feels like a living, breathing thing. It’s a different kind of engineering”
  • “a culture of cocreation and collaboration becomes the only way to succeed”
  • “the software is treated not as inanimate lines of code but as beings with some sort of measurable and accountable agency”

2014, The Conversation: We must be sure that robot AI will make the right decisions, at least as often as humans do

  • “all possible dangerous situations need to be anticipated and accounted for, or resolved by the robots themselves”
  • “make safe decisions about their next move, and when they are able to satisfy our requests”
  • “not clear today where the responsibility lies: with the manufacturer, with the robot, or with its owner”
  • “still a legal framework to be introduced, something that at the moment is still entirely missing”

2015, Nature: Machine ethics: The robot’s dilemma

  • “real-life roboticists are citing Asimov’s laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance”
  • “society’s acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust”
  • “the principles that emerge are not written into the computer code, so ‘you have no way of knowing why a program could come up with a particular rule telling it something is ethically ‘correct’ or not'”
  • “results suggested that even a minimally ethical robot could be useful” … “but the experiment also showed the limits of minimalism”

2015, Nature: Robotics: Ethics of artificial intelligence

  • “two US Defense Advanced Research Projects Agency (DARPA) programmes foreshadow planned uses of LAWS: Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE)”
  • “almost all states who are party to the CCW agree with the need for ‘meaningful human control’ over the targeting and engagement decisions made by robotic weapons. Unfortunately, the meaning of ‘meaningful’ is still to be determined”
  • “[the public] hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology ‘under control'”
  • “outreach is ‘yet another thing to do’, and time is limited”
  • “AI and robotics stakeholders worldwide should pool a small portion of their budgets (say 0.1%) to bring together these disjointed communications and enable the field to speak more loudly”
  • “I worry about clinicians’ ability to understand and explain the output of high-performance AI systems”
  • “I believe that the future will be a positive one if humans and robots can help and complement each other”

2015, Business Insider: The real problem with artificial intelligence isn’t what you think

  • “making AI systems completely autonomous is the real threat”
  • “AI systems will not become spontaneously autonomous, they will need to be designed that way”

2016, The Verge: Humanity and AI will be Inseparable

  • “envisions a future in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals that she calls ‘symbiotic autonomy'”
  • “building roving, segway-shaped robots called ‘cobots’ to autonomously escort guests from building to building and ask for human help when they fall short”
  • “we are working on the ability for these AI systems to explain themselves, while they learn, while they improve, in order to provide explanations with different levels of detail”
  • “AI systems are now about the data that’s available and the ability to process that data and make sense of it, and we’re still figuring out the best ways to do that”
  • “you can imagine an AI system that helps a researcher digest all that information and finds things that are related to their interests” – yes please!
  • “what’s really interesting is when AI systems can recognize what they’re missing by themselves”

2016, University of Cambridge: Artificial intelligence: computer says YES (but is it right?)

  • “as machine learning techniques become more common in everything from finance to healthcare, the issue of trust is becoming increasingly important”
  • “when [machines] are unsure, we want them to tell us”
  • “the Automatic Statistician explains what it’s doing, in a human-understandable form”

2017, Slate Future Tense: Artificial Intelligence Owes You an Explanation

  • “enter the right to an explanation, a movement to combat the broad move to a ‘black box society’—a culture that largely accepts we have no way to understand how technology makes many basic decisions for us”
  • “the right to an explanation is focused on providing consumers with personalized, easy to understand algorithmic transparency”
  • “consumers should be able to review their personal data and how A.I. relies on it, functions that are anathema to how businesses buy and sell data today”

2015, Edge.org: What, Me Worry? … From Edge 2015 Annual Question Special Edition on “What do you think about machines that think?”

  • “given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 Terawatts of power, within a factor of two of the current power consumption of all of humanity”
  • “any partnership requires some level of trust and loss of control, but if the benefits often outweigh the losses, we preserve the partnership. If they don’t, we sever it”
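
The “within a factor of two” claim in the first quote can be sanity-checked with round numbers: humanity’s average power consumption is on the order of 18 TW, so a hypothetical 10 TW brain-scale computer is indeed within a factor of two of it. A back-of-the-envelope check (the 18 TW figure is a rough estimate, not from the article):

```python
# Rough check of the quote's "within a factor of two" claim.
brain_scale_computer_tw = 10.0  # power estimate from the quote, in terawatts
world_power_tw = 18.0           # approx. global average power consumption (rough estimate)

factor = world_power_tw / brain_scale_computer_tw
print(factor)         # 1.8
print(factor <= 2.0)  # True -> within a factor of two, as the quote says
```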

2016, The New York Times: The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own

  • “the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power”
  • “when it comes to decisions over life and death, ‘there will always be a man in the loop'”
  • “bringing the two complementary skill sets together is the Pentagon’s goal with centaur warfighting”