What the heck is Algorithmic Accountability anyway? It’s the concept of being transparent about the algorithmic tools built into our social systems and the ethics around them. It means using algorithms to understand our social world, and being answerable for how they are used.
This heady panel talked philosophically about the ethics, values, laws, and implications of social systems, and covered three specific examples of how these issues impact our daily lives.
Facebook Trending Topics
Facebook was recently accused of bias against conservative news stories in its trending topics feature. Upon further investigation, however, it was found that there was a team of five journalists reviewing a set of data, interpreting it, and making individual judgements, drawing on 10 select news sources. The real story was not about Facebook itself but about human intervention in the trending topics feature.
Panelist Alison Powell of the London School of Economics and Political Science insisted that in this example, it is important to dig into the questions: Who are the people making these decisions? How were they trained? Whom are they accountable to? She emphasized the importance of acknowledging the complex nuances of these situations.
Panelist Josh Kroll of CloudFlare explained that every algorithm has a bias, and used the example of predictive policing to shine a light on the potential ethical dilemmas that come with this kind of data usage. One type of predictive policing uses analytical techniques to identify potential criminal activity. For example, when there are not enough police to patrol a certain area, a police department will rely on patterns of prior arrests to determine where to deploy officers.
A related example in the criminal justice system is a judge using data models to gauge how likely an offender is to reoffend when they are up for parole. These data models, however, can encode biases against minorities and ethnic groups. The ethical questions this raises include: Is this an appropriate use of the data? How do we build mechanisms to reveal and address these issues?
The third example covered the topic of credit scoring: credit agencies use an algorithm, their “secret sauce,” to determine a credit score. Kroll said that the agencies don’t want consumers to know what that is, but they do want consumers to know that it’s the same across the board: everyone gets the same treatment.
This use of algorithms prompted an FTC review of whether FICO scoring practices are discriminatory. The review found that they were not, but the report took four years to be published, which Kroll called “an unacceptable amount of time.” The FTC is a trusted public entity and needs to be held accountable for sharing findings in a reasonable amount of time.
Moderator Farida Vis asked the panelists to home in on the key takeaways that attendees should remember.
- Examine assumptions from the beginning, and make them explicit.
- Create a register of training data as a way to open up the black box.
- Increased accountability means identifying the core values behind decision making.
- Design apps and processes that are trustworthy and not creepy!
- Be open about what is going into these data models: move to transparency and away from the black box.
- Differential privacy: with the new iOS 10, Apple is explaining more about how it will collect your data and how your privacy will be protected.
- Ask these questions: What values should a system espouse? Does the data accurately reflect the true state of the world? How does the system reflect your values?
- Some scholars don’t understand the technology and are afraid of it. This could lead to additional regulation that could hamper tool development. There is a dark side to this, but we can manage it ethically.
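The differential-privacy takeaway is the most concrete item on this list, and a small sketch helps show the idea. The classic mechanism behind it is randomized response: each user sometimes answers truthfully and sometimes at random, so no single report reveals anything for certain, yet the aggregate can still be estimated. This is an illustrative toy, not Apple’s actual implementation, and the 50/50 coin-flip probabilities and the 0.3 “true rate” below are assumptions chosen for the example:

```python
import random

def randomized_response(truth: bool, rng: random.Random) -> bool:
    """With probability 1/2, report the true answer; otherwise report a
    random coin flip. Any individual report is plausibly deniable."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_true_rate(reports) -> float:
    """Invert the noise. Since E[report] = 0.5 * p + 0.25,
    the true proportion is p = 2 * mean(reports) - 0.5."""
    mean = sum(reports) / len(reports)
    return 2 * mean - 0.5

if __name__ == "__main__":
    rng = random.Random(42)
    true_rate = 0.3  # fraction of users whose true answer is "yes"
    reports = [randomized_response(rng.random() < true_rate, rng)
               for _ in range(100_000)]
    # The aggregate estimate lands close to 0.3 even though no
    # individual's true answer can be inferred from their report.
    print(estimate_true_rate(reports))
```

The design point matches the panel’s theme: the collection mechanism is fully transparent (anyone can audit the coin-flip protocol) while individual data stays protected.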
As the panel wrapped, Vis acknowledged that this is a heavy topic but an important one that will undoubtedly get more attention in the years to come. She proposed that 2016 will be THE turning point, when Algorithmic Accountability becomes more prevalent and better understood.