Algorithmic Bias

19 November 2020, 7pm CET

What is algorithmic bias?
Algorithmic bias is a term for unfair outcomes created by systematic errors in computer systems. It describes an outcome in which one group of users is arbitrarily privileged over others due to a variety of socio-technical factors. These include, but are not limited to, conscious and unconscious choices in algorithm design and training, as well as in data collection, coding and selection.

More and more areas of society are subject to algorithmic scrutiny. For example, the (health) insurance and financial industries increasingly rely on algorithmic assessments when deciding whether to provide coverage or loans. Similarly, facial recognition is being deployed for identification, policing and surveillance purposes.
Yet, time and again, computer vision algorithms have been shown to discriminate against people with darker skin, either by labelling them in racist terms or by simply working less accurately, with consequences as dramatic as wrongful arrests. Algorithms have also been shown to perpetuate gender biases in areas such as automated translation, recruitment and advertising. Similarly, the performance of the NLP algorithms that power chatbots has been shown to depend on dialect and sociolect.
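
The phrase "working less accurately" can be made measurable. The sketch below is a minimal Python illustration with made-up data and a hypothetical group attribute (not figures from the studies linked further down): it compares accuracy and true-positive rate per group, and a persistent gap between groups is one common way such bias is quantified.

    import numpy as np

    def group_metrics(y_true, y_pred, groups):
        """Per-group accuracy and true-positive rate for binary predictions."""
        results = {}
        for g in np.unique(groups):
            mask = groups == g
            accuracy = float(np.mean(y_pred[mask] == y_true[mask]))
            positives = mask & (y_true == 1)
            tpr = float(np.mean(y_pred[positives] == 1)) if positives.any() else float("nan")
            results[g] = {"accuracy": accuracy, "tpr": tpr}
        return results

    # Hypothetical toy data: true labels, model predictions, group attribute.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    for g, m in group_metrics(y_true, y_pred, groups).items():
        print(g, m)  # large gaps between groups indicate disparate performance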

It is the purpose of this seminar to discuss the sources of algorithmic bias as well as remedies, both technical and non-technical. In particular, we want to focus on bias in machine learning algorithms, guided by questions such as the following:

  • Which technical design choices work to reduce bias? (one example is sketched below, after this list)
  • What role does academic and professional diversity play in solving algorithmic bias?
  • How can we ensure fair technical outcomes in a society full of biases?
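
One concrete example of such a design choice, offered here purely as an illustration with made-up data (not as part of the seminar programme), is reweighing (Kamiran & Calders, 2012): training examples are weighted so that group membership and the target label become statistically independent in the weighted data.

    import numpy as np

    def reweighing_weights(groups, labels):
        """Instance weights that make group membership and label
        statistically independent in the weighted training data:
        w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)."""
        weights = np.ones(len(labels), dtype=float)
        for g in np.unique(groups):
            for y in np.unique(labels):
                cell = (groups == g) & (labels == y)
                if cell.any():
                    expected = np.mean(groups == g) * np.mean(labels == y)
                    weights[cell] = expected / np.mean(cell)
        return weights

    # Hypothetical toy data: group "a" receives the positive label more often.
    groups = np.array(["a"] * 6 + ["b"] * 6)
    labels = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0])
    weights = reweighing_weights(groups, labels)
    print(weights.round(2))
    # Many learners accept such weights, e.g. via sample_weight in scikit-learn.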

Agenda
19:00 “AI & Society”, talk by Mona Sloane, NYU Alliance for Public Interest Technology Fellow
19:45 Q&A
20:10 Discussion in break-out rooms
20:30 End of event

Mona Sloane researches inequality in the context of AI design and policy. A sociologist at NYU’s Institute for Public Knowledge (IPK), she is the convener of the ‘Co-Opting AI’ series. Mona is an Adjunct Professor at NYU’s Tandon School of Engineering, holds a PhD from the London School of Economics and Political Science and has completed fellowships at the University of California, Berkeley, and at the University of Cape Town. Her most recent project is ‘Terra Incognita: Mapping NYC’s New Digital Public Spaces in the COVID-19 Outbreak’, which she leads as principal investigator.

Online Resources – examples of algorithmic bias in media and academic research:

…in criminal sentencing:
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

…in computer vision/object detection:
https://arxiv.org/abs/1902.11097

…in health tech:
https://science.sciencemag.org/content/366/6464/447

…in educational performance evaluation:
https://www.sophieheloisebennett.com/posts/a-levels-2020/

…how NOT to create a database:
https://www.amnesty.org.uk/london-trident-gangs-matrix-metropolitan-police