Understanding Algorithmic Bias


Algorithm. Machine Learning. Artificial Intelligence. We might feel like we are encountering these terms more and more every day, and yet they can remain mysterious and intimidating to those who do not work directly with them. What should we make of these tools? Flashy buzzwords, magic bullets for solving any problem, or a dystopian nightmare in the making? Perhaps a mix of all three, or something in between.

First, let’s unpack these related terms (definitions from the NNLM Data Glossary):

  • Algorithm – “a set of instructions that is designed to accomplish a task. Algorithms usually take one or more inputs, run them systematically through a series of steps, and provide one or more outputs.”
  • Artificial Intelligence – “actions that mimic human intelligence displayed by machines and to the field of study focused on this type of intelligence. AI consists of computer programs that are typically built to adaptively update and enhance their own performance over time.”
  • Machine Learning – “a type of Artificial Intelligence. Machine Learning involves sophisticated algorithms which can be trained to sort information, identify patterns, and make predictions within large sets of data.”
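To make the first definition concrete, here is a minimal sketch (in Python, chosen for illustration; the function name and inputs are hypothetical) of an algorithm in exactly the glossary's sense: it takes one input, runs it systematically through a series of steps, and provides one output.

```python
def average(numbers):
    """A tiny algorithm: one input (a list of numbers),
    a fixed series of steps, one output (their mean)."""
    total = 0
    for n in numbers:        # step 1: add up every input value
        total += n
    return total / len(numbers)  # step 2: divide the sum by the count

print(average([2, 4, 6]))  # prints 4.0
```

Even a procedure this small shows where bias can creep in: the output depends entirely on which numbers we chose to feed it.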

Algorithms are all around us – deciding things like which Facebook posts we see, what route our GPS takes, and which results come up first in a Google search. Algorithms are part of decision-making software in domains such as law enforcement, health care, finance, and human resources.

While algorithms and machine learning solutions can seem like magic, it is important to keep in mind that they are built by humans and based on existing data that is often flawed and incomplete. What happens when the data used to build an algorithm reflects outdated, racist, and/or sexist policies? What if the algorithm cannot be validated because the company that owns it either does not know how it works itself, or does not want others to know? What if you are contributing data to an algorithm without knowing it?

Join us for a Coded Bias Virtual Discussion Event

If these issues interest you, join CDABS and the HSHSL Diversity and Inclusion Committee for a virtual discussion of the film Coded Bias. Coded Bias, directed by Shalini Kantayya, explores the fallout of MIT Media Lab researcher Joy Buolamwini’s startling discovery that facial recognition does not see dark-skinned faces and women accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

Register below for one of two facilitated 90-minute discussion sessions. We’ll discuss what it means to create artificial intelligence technologies and algorithms that do not encroach upon the civil liberties of people of color, and how this question ties into broader conversations around health equity and social justice. What has the field of Artificial Intelligence gotten right so far and in what direction(s) should it head in the future? Registered participants will receive a link to view the film between November 9th and 16th.

Space for this event is limited – sign up now!

Register for Discussion Session 1, Tues. Nov. 15 from 12:00-1:30 PM

Register for Discussion Session 2, Fri. Nov. 18 from 12:00-1:30 PM

Questions? Contact: Amy Yarnell, data services librarian, at data@hshsl.umaryland.edu.

The Center for Data and Bioinformation Services (CDABS) is the University of Maryland Health Sciences and Human Services Library hub for data and bioinformation learning, services, resources, and communication.

Sign up to get DABS delivered to your email or RSS feed.

This entry was posted in Data/Bioinformation, Events, Technology.