Algorithmic Bias (23)
Algorithms selectively favoring certain groups or demographics.
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
Racial bias in facial recognition software used for government civil surveillance in Detroit. Racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to surveil citizens? Who loses agency, and who gains agency?
-
- 15 min
- Hidden Switch
- 2018
Monster Match
A hands-on learning experience that explores the algorithms used in dating apps through the perspective of a monster avatar that you create.
How do algorithms in dating apps work? What gaps seemed most prominent to you? What upset you most about the way this algorithm defined you and the choices it offered to you?
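Monster Match was built to dramatize collaborative filtering, the recommendation technique most dating apps rely on: your swipes are compared with the swipes of similar users, and their likes become your suggestions. Below is a minimal sketch of user-based collaborative filtering, assuming a toy swipe matrix and cosine similarity; it illustrates the general technique, not the game's actual code.

```python
import numpy as np

# Toy swipe matrix: rows = users, columns = candidate profiles.
# 1 = right-swipe, 0 = left-swipe or unseen. All values are hypothetical.
swipes = np.array([
    [1, 0, 1, 0, 0],   # user 0
    [1, 0, 1, 1, 0],   # user 1: similar tastes, also liked profile 3
    [0, 1, 0, 0, 1],   # user 2: different tastes
])

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, swipes):
    """Rank unseen profiles by the swipes of similar users."""
    sims = np.array([cosine_sim(swipes[user], other) for other in swipes])
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ swipes                # weight others' likes by similarity
    scores[swipes[user] == 1] = -np.inf   # drop profiles already liked
    return np.argsort(scores)[::-1]

print(recommend(0, swipes))  # profile 3 ranks first: a similar user liked it
```

Because every score is driven by the accumulated swipes of the majority, profiles that early users passed over keep sinking in everyone's queue, which is exactly the winnowing dynamic the game asks you to notice.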
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru's ethical warnings against building ever-larger language models trained on massive sets of text scraped from the internet. Not only does this process carry a heavy environmental cost, it still does not give these machine learning tools the ability to process semantic nuance, especially as it relates to burgeoning social movements or to countries with lower internet access. Dr. Gebru's refusal to retract the paper ultimately led to her dismissal from Google.
How should the large data sets used to train NLP models be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
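The mechanism behind the paper's warning is easy to demonstrate: a language model's "knowledge" is just the statistics of its training text, so whatever skew that text contains, including stereotypes, becomes part of the model. A minimal sketch, using sentence-level co-occurrence counts over a deliberately skewed, hypothetical toy corpus as a stand-in for a real model's training data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy "internet" corpus with a built-in occupational skew.
corpus = [
    "the engineer fixed the server he was brilliant",
    "the engineer shipped the release he was tireless",
    "the nurse comforted the patient she was caring",
    "the nurse charted the vitals she was gentle",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        pair_counts[frozenset((a, b))] += 1

def cooccur(w1, w2):
    return pair_counts[frozenset((w1, w2))]

# The statistics reproduce the corpus's skew, not any ground truth:
print("engineer ~ he: ", cooccur("engineer", "he"))   # 2
print("engineer ~ she:", cooccur("engineer", "she"))  # 0
print("nurse ~ she:   ", cooccur("nurse", "she"))     # 2
print("nurse ~ he:    ", cooccur("nurse", "he"))      # 0
```

A model trained on text like this has no way to represent communities that are underrepresented in the corpus, which is the nuance problem the paper raises for emerging social movements and low-connectivity countries.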
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details Joy Buolamwini's research on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, such as IBM and Face++, she found that it is far worse at properly identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs need external systems of accountability.
What does external accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
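Buolamwini's audit method, published as the Gender Shades study, is at heart a disaggregated evaluation: instead of one overall accuracy number, compute accuracy separately for each demographic group. The study reported gender-classification error rates of up to roughly 35% for darker-skinned women versus under 1% for lighter-skinned men. A minimal sketch with hypothetical audit outcomes that echo those figures:

```python
# 1 = face correctly classified, 0 = misclassified. Hypothetical outcomes.
audit = {
    "lighter-skinned men":  [1] * 99 + [0] * 1,    # ~1% error
    "darker-skinned women": [1] * 65 + [0] * 35,   # ~35% error
}

def accuracy(results):
    return sum(results) / len(results)

# The aggregate number looks respectable and hides the disparity...
overall = [r for results in audit.values() for r in results]
print(f"overall: {accuracy(overall):.1%}")          # 82.0%

# ...while the disaggregated view exposes it.
for group, results in audit.items():
    print(f"{group}: {accuracy(results):.1%}")
```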
-
- 10 min
- Gizmodo
- 2021
Developing Algorithms That Might One Day Be Used Against You
Physicist Brian Nord, who learned about deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can lead them to do more harm than good. In essence, an "a priori" or proactive approach to instilling ethical sensibility in AI, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.
What would an ideal algorithmic accountability organization or process look like? What specific ethical areas should AI developers study before creating their algorithms? How can algorithms or other programs created for one context, such as scientific research or learning, be misused in other contexts?
-
- 7 min
- Farnam Street Blog
- 2021
A Primer on Algorithms and Bias
This post discusses the main lessons of two recent books that explain how algorithmic bias occurs and how it might be ameliorated. At bottom, algorithms are little more than mathematical operations, but their lack of transparency and the flawed, unrepresentative data sets used to train them make their pervasive use dangerous.
How can data sets fed to algorithms be properly verified? What would the most beneficial collaboration between humans and algorithms look like?
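The point that algorithms are little more than mathematical operations is worth making concrete: the math itself is neutral, and the danger enters through the data. Below is a sketch of a hypothetical credit-approval rule fit to a training sample that underrepresents one group; every scenario detail and number is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Within each group, applicants earning above their group's typical income
# tend to repay; only the income scales differ between groups.
def sample(group_mean, n):
    income = rng.normal(group_mean, 5, n)
    repaid = income + rng.normal(0, 2, n) > group_mean
    return income, repaid

inc_a, rep_a = sample(50, 1000)   # group A dominates the training set
inc_b, rep_b = sample(30, 10)     # group B is barely represented

income = np.concatenate([inc_a, inc_b])
repaid = np.concatenate([rep_a, rep_b])

# "The algorithm" is pure arithmetic: choose the approval cutoff that
# maximizes accuracy on the (unrepresentative) training data.
cutoffs = np.linspace(income.min(), income.max(), 200)
cutoff = max(cutoffs, key=lambda c: np.mean((income >= c) == repaid))

# Creditworthy group B applicants sit below a threshold learned from group A.
print(f"learned cutoff: {cutoff:.1f}")
print(f"approval rate, group A: {np.mean(inc_a >= cutoff):.0%}")   # ~50%
print(f"approval rate, group B: {np.mean(inc_b >= cutoff):.0%}")   # ~0%
```

The rule is transparent here only because it is three lines long; a production model with thousands of parameters would bury the same skew where no one could see it, which is the transparency worry the primer highlights.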