Themes
Find narratives by ethical themes or by technologies.
Coded Bias: How Ignorance Enters Computer Vision
- 3 min
- Vimeo: Shalini Kantayya
- 2020
A brief visual example of computer vision applied to facial recognition: how these algorithms are trained to recognize faces, and the dangers of biased training sets, such as one made up disproportionately of white men. (A toy sketch of how a skewed training set produces skewed recognition rates follows this entry.)
When thinking about computer vision in relation to projects such as the Aspire Mirror, what sorts of individual and systemic consequences arise for those whose faces biased computer vision programs do not easily recognize?
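To make the mechanism concrete, here is a minimal sketch using invented synthetic data rather than any real face-recognition model: a recognizer whose acceptance threshold is fit on a training set dominated by one group ends up failing on another.

```python
# Toy sketch only: synthetic "embeddings", not a real face-recognition model.
import numpy as np

rng = np.random.default_rng(0)

def faces(center, n):
    # Synthetic 8-dimensional "face embeddings" clustered per group.
    return rng.normal(center, 1.0, size=(n, 8))

# Training set skewed 95/5 between two demographic groups.
train = np.vstack([faces(center=0.0, n=950), faces(center=3.0, n=50)])

# "Model": accept an input as a recognized face if it lies close to the
# training mean. The centroid and threshold are dominated by the majority.
centroid = train.mean(axis=0)
threshold = np.quantile(np.linalg.norm(train - centroid, axis=1), 0.95)

for name, center in [("majority group", 0.0), ("minority group", 3.0)]:
    test = faces(center, 1000)
    rate = np.mean(np.linalg.norm(test - centroid, axis=1) <= threshold)
    print(f"{name}: recognized {rate:.0%} of faces")
# Minority faces sit far from the majority-dominated centroid, so the
# same "accurate" model fails on them almost every time.
```

The point of the sketch is that no one wrote a biased rule: the disparity falls out of the data composition alone.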
YouTube, the Great Radicalizer
- 7 min
- New York Times
- 2018
YouTube's recommendation algorithm suggests increasingly radical videos to its users, maximizing the amount of time they spend on the platform. This tendency toward inflammatory recommendations often spreads political misinformation. (A toy sketch of how engagement-maximizing recommendation can ratchet toward extremes follows this entry.)
What are the dangers of being offered increasingly radical videos on YouTube?
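The article's core claim is a feedback loop. The sketch below is a deliberately simplified model, with an invented predicted_watch_time heuristic and no connection to YouTube's actual (unpublished) system, showing how a greedy engagement objective can ratchet a user toward extremes.

```python
# Toy model only: the engagement heuristic below is invented, not YouTube's.

# Each video has an "extremity" level from 0 (mainstream) to 9 (fringe).
videos = [{"id": i, "extremity": i % 10} for i in range(100)]

def predicted_watch_time(video, user_level):
    # Sketch assumption: users watch longest when a video is slightly
    # MORE extreme than what they last watched (a novelty pull).
    return -abs(video["extremity"] - (user_level + 1))

user_level = 0  # the user starts on mainstream content
for step in range(8):
    pick = max(videos, key=lambda v: predicted_watch_time(v, user_level))
    user_level = pick["extremity"]  # watching shifts the user's taste
    print(f"step {step}: recommended a video with extremity {user_level}")
# Every greedy, watch-time-maximizing step moves one notch outward;
# nothing in the objective ever pulls the user back toward the center.
```

Under these assumptions the drift is monotone: the objective rewards escalation at every step and contains no term that values accuracy, moderation, or the user's long-term interests.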
This dating app exposes the monstrous bias of algorithms
- 5 min
- Wired
- 2019
Monster Match, a game funded by Mozilla, shows how dating-app algorithms reinforce bias: by combining personal data with mass-aggregated data, they systematically hide vast numbers of profiles from users' sight, effectively caging users into narrow preferences. (A toy collaborative-filtering sketch of this hiding mechanism follows this entry.)
What are some implicit ways in which algorithms reinforce biases? Are machine learning algorithms equipped to handle the many confounding variables at play in something like dating preferences? Does online dating unquestionably give people more agency in finding a partner?
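Monster Match's central point is that other people's swipes decide what you see. Below is a minimal sketch, with invented users and a naive similarity-weighted score standing in for whatever proprietary ranking real apps use.

```python
# Toy data and a deliberately naive ranking; real dating apps' algorithms
# are proprietary, but Monster Match dramatizes this same mechanism.

# swipes[user] = {profile: +1 (swiped right) or -1 (swiped left)}
swipes = {
    "you":   {"p1": +1, "p2": +1},
    "userA": {"p1": +1, "p2": +1, "p3": -1},
    "userB": {"p1": +1, "p2": +1, "p3": -1},
}

def similarity(u, v):
    # Count of profiles the two users swiped the same way on.
    shared = set(swipes[u]) & set(swipes[v])
    return sum(swipes[u][p] == swipes[v][p] for p in shared)

def score(user, profile):
    # Collaborative filtering: weight other users' swipes by similarity.
    return sum(similarity(user, v) * swipes[v].get(profile, 0)
               for v in swipes if v != user)

candidates = ["p3", "p4"]
print({p: score("you", p) for p in candidates})   # {'p3': -4, 'p4': 0}
print("shown first:", max(candidates, key=lambda p: score("you", p)))
# "you" never saw p3, yet p3 is buried because SIMILAR users rejected it:
# early crowd bias compounds into invisibility for whole groups of profiles.
```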
Investors Urge AI Startups to Inject Early Dose of Ethics
- 5 min
- Wall Street Journal
- 2019
Incorporating ethical practices and outside perspectives into AI companies helps prevent bias and is becoming more popular, driven by the need for consistent human oversight of algorithms.
How do we build an ethical guardrail around AI? How should tech companies approach gathering outside perspectives on their algorithms?
Taser Maker Says It Won't Use Biometrics in Body Cams
- 5 min
- Wired
- 2019
Axon's novel use of an ethics committee led to its decision not to deploy facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.
Should facial recognition ever be used in police body cameras, even if it theoretically evolves to eliminate bias? How can citizens and governments gain more power to limit facial recognition and enforce more widespread use of ethics boards?
The Police Are Using Computer Algorithms to Tell If You’re a Threat
- 5 min
- Time Magazine
- 2017
Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and intervening pre-emptively. But these numbers inherit human bias in both their inputs and their outcomes, and can lead to unfair targeting of citizens even as they supposedly introduce objectivity into the system. (A toy sketch of how biased inputs pass straight through a score follows this entry.)
Is the police risk-score system biased, and does it reduce or amplify human bias? Is it plausible to use digital technology to eliminate bias from American policing, or is this impossible? What might that look like? Does reliance on numerical data give police and tech companies more power or less?
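Chicago's actual model is not public, so the sketch below uses invented weights and inputs; it only illustrates the summary's point that a score computed from arrest records "objectively" reproduces whatever bias generated those records.

```python
# Invented weights and data; Chicago's actual model is not public.

def risk_score(arrests, age):
    # Sketch assumption: more arrests and younger age raise the score.
    return 10 * arrests + max(0, 40 - age)

# Two people with identical conduct. One lives in a neighborhood patrolled
# twice as heavily, so the same conduct produces twice the arrest record.
incidents = 2
lightly_patrolled = {"arrests": incidents, "age": 25}
heavily_patrolled = {"arrests": incidents * 2, "age": 25}

for name, p in [("lightly patrolled", lightly_patrolled),
                ("heavily patrolled", heavily_patrolled)]:
    print(f"{name}: risk score {risk_score(p['arrests'], p['age'])}")
# Same behavior, different scores (35 vs. 55): the "objective" number
# encodes police deployment choices, and a higher score then justifies
# yet more patrols -- a feedback loop, not a neutral measurement.
```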