Fairness and Non-discrimination (56)
Find narratives by ethical themes or by technologies.
The Police Are Using Computer Algorithms to Tell If You’re a Threat
- 5 min
- Time Magazine
- 2017
Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and striking against it pre-emptively. These numbers, however, are inherently linked to human bias in both input and outcome, and could lead to unfair targeting of citizens even as they supposedly introduce objectivity into the system (a toy sketch after the discussion questions illustrates the point).
Is the police risk-score system biased, and does it mitigate or amplify human bias? Is it plausible to use digital technology to eliminate bias from American policing, or is this impossible? What might this look like? Does reliance on numerical data give police and tech companies more power or less?
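A minimal illustrative sketch, not the Chicago Police Department's actual model: the attributes, weights, and numbers below are invented. It only shows how a seemingly objective linear score passes recorded-arrest disparities straight through to the output.

```python
# Hypothetical linear "risk score" over attributes like those the article
# mentions (age, criminal history). Weights are made up for illustration.
def risk_score(age: int, prior_arrests: int, prior_victimizations: int) -> float:
    # Younger age and more recorded incidents push the score up.
    return 0.4 * max(0, 30 - age) + 2.0 * prior_arrests + 1.5 * prior_victimizations

# Two people with identical conduct; one lives in a heavily patrolled area
# where the same behavior produces more recorded arrests.
print(risk_score(age=22, prior_arrests=1, prior_victimizations=0))  # 5.2
print(risk_score(age=22, prior_arrests=4, prior_victimizations=0))  # 11.2: the input bias becomes the "objective" output
```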
-
Taser Maker Says It Won’t Use Biometrics in Bodycams
- 5 min
- Wired
- 2019
Axon’s novel use of an ethics committee led to its decision not to deploy facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.
Should facial recognition ever be used in police body cameras, even if it theoretically evolves to eliminate bias? How can citizens and governments gain more power to limit facial recognition and to push for more widespread use of ethics boards?
-
Investors Urge AI Startups to Inject Early Dose of Ethics
- 5 min
- Wall Street Journal
- 2019
Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial and becoming more popular, a trend that stems from the need for consistent human oversight of algorithms.
How do we put ethical guardrails around AI? How should tech companies approach gathering outside perspectives on their algorithms?
-
This dating app exposes the monstrous bias of algorithms
- 5 min
- Wired
- 2019
Monster Match, a game funded by Mozilla, shows how dating app algorithms reinforce bias by combining personal and mass-aggregated data to systematically hide a vast number of profiles from users’ sight, effectively caging users into narrow preferences (the sketch after the discussion questions shows this feedback loop in miniature).
What are some implicit ways in which algorithms reinforce biases? Are machine learning algorithms equipped to handle the many confounding variables at play in something like dating preferences? Does online dating unquestionably give people more agency in finding a partner?
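A toy sketch, not Monster Match's code or any real dating app's ranking: the profile pool and the similarity rule are invented. It only illustrates the narrowing described above, where ranking by resemblance to past likes means most of the pool is never surfaced at all.

```python
# Invented profile pool split into two arbitrary groups, "A" and "B".
profiles = [{"id": i, "group": "A" if i % 3 else "B"} for i in range(30)]
liked_groups = {"A"}  # suppose the user's first few swipes happened to land on group A

def recommend(pool: list[dict], liked: set[str], k: int = 10) -> list[dict]:
    # Rank profiles by whether they resemble past likes; everything below
    # the cut-off is effectively invisible to the user.
    ranked = sorted(pool, key=lambda p: p["group"] in liked, reverse=True)
    return ranked[:k]

shown = recommend(profiles, liked_groups)
shown_ids = {p["id"] for p in shown}
never_surfaced = [p for p in profiles if p["id"] not in shown_ids]
print(f"shown: {len(shown)}, never surfaced: {len(never_surfaced)} of {len(profiles)}")
```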
-
This is how AI bias really happens—and why it’s so hard to fix
- 5 min
- MIT Technology Review
- 2019
An introduction to how bias enters algorithms during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. It underlines how hard bias in machine learning is to ameliorate, given that algorithms are not always perfectly attuned to human social contexts (a toy sketch of the attribute-selection step follows the discussion questions).
How can the “portability trap” described in the article be avoided? Who should be involved in deciding how to frame the problems that AI systems are meant to solve?
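A toy sketch of the data-preparation step the article highlights, where the modeler chooses which attributes the algorithm sees. The column names and rows are invented; "zip_code" stands in for any attribute that acts as a proxy for a protected characteristic.

```python
import csv
from io import StringIO

# Invented loan records; a real pipeline would read these from a file.
raw = StringIO(
    "income,zip_code,defaulted\n"
    "40000,60623,0\n"
    "42000,60614,0\n"
    "39000,60623,1\n"
)
rows = list(csv.DictReader(raw))

# Framing choice 1: exclude the proxy attribute from the feature set.
features_without_proxy = [{"income": float(r["income"])} for r in rows]

# Framing choice 2: keep it "because it improves accuracy"; the model can now
# learn neighborhood-level patterns that track historical discrimination.
features_with_proxy = [
    {"income": float(r["income"]), "zip_code": r["zip_code"]} for r in rows
]

print(features_without_proxy[0])
print(features_with_proxy[0])
```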
-
When AI Goes Wrong in Spatial Reasoning
- 5 min
- GIS Lounge
- 2019
GIS is a relatively new arena for computational analysis, and its algorithms often inherit biases from training data drawn from open data sources; this case study focuses on power-line identification data that skews heavily toward the Western world. The problem can be improved by approaching data collection with more intentionality, either by broadening the pool of collected geographic data or by adding artificial images that help the tool recognize a greater range of circumstances and thus become more accurate (a toy version of this rebalancing appears after the discussion questions).
What happens when the source of the data itself (the dataset) is biased? Can the ideas presented in this article (namely the intentional broadening of the training data pool and the inclusion of composite data) find application beyond GIS?
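A toy sketch of the rebalancing idea, assuming a skewed image count per region; the region names, counts, and the synthesize() stub are invented, and a real pipeline would generate actual composite imagery rather than placeholders.

```python
from collections import Counter

# Invented region label per training image; the skew mirrors the article's
# point that open power-line datasets center on the Western world.
training_regions = (["north_america"] * 800 + ["europe"] * 700 +
                    ["sub_saharan_africa"] * 40 + ["south_asia"] * 60)

counts = Counter(training_regions)
target = max(counts.values())  # aim for rough parity with the best-covered region

def synthesize(region: str, n: int) -> int:
    """Stand-in for generating n composite/artificial images for a region."""
    return n  # a real implementation would return image tensors or file paths

extra = {region: synthesize(region, target - count) for region, count in counts.items()}
balanced = {region: counts[region] + extra[region] for region in counts}
print(balanced)  # every region now holds `target` examples
```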