AI (124)

Find narratives by ethical themes or by technologies.

Filter
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

  • 5 min
  • MIT Technology Review
  • 2019
This is how AI bias really happens—and why it’s so hard to fix

Introduces how bias enters algorithms at the data-preparation stage, which involves selecting which attributes you want the algorithm to consider. Underscores how difficult it is to mitigate bias in machine learning, given that algorithms are not always attuned to human social contexts.
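
To make the data-preparation point concrete, here is a minimal, hypothetical sketch (the columns, values, and the zip-code proxy are invented for illustration, not taken from the article) of how excluding a protected attribute can still leave a biased proxy among the selected features:

```python
# Hypothetical sketch only: invented records showing how attribute selection
# during data preparation can let a proxy feature stand in for a protected one.
import pandas as pd

# Invented applicant records (not data from the article).
df = pd.DataFrame({
    "zip_code":     ["60601", "60617", "60601", "60617"],  # often correlates with race
    "race":         ["white", "black", "white", "black"],   # protected attribute
    "credit_score": [710, 700, 705, 695],
    "approved":     [1, 0, 1, 0],
})

# A well-meaning preparation step: exclude the protected attribute from training.
features = df.drop(columns=["race", "approved"])

# zip_code still encodes the protected attribute, so a model trained on this
# "blind" feature set can reproduce the same disparate outcomes.
print(features)
print(df.groupby("race")["approved"].mean())  # the disparity is still visible
```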

  • 5 min
  • Wired
  • 2019
This dating app exposes the monstrous bias of algorithms

Monster Match, a game funded by Mozilla, shows how dating-app algorithms reinforce bias: by combining personal data with mass aggregated data, they systematically hide vast numbers of profiles from view, effectively locking users into narrow preferences.
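
As a rough illustration of the dynamic the game dramatizes, here is a hypothetical sketch (the profile names, like counts, and popularity threshold are invented; this is not Monster Match’s or any app’s actual code) of how combining personal and aggregate data can hide profiles wholesale:

```python
# Hypothetical sketch only: a toy version of the collaborative-filtering dynamic
# that Monster Match dramatizes. All names, counts, and thresholds are invented.
aggregate_likes = {           # how often the whole user base liked each profile
    "profile_a": 950,
    "profile_b": 40,
    "profile_c": 870,
    "profile_d": 15,
}
user_passes = {"profile_c"}   # profiles this particular user already passed on


def visible_profiles(aggregate_likes, user_passes, popularity_floor=100):
    """Combine personal history with aggregate behavior; profiles the crowd
    rarely liked never reach the user's screen at all."""
    return [name for name, likes in aggregate_likes.items()
            if name not in user_passes and likes >= popularity_floor]


# profile_b and profile_d are hidden for everyone, regardless of individual taste.
print(visible_profiles(aggregate_likes, user_passes))  # ['profile_a']
```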

  • 5 min
  • Wall Street Journal
  • 2019
Investors Urge AI Startups to Inject Early Dose of Ethics

Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial and increasingly popular, driven by the need for consistent human oversight of algorithms.

  • 5 min
  • Wired
  • 2019
Taser Maker Says It Won’t Use Biometrics in Bodycams

Axon’s novel use of an ethics committee led to a decision not to use facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.

  • 5 min
  • Time Magazine
  • 2017
The Police Are Using Computer Algorithms to Tell If You’re a Threat

Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing and pre-empting risk. However, these scores are inherently linked to human bias in both their inputs and their outcomes, and could lead to unfair targeting of citizens, even as the algorithm supposedly introduces objectivity into the system.
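
For a sense of how such scoring can turn biased inputs into seemingly objective numbers, here is a hypothetical sketch (the fields and weights are invented and do not describe Chicago’s actual model):

```python
# Hypothetical sketch only: a toy risk score in the spirit of the system the
# article describes. The fields and weights are invented, not Chicago's model.

def risk_score(prior_arrests: int, age: int, prior_victimizations: int) -> float:
    """Higher score = flagged as higher risk. Because arrest counts reflect where
    police have historically patrolled, bias in the inputs flows into the output."""
    score = 12.0 * prior_arrests            # arrest history dominates the score
    score += 8.0 * prior_victimizations
    score += 2.0 * max(0, 30 - age)         # younger people are scored as riskier
    return score


# Identical behavior, different arrest exposure, very different "objective" scores.
print(risk_score(prior_arrests=3, age=22, prior_victimizations=1))  # 60.0
print(risk_score(prior_arrests=0, age=22, prior_victimizations=1))  # 24.0
```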

  • 10 min
  • The Washington Post
  • 2019
FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches

Law enforcement officials at the federal and state levels, notably the FBI and ICE, use state driver’s license photo databases as a repository for facial-recognition searches. These capabilities allow DMVs to help law enforcement find people suspected of a crime, undocumented immigrants, or even witnesses. States allow this under certain stipulations, feeding a troubling facial-recognition regime and a breach of public trust: there is no established system for obtaining citizens’ consent to such monitoring.
