Themes (326)

Find narratives by ethical themes or by technologies.

Filters
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

Coded Bias: How Ignorance Enters Computer Vision
  • Vimeo: Shalini Kantayya
  • 2020
  • 3 min

A brief visual example of computer vision applied to facial recognition: how these algorithms are trained to recognize faces, and the dangers that come with biased data sets, such as training data made up disproportionately of white men.

YouTube, the Great Radicalizer
  • New York Times
  • 2018
  • 7 min

YouTube’s recommendation algorithm suggests increasingly radical content to its users in order to maximize the time they spend on the platform. This tendency toward inflammatory recommendations often steers users toward political misinformation.

This Dating App Exposes the Monstrous Bias of Algorithms
  • Wired
  • 2019
  • 5 min

Monster Match, a game funded by Mozilla, shows how dating app algorithms reinforce bias: by combining personal data with mass aggregated data, they systematically hide a vast number of profiles from users’ view, effectively caging users into narrow preferences.

Investors Urge AI Startups to Inject Early Dose of Ethics
  • Wall Street Journal
  • 2019
  • 5 min

Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial and increasingly popular, a trend that stems from the need for consistent human oversight of algorithms.

Taser Maker Says It Won’t Use Biometrics in Body Cams
  • Wired
  • 2019
  • 5 min

Axon’s novel use of an ethics committee led to a decision not to deploy facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.

The Police Are Using Computer Algorithms to Tell If You’re a Threat
  • Time Magazine
  • 2017
  • 5 min

Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and acting on it pre-emptively. However, these scores are inherently linked to human bias in both their inputs and outcomes, and could lead to unfair targeting of citizens even as the system supposedly introduces objectivity.
