AI (124)


Find narratives by ethical themes or by technologies.

Filters
Themes
  • Privacy
  • Accountability
  • Transparency and Explainability
  • Human Control of Technology
  • Professional Responsibility
  • Promotion of Human Values
  • Fairness and Non-discrimination
Technologies
  • AI
  • Big Data
  • Bioinformatics
  • Blockchain
  • Immersive Technology
Additional Filters:
  • Media Type
  • Availability
  • Year
    • 1916 - 1966
    • 1968 - 2018
    • 2019 - 2069
  • Duration

  • 10 min
  • MIT Technology Review
  • 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.

This article explains Timnit Gebru's ethical warnings about building natural language processing tools on large language models trained on vast sets of text scraped from the internet. Not only does this process carry a heavy environmental cost, it also fails to give these machine learning tools a grasp of semantic nuance, especially language tied to emerging social movements or to countries with less internet access. Dr. Gebru's refusal to retract the paper ultimately led to her dismissal from Google.

  • 7 min
  • VentureBeat
  • 2021
GPT-3: We’re at the very beginning of a new app ecosystem

The GPT-3 natural language processing model, created by the company OpenAI and released in 2020, is the most powerful of its kind, trained with a generalized approach on enormous amounts of text so that it can mirror human speech. The potential applications of such a powerful program are manifold, but that same potential means many tech monopolies may enter an “arms race” to build the most powerful model possible.
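
That “generalized approach” is what makes a new app ecosystem possible: one model serves many applications, which differ mainly in the prompt they send it. As a rough illustration only, here is how an application might have called GPT-3 through OpenAI's Python client as it looked around 2021; the engine name and parameters are illustrative, not the article's own example.

    # Rough illustration: different "apps" are largely different prompts to the
    # same general-purpose model. Uses the OpenAI Python client of the GPT-3 era
    # (circa 2021); engine choice and parameters here are illustrative only.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def complete(prompt):
        response = openai.Completion.create(
            engine="davinci",   # a GPT-3 engine name of that period
            prompt=prompt,
            max_tokens=64,
            temperature=0.7,
        )
        return response.choices[0].text.strip()

    # The same model, steered toward different tasks purely by wording.
    print(complete("Summarize for a ten-year-old: Photosynthesis is the process by which plants..."))
    print(complete("Write a polite email declining a meeting invitation."))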

  • 5 min
  • Gizmodo
  • 2021
Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms

An investigation concluded that bots played a role in the disruption of GameStop's stock price in early 2021. Essentially, automated accounts helped spread posts urging people to buy and hold GameStop stock as a way to check wealthy hedge fund managers who had bet the stock would crash. The overall effect of the bots in this particular campaign, and thus how far bots can generally be used to disrupt online markets by interacting with humans, remains hard to gauge.

  • 7 min
  • New York Times
  • 2018
Facial Recognition Is Accurate, if You’re a White Guy

This article details Joy Buolamwini's research on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies such as IBM and Face++, she found that it performs far worse at identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs need external systems of accountability.
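
The heart of such an audit is easy to express in code: instead of reporting one overall accuracy figure, error rates are computed separately for each demographic subgroup and then compared. Below is a minimal sketch of that idea in Python, with hypothetical field names and toy data; it is not the code used in Buolamwini's study.

    # Minimal sketch of a disaggregated audit: report error rates per subgroup
    # rather than a single overall accuracy. Field names and data are hypothetical.
    from collections import defaultdict

    def audit_by_group(predictions):
        """predictions: dicts with 'group', 'predicted', and 'actual' keys."""
        totals, errors = defaultdict(int), defaultdict(int)
        for p in predictions:
            totals[p["group"]] += 1
            if p["predicted"] != p["actual"]:
                errors[p["group"]] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Toy usage: a system that looks fine "on average" can still fail one group badly.
    results = audit_by_group([
        {"group": "lighter-skinned male", "predicted": "male", "actual": "male"},
        {"group": "lighter-skinned male", "predicted": "male", "actual": "male"},
        {"group": "darker-skinned female", "predicted": "male", "actual": "female"},
        {"group": "darker-skinned female", "predicted": "female", "actual": "female"},
    ])
    for group, rate in results.items():
        print(f"{group}: {rate:.0%} error rate")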

  • 7 min
  • The Verge
  • 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias

PULSE is an algorithm that can supposedly determine what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. The algorithm works by generating a synthetic face whose pixel pattern matches the input, rather than actually sharpening the original image, and it is these synthetic faces that show a clear bias toward white people, illustrating how deeply institutional racism works its way into technological design. Thus, more diverse data sets will not fully help until broader solutions for combatting bias are enacted.
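
That mechanism can be sketched in a few lines: search a face generator's latent space for a synthetic image whose downscaled version reproduces the pixelated input, which means the output can only ever be a face the generator already knows how to draw. The sketch below uses a stand-in generator and illustrative parameters; it is not the actual PULSE implementation, which searches a pretrained StyleGAN.

    # Sketch of PULSE-style upscaling by latent search (stand-in generator,
    # not the real PULSE code). Bias enters through the generator: the search
    # can only return faces the underlying model was trained to produce.
    import torch
    import torch.nn.functional as F

    # Stand-in "generator": latent vector -> 64x64 grayscale image.
    generator = torch.nn.Sequential(torch.nn.Linear(128, 64 * 64), torch.nn.Sigmoid())

    def upscale_by_latent_search(low_res, steps=200, lr=0.05):
        """Find a synthetic image whose downscaled version matches `low_res`."""
        z = torch.randn(1, 128, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            img = generator(z).view(1, 1, 64, 64)            # candidate synthetic face
            down = F.interpolate(img, size=low_res.shape[-2:],
                                 mode="bilinear", align_corners=False)
            loss = F.mse_loss(down, low_res)                  # match only the pixel pattern
            loss.backward()
            opt.step()
        return generator(z).view(64, 64).detach()

    high_res = upscale_by_latent_search(torch.rand(1, 1, 8, 8))  # toy 8x8 "pixelated" input
    print(high_res.shape)  # torch.Size([64, 64])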

  • 10 min
  • Gizmodo
  • 2021
Developing Algorithms That Might One Day Be Used Against You

Physicist Brian Nord, who came to deep learning algorithms through his research on the cosmos, warns that developing algorithms without proper ethical sensibility can leave them doing more harm than good. Essentially, an “a priori,” proactive approach to instilling ethical sensibility in AI work, whether through review institutions or the ethical education of developers, is needed to guard against privileged populations using algorithms to maintain hegemony.
