Human Control of Technology (67)
Find narratives by ethical themes or by technologies.
The Hidden Costs of Automated Thinking
- 10 min
- The New Yorker
- 2019
A clear breakdown of the risks that come with automating the world without understanding why it works. The article lays out the principal concerns with the “hidden layer” of artificial neural networks and explains how the lack of human understanding of some AI decision-making leaves these systems open to manipulation.
Should we still use technology that we do not fully understand? Might machines play a role in the demise of expertise? How can companies and institutions be held accountable and made to “lift the curtain” on their algorithms?
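To make the article's “hidden layer” concrete, here is a minimal sketch (ours, not the article's) of a tiny feed-forward network in NumPy. Every name and number is illustrative; the point is that a hidden layer is nothing more than matrices of weights, which is exactly where human understanding runs out.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feed-forward network: 4 inputs -> 8 hidden units -> 1 output.
# The "hidden layer" is just the matrices W1/b1: numbers with no
# human-readable meaning. (Random weights stand in here for trained ones.)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def predict(x):
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation
    return hidden @ W2 + b2              # output score

x = rng.normal(size=4)                   # one example input
print(predict(x))
# Inspecting W1 tells a human almost nothing about *why* the network
# produced this score.
print(W1.round(2))
```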
Give Us Fully Automated Luxury Communism
- 5 min
- The Atlantic
- 2019
A book proposes letting robots do all of Earth’s physical labor, creating a world in which virtually all human needs are met, under a new ideology called Fully Automated Luxury Communism, or FALC. The vision is modeled on fictional worlds such as Star Trek.
Do you think the FALC ideology is practically achievable? Which potential problems do you see in its implementation? If robots become advanced enough to be, in effect, human actors, can they be expected to perform all of the world’s labor without compensation?
Monster Match
- 15 min
- Hidden Switch
- 2018
A hands-on learning experience about the algorithms used in dating apps, explored through the eyes of a monster avatar you create.
How do algorithms in dating apps work? What gaps seemed most prominent to you? What upset you most about the way this algorithm defined you and the choices it offered to you?
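Dating-app recommenders are often described as collaborative filtering systems, the idea Monster Match plays with. Below is a toy sketch of that technique with invented users and swipe data; it is not the game's actual code, just an illustration of how other people's early swipes shape what you are shown.

```python
import numpy as np

# Toy collaborative filtering with hypothetical swipe data.
# Rows = users, columns = profiles; 1 = right swipe, 0 = left swipe.
swipes = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

def recommend(user: int, swipes: np.ndarray) -> int:
    sims = swipes @ swipes[user]    # overlap with every other user
    sims[user] = 0                  # ignore self-similarity
    scores = sims @ swipes          # weight others' swipes by similarity
    scores[swipes[user] == 1] = -1  # hide already-liked profiles
    return int(np.argmax(scores))

# User 0 is steered toward whatever users like them swiped on; profiles
# that similar users never liked may never surface at all.
print(recommend(0, swipes))
```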
Bot or Not?
- 6 min
- n/a
- 2018
Through a series of chat interactions and a truth-or-dare-style game, the user guesses whether they are chatting with a bot or a human.
Are you able to tell the difference between interacting with a bot and with a human? How? What indicators did you rely on to make your decision?
These creepy fake humans herald a new age in AI
- 5 min
- MIT Technology Review
- 2021
The company Datagen serves as an example of a business that sells synthetic human faces (based on scans of real people) to other companies for use as AI training data.
Does it seem likely that synthetic human data has the power to combat bias, or could it just introduce more bias? Does this represent putting too much trust in machines?
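As a back-of-the-envelope sketch of the article's premise (this is not Datagen's pipeline, and every count here is invented), the appeal of synthetic data is that a buyer can simply order the examples their dataset lacks:

```python
# Hypothetical label counts in a face dataset skewed toward one group.
real_counts = {"group_a": 9000, "group_b": 1000}

def synthetic_order(counts: dict[str, int], target: int) -> dict[str, int]:
    # How many synthetic faces to generate for each under-represented group.
    return {g: max(0, target - n) for g, n in counts.items()}

print(synthetic_order(real_counts, target=9000))
# {'group_a': 0, 'group_b': 8000}
#
# The discussion question's catch: if the generator producing those 8000
# faces was itself trained on skewed scans, the "balanced" dataset may
# inherit the very bias it was bought to remove.
```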
Coded Bias: How Ignorance Enters Computer Vision
- 3 min
- Vimeo: Shalini Kantayya
- 2020
A brief visual example of computer vision applied to facial recognition: how these algorithms can be trained to recognize faces, and the dangers that come with biased data sets, such as those containing a disproportionate number of white men.
When thinking about computer vision in relation to projects such as the Aspire Mirror, what sorts of individual and systemic consequences arise for those who have faces that biased computer vision programs do not easily recognize?
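One concrete way to surface the disparity the clip describes is a per-group accuracy audit: score the recognizer separately on each demographic group instead of reporting a single number. The sketch below is entirely simulated (the group mix and error rates are made up) and only illustrates why disaggregated evaluation matters.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 1000
# Simulated evaluation set skewed 80/20, echoing the biased training
# data described in the clip (all numbers are hypothetical).
group = rng.choice(["majority", "minority"], size=n, p=[0.8, 0.2])

# Simulate a recognizer that performs well on the over-represented
# group and poorly on the under-represented one.
correct = np.where(group == "majority",
                   rng.random(n) < 0.99,
                   rng.random(n) < 0.70)

print(f"aggregate accuracy: {correct.mean():.2f}")   # looks fine
for g in ("majority", "minority"):
    mask = group == g
    print(f"{g}: accuracy {correct[mask].mean():.2f} on {mask.sum()} faces")
# The aggregate number hides the failure; only the per-group breakdown
# reveals who the system does not see.
```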