Human Control of Technology (67)
Find narratives by ethical themes or by technologies.
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
- 10 min
- MIT Technology Review
- 2020
This article explains the ethical warnings of Timnit Gebru against building large language models for natural language processing from vast sets of textual data scraped from the internet. Not only does this process have a negative environmental impact, it also still does not allow these machine learning tools to process semantic nuance, especially as it relates to burgeoning social movements or to countries with lower internet access. Dr. Gebru’s refusal to retract this paper ultimately led to her dismissal from Google.
How should models for training NLP algorithms be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
Hitting the Books: The Brooksian revolution that led to rational robots
- 10 min
- Engadget
- 2021
This article provides an excerpt from a book detailing the “Brooksian Revolution,” a 1980s movement pressing the idea that the “intelligence” of AI should start from a foundation of acute, continuous awareness of its environment, rather than from “typical” indicators of intelligence such as pure logic or problem solving. By this principle, a reasoning loop that operates on a single, one-time perception of its environment is inherently disconnected from that environment; the short sketch after the discussion questions below makes the contrast concrete.
Why is an environment important to cognition, both that of humans and machines? Will robots ever be able to abstract the world, or model it, in the same way that the human brain can? Are there dangers to robots being strictly “rational” and decoupled from their environments? Are there dangers to robots being too connected to their environments?
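The contrast drawn above can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch (the names `sense` and `act` and the toy `obstacle_distance` state are invented for illustration, not taken from the book): a controller that plans from a single initial perception keeps acting on stale information as the world drifts, while a Brooksian-style loop re-senses before every action.

```python
import random

def sense(world):
    """Read the robot's current measurement of the world state."""
    return world["obstacle_distance"]

def act(distance):
    """Choose an action from a perceived obstacle distance."""
    return "stop" if distance < 1.0 else "advance"

def one_shot_controller(world, steps=5):
    """Plan from a single initial perception, then act on it blindly."""
    distance = sense(world)  # perceive exactly once
    actions = []
    for _ in range(steps):
        actions.append(act(distance))  # stale reading: never updated
        world["obstacle_distance"] -= random.uniform(0.0, 1.0)  # world drifts anyway
    return actions

def closed_loop_controller(world, steps=5):
    """Brooksian-style loop: re-sense the world before every action."""
    actions = []
    for _ in range(steps):
        actions.append(act(sense(world)))  # tight sense-act coupling
        world["obstacle_distance"] -= random.uniform(0.0, 1.0)
    return actions

# The one-shot controller keeps "advancing" no matter how close the obstacle
# drifts; the closed loop switches to "stop" if it comes within 1.0.
print(one_shot_controller({"obstacle_distance": 3.0}))
print(closed_loop_controller({"obstacle_distance": 3.0}))
```

The only design difference is where perception sits relative to the loop, which is exactly the disconnect the excerpt describes.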
Robots and Sex Work
- 4 min
- Kinolab
- 2001
“Gigolo Joe” is an android sex worker in an imagined future in which “Mechas,” or humanoid robots, have risen to prominence after a climate disaster. He performs his duties without hiding the fact that he is an android.
Could robots eventually replace sex workers? What are the ethical and economic implications of this? How will machines be able to perfect seduction?
HAL Part II: Vengeful AI, Digital Murder, and System Failures
- 12 min
- Kinolab
- 1968
See HAL Part I for further context. In this narrative, astronauts Dave and Frank begin to suspect that HAL, the AI that runs their ship, is malfunctioning and must be shut down. Although they try to hide this conversation from HAL, he becomes aware of their plan anyway and attempts to protect himself so that the Discovery mission in space is not jeopardized. He does so by causing chaos on the ship, leveraging his connection to its networked systems, an internet of things, to place the crew in danger. Eventually, Dave proceeds with his plan to shut HAL down, despite HAL’s protestations and desire to stay alive.
Can AIs have lives of their own that humans should respect? Is it “murder” if a human deactivates an AI against its will, even if this “will” to live was programmed by another human? What are the ethical implications of removing an AI’s “higher brain function” and leaving just the rote task programming? Is this a form of murder too? How can secrets be kept from an AI, especially if people fail to understand all of the machine’s capabilities?
Why face recognition isn’t scary — yet
- 5 min
- CNN
- 2010
Algorithms and machines can struggle with facial recognition, and they need ideal source images to perform it consistently. However, the technology’s potential use in monitoring and identifying citizens is concerning.
How have the worries regarding facial recognition changed since 2010? Can we teach machines to identify human faces? How can facial recognition pose a danger when used for governmental purposes?
Bonding, Creation, and Religion among the Digital
- 2 min
- Kinolab
- 1982
Tron, a security program within the digital world, is thought dead and mourned by fellow programs Yori and Dumont.
Can programmed AI develop emotions and attachment to its maker? Could this be considered a sort of religious freedom for artificial intelligence? If so, is it ethical to use super-intelligent AI without considering its rights?