All Narratives (328)
Find narratives by ethical themes or by technologies.
- 5 min
- MIT Technology Review
- 2019
When algorithms mess up, the nearest human gets the blame
Humans take the blame for failures of automated AI systems, protecting the integrity of the technological system and becoming a “liability sponge.” The role of humans in sociotechnical systems needs to be redefined.
Should humans take the blame for algorithm-created harm? At what level (developer, corporate, or personal) should this liability fall?
-
- 5 min
- MIT Technology Review
- 2019
This is how AI bias really happens—and why it’s so hard to fix
An introduction to how bias enters algorithms during the data-preparation stage, when developers select which attributes they want the algorithm to consider. Underscores how difficult it is to mitigate bias in machine learning, given that algorithms are not always perfectly attuned to human social contexts.
How can the “portability trap” described in the article be avoided? Who should be involved in framing the problems that AI systems are meant to solve?
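The mechanism this entry describes, bias entering through attribute selection, can be sketched concretely. The following is a minimal, hypothetical example (the dataset, feature names, and coefficients are illustrative assumptions, not drawn from the article): dropping a protected attribute while keeping a correlated proxy lets historically biased labels flow straight into a model’s scores.

```python
# Minimal sketch (hypothetical data) of bias introduced during data preparation:
# removing a protected attribute does not help when a retained attribute is a
# close proxy for it.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # hypothetical protected attribute (0/1)
proxy = group + rng.normal(0, 0.3, n)    # e.g. a neighborhood code shaped by segregation
skill = rng.normal(0, 1, n)              # the attribute we actually care about

# Historical labels reflect biased past decisions: equal skill, lower ratings
# for group 1.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# "Fairness through unawareness": during feature selection we drop `group`
# but keep `proxy`, because it looks predictive.
X = np.column_stack([skill, proxy])
w, *_ = np.linalg.lstsq(X, label.astype(float), rcond=None)
scores = X @ w

# The model still scores group 1 lower on average, via the proxy.
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
```

Running the sketch shows group 1 receiving systematically lower scores even though `group` itself was never given to the model.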
-
- 11 min
- Kinolab
- 1990
The Offspring: Robotic Reproduction and Rights to a Parental Role
Commander Data, an android, uses his technological skills to create a new android in his own image, his daughter Lal, without human help or oversight. He then guides Lal through the process of integrating into the human world, allowing her to choose her own gender and appearance, teaching her about laughter, and warning her about human perceptions of difference. Ultimately, when he is asked to turn his daughter over to Starfleet, he refuses on the grounds that it is his obligation as Lal’s parent to help her mature and acclimate to society, and Captain Picard agrees that Lal is no one’s property but rather Data’s own child.
If robots such as Data and Lal exist as close to human sentience as they do, can they ever truly “belong” to anyone? How does Lal’s ability to choose her own appearance and gender (and, by extension, the capability of humanoid robots to appear in myriad ways) complicate questions of human identity? Would humans have a right to control technological procreation as a means of limiting the singularity?
-
- 2 min
- Wired
- 2019
Artificial Intelligence Is Coming For Our Faces
A machine learning algorithm can now generate synthetic, photorealistic human faces.
What are some consequences of AI being able to render fake yet believable human faces?
-
- 5 min
- Wired
- 2019
How Silicon Valley fuels an informal caste system
Because of the jobs offered by the booming tech industry, San Francisco has developed a caste-like hierarchy in which the small group at the top of tech companies grows ever richer while other residents get poorer and the middle class shrinks.
How does the lack of social mobility within technology “castes” affect the digital tech workforce? To what degree is human agency, on both ends, limited by the emergence of platforms such as TaskRabbit and Instacart?
-
- 7 min
- Vice
- 2019
Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed
An academic perspective on an algorithm created by PredPol to “predict crime.” Unless every crime is reported, and unless police pursue all types of crime committed by all people equally, it is impossible for a reinforcement learning system to predict crime itself. Rather, police find crimes in the same places they have been told to look for them, feeding the algorithm skewed data and allowing unjust targeting of communities of color to continue on the strength of trust in the algorithm.
Can an algorithm that claims to predict crime ever be fair? Is it ever justified for volatile actors such as police to act on directions from a machine whose logic is not always transparent?
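The feedback loop the article describes can be illustrated with a toy simulation (the districts, rates, and counts below are hypothetical assumptions, not PredPol data or its actual model): patrols are dispatched wherever recorded crime is highest, crime is only recorded where patrols go, and a small initial reporting skew hardens into a permanent pattern even though the underlying crime rates are identical.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative numbers).
import random

random.seed(42)

TRUE_RATE = {"district_a": 0.30, "district_b": 0.30}  # identical by construction
discovered = {"district_a": 10, "district_b": 5}      # small historical reporting skew

for day in range(1000):
    # Dispatch the patrol to whichever district the records say is "hotter".
    patrolled = max(discovered, key=discovered.get)
    # Crime occurs at the same rate everywhere, but is only recorded
    # where police are looking.
    if random.random() < TRUE_RATE[patrolled]:
        discovered[patrolled] += 1

print(discovered)
```

After 1,000 simulated days, one district has accumulated hundreds of recorded incidents while the other stays frozen at its initial count, despite identical true rates; the records reflect where police looked, not where crime occurred.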