AI (124)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Technology Review
- 2019
When algorithms mess up, the nearest human gets the blame
Humans take the blame for failures of automated AI systems, protecting the integrity of the technological system and becoming a “liability sponge.” The role of humans in sociotechnical systems needs to be redefined.
Should humans take the blame for algorithm-created harm? At what level (development, corporate, or personal) should this liability occur?
-
- 15 min
- Hidden Switch
- 2018
Monster Match
A hands-on learning experience about the algorithms used in dating apps, explored through the perspective of a monster avatar that you create.
How do algorithms in dating apps work? What gaps seemed most prominent to you? What upset you most about the way this algorithm defined you and the choices it offered to you?
-
- 15 min
- n/a
- 2018
Choose Your Own Fake News
A choose-your-own-adventure game in which you experience fake news and data fraud by acting as one of a cast of characters.
How can you be less vulnerable to fake news and fake advertising online?
-
- 4 min
- Kinolab
- 2017
Drone Warfare
Luv, a corporate enforcer following the android police officer K, tracks his location after he crashes in a landfill and is attacked by a large mob of humans. She then uses drone technology to deploy explosive weapons and save K’s life.
How can drone technology be used for remote interventions, both for military and personal protection purposes? Is it ethical to use drone tech to kill or injure other people, even if they are criminals or causing harm? Moreover, how can drone tech be used to spy on and follow people without their consent? How does using drones to fight desensitize their operators to the damage they cause? What broader metaphor is being set up in this narrative, considering the position of Luv, the drone’s controller?
-
- 5 min
- Kinolab
- 2014
Vicarious Digital Living
In this vignette, Matt describes his backstory as a member of an online community whose members used technology called “Z-eyes” to walk one another through activities such as flirting with women at bars. The Z-eyes technology streams everything his friend Harry sees and hears directly to Matt’s screen, and Matt can additionally use facial recognition and information searches to supply background information that enhances Harry’s plays.
What privacy problems arise when technology such as this is invisible to others? Could it have legitimate therapeutic purposes, such as treating social anxiety? How should technology like Z-eyes be regulated?
-
- 13 min
- Kinolab
- 2020
Prototypes, Evolution, and Replacement with Robots
George Almore is an engineer working for a company that hopes to achieve singularity with robots, making their artificial intelligence one step above real humans. To do this, he works with three prototypes, J1, J2, and J3, each more advanced than the last. Simultaneously, he plans to upload his dead wife’s consciousness into the J3 robot in order to extend her life. The narrative begins with him explaining this goal to J3 while putting the robot through taste and emotion tests. Eventually, J3 evolves into a humanoid robot who takes on the traits of George’s wife, leaving the two earlier versions, who share a sibling-like bond with her, feeling neglected.
Are taste and emotion necessary elements in creating advanced AI? If so, why? What good does having these abilities serve in terms of the AI’s relationship to the human world? Is it right to transfer consciousness, or elements of consciousness, from a deceased person into one or several AI? How much similarity to a pre-existing person is too much in an AI? Can total similarity ever be achieved, and how? Can advanced AI feel negative human emotions and face mental health problems such as depression? Is it ethical to program AI to feel such emotions, knowing the risks associated with them, including bonding with former or flawed prototypes of itself? If an AI kills itself, does the onus fall on the machine or on its human creator?