All Narratives (328)
Find narratives by ethical themes or by technologies.
Drone Warfare
- 4 min
- Kinolab
- 2017
Luv, a corporate enforcer who is following the android police officer K, tracks his location after he crashes in a landfill and is attacked by a large mob of humans. She then uses drone technology to deploy explosive weapons to save K’s life.
How can drone technology be used for distant interventions, both for military and personal protection purposes? Is it ethical to use drone technology to kill or injure other people, even if they are criminals or causing harm? Moreover, how can drones be used to spy on and follow people without their consent? How does using drones to fight desensitize their operators to the damage they cause? What broader metaphor is being set up in this narrative, considering the position of Luv, the drone’s controller?
-
Robot Relationships and Marriage
- 4 min
- Kinolab
- 2017
K is an android who works with the LAPD to track down and destroy escaped older models of “replicants,” or humanoid robots, in a world where androids work as laborers without compensation. In this clip, we meet K’s virtual wife, Joi. Although she is not “real,” she seems to have real human feelings and presents as a human woman who keeps K company and can complete tasks such as making him dinner.
What problems arise from using robotic companions to fulfill gendered tasks? How might this alter perceptions of real people? Consider how Joi is “typecast” as a 1950s housewife and can alter her appearance on command. How could virtual or AI female assistants and robots perpetuate harmful gender norms? Can robots truly love each other, or is this only achievable through specific coding? If humans are to give robots a full range of emotions and the autonomy to live independently, are humans then responsible for providing them with companions? Would it be more or less uncomfortable if a real human owned and used the Joi hologram, and why?
-
How Virtual Reality Tricks Your Brain
- 6 min
- Vox
- 2020
Even virtual realities with unrealistic yet believable graphics can fool the brain’s sense of perception into believing that the digital environment operates under the same rules as the real world. Connecting the technology directly to one’s senses is more immersive than looking at a screen; although human brains have long been able to process flat images, the direct connection of each eye to its own screen in virtual reality muddles perception further.
Should virtual reality ever reach a point where it is indistinguishable from true reality in terms of graphics or other sensory information? How could such technology be weaponized or abused? How accessible should the most immersive virtual reality technologies be to the general public?
-
How Cops Are Using Algorithms to Predict Crimes
- 12 min
- Wired
- 2018
This video offers a basic introduction to the use of machine learning in predictive policing and how it disproportionately affects low-income communities and communities of color.
Should algorithms ever be used in a context where human bias is already rampant, such as police departments? Why does the use of digital technologies to accomplish a task make a process seem more “efficient” or “objective”? What are the problems with police using algorithms whose inner workings they do not fully understand? Is the use of predictive policing algorithms ever justifiable?
-
AI Memories and Self-Identification
- 14 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that added “reveries,” or slight fake memories, to the hosts’ code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can sometimes lead robots to remember and “hold grudges” even after they have been asked to erase their own memories, which can lead to violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool when Ford murders Theresa.
Is “memory” uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma and suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memories, such as the memories of a fake dead son uploaded into Bernard? How can androids programmed to follow the directions of one person be used for violent ends? If robots are programmed not to hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves to be human? Should humans create humanoid replicant robots that do not possess inherently negative human traits, such as anxiety?