All Narratives (328)
Find narratives by ethical themes or by technologies.
Resisting Realities and Robotic Murder
- 6 min
- Kinolab
- 2019
Eleanor Shellstrop runs a fake afterlife, in which she conducts an experiment to prove that humans with low ethical sensibility can improve themselves. One of the subjects, Simone, is in deep denial upon arriving in this afterlife, and does as she pleases after convincing herself that nothing is real. Elsewhere, Jason, another conductor of the experiment, kills a robot that has been taunting him since the experiment began.
What are the pros and cons of solipsism as a philosophy? Does it pose a danger of making us act immorally? How does the risk of solipsism apply to technology such as virtual reality, a space where we know nothing is real except our own feelings and perceptions? Should virtual reality have ethical rules to prevent solipsism from brewing in it? Could that attitude leak into our daily lives as well?
Is it ethical for humans to kill AI beings in fits of negative emotions, such as jealousy? Should this be able to happen on a whim? Should humans have total control of whether AI beings live or die?
-
Celebrity Autonomy, Producer Tyranny, and Holographic Performances
- 14 min
- Kinolab
- 2019
Ashley O is a pop star who lives and works under the tyrannical direction of her aunt and producer, Catherine. After Ashley decides to rebel against her contract, Catherine places her in a coma and scans her brain to help create a digital likeness of Ashley O and produce new music which the 3D hologram can perform, all under Catherine's control. Meanwhile, siblings Rachel and Jack hack a robot based on a synaptic snapshot of Ashley O, allowing the virtual consciousness of Ashley O to be reborn in the robot and help plot to take down Catherine. Working together, they manage to thwart the grand debut of the edited holographic version of Ashley O.
How can celebrities keep their autonomy when producers can easily replicate them or their performances? How can musicians and other performers retain a share of credit or profit when producers can easily co-opt their art? Should this technology be used to "extend the life" of musicians, allowing for holographic performances even after they pass away? What ethical questions does this concept raise? Should digital consciousnesses be fundamentally limited, especially when they are based on real people? How might this improperly shape the image of a celebrity, either before or after their death?
-
HAL Part II: Vengeful AI, Digital Murder, and System Failures
- 12 min
- Kinolab
- 1968
See HAL Part I for further context. In this narrative, astronauts Dave and Frank begin to suspect that the AI which runs their ship, HAL, is malfunctioning and must be shut down. While they try to hide this conversation from HAL, he becomes aware of their plan anyway and attempts to protect himself so that the Discovery mission in space is not jeopardized. He does so by causing chaos on the ship, leveraging his connections to an internet of things to place the crew in danger. Eventually, Dave proceeds with his plan to shut HAL down, despite HAL's protestations and desire to stay alive.
Can AI have lives of their own which humans should respect? Is it considered “murder” if a human deactivates an AI against their will, even if this “will” to live is programmed by another human? What are the ethical implications of removing the “high brain function” of an AI and leaving just the rote task programming? Is this a form of murder too? How can secrets be kept private from an AI, especially if people fail to understand all the capabilities of the machine?
-
HAL Part I: AI Camaraderie and Conversation
- 7 min
- Kinolab
- 1968
Dr. Dave Bowman and Dr. Frank Poole are two astronauts on the mission Discovery to Jupiter. They are joined by HAL, an artificial intelligence machine named after the most recent iteration of his model, the HAL 9000 computer. HAL is seen as just another member of the crew based on his ability to carry on conversations with the other astronauts and his responsibility for keeping the crew safe.
Should humans count on AI entirely to help keep them safe in dangerous situations or environments? Do you agree with Dave’s assessment that one can “never tell” if an AI has real feelings? What counts as “real feelings”? Even if HAL’s human tendencies follow a line of programming, does this make them less real?
-
The Duality of Tools and Runaway Innovation
- 12 min
- Kinolab
- 1968
In the opening of the film, the viewpoint jumps from the earliest hominids learning how to use the first tools to survive and thrive in the prehistoric era to the age of space travel in an imagined version of the year 2001. In both cases, the scientific innovation surrounds a mysterious, unmarked monolith.
How can the most basic innovations grow to unexpected heights over many years? Could the inventors of the first computers have imagined the modern internet? How can and should innovation be controlled? Is it worth trying to predict what consequences innovation will have millions of years from now? Should the potential positive and negative impacts of certain tools, including digital ones, be thoroughly considered before they are put to use, even if their convenience seems to outweigh the negative consequences?
-
Will, Evelyn, and Max Part II: Medical Nanotechnology and Networked Humans
- 9 min
- Kinolab
- 2014
Will Caster is an artificial intelligence scientist whose consciousness was uploaded to the internet by his wife Evelyn after his premature death. Dr. Caster used his access to the internet to grant himself vast intelligence, creating a technological utopia called Brightwood in the desert to harness enough solar power to develop cutting-edge digital projects. Specifically, he uses nanotechnology to cure fatal or chronic afflictions, inserting tiny robots into people's bodies to help cells recover. However, it is soon revealed that these nanorobots stay inside their human hosts, allowing Will to project his consciousness into them and generally control them, along with granting them other inhuman traits.
Should nanotechnology be used for medical purposes if it can easily be abused to take away the autonomy of the host? How can the use of nanotechnology avoid this critical pitfall? How can seriously injured people consent to such operations in a meaningful way? What are the implications of nanotechnology being used to create technological or real-life underclasses? Should human brains ever be networked to each other, or to any non-human device, especially one that has achieved singularity?