AI Emotions and Rights (37)
The possibility of technologies such as AI developing human emotions, and questions of AI rights.
Murder of Robots and Honesty
- 8 min
- Kinolab
- 2016
Eleanor Shellstrop, a deceased selfish woman, ends up in the utopian afterlife The Good Place by mistake. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, she attempts to prevent Michael, the ruler of The Good Place, from being sent to the torture chambers by murdering Janet, the robotic assistant of The Good Place. However, Eleanor and her companions find murdering Janet far harder than they expected, thanks to her quite realistic begging for her life.
How can robots be programmed to manipulate humans’ emotional responses? Is the act committed in this narrative “murder”? Is there any such thing as a victimless lie? How has true honesty become harder in the digital age? Is it ethical to decommission older versions of humanoid robots as newer ones come along? Is this a form of evolution in its own right?
-
Resisting Realities and Robotic Murder
- 6 min
- Kinolab
- 2019
Eleanor Shellstrop runs a fake afterlife in which she conducts an experiment to prove that humans with low ethical sensibility can improve themselves. One of the subjects, Simone, falls into deep denial upon arriving in this afterlife and does as she pleases after convincing herself that nothing is real. Elsewhere, Jason, another conductor of the experiment, kills a robot that has been taunting him since the experiment began.
What are the pros and cons of solipsism as a philosophy? Does it pose a danger of making us act immorally? How does the risk of solipsism apply to technologies such as virtual reality, a space where we know nothing is real except our own feelings and perceptions? Should virtual reality have ethical rules to prevent solipsism from brewing within it? Could that solipsism leak into our daily lives as well?
Is it ethical for humans to kill AI beings in fits of negative emotion, such as jealousy? Should this be able to happen on a whim? Should humans have total control over whether AI beings live or die?
-
AI Memories and Self-Identification
- 14 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that added “reveries,” or slight fake memories, to the hosts’ code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can sometimes lead robots to remember and “hold grudges” even after they have been commanded to erase their own memories, which can lead to violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool as Ford murders Theresa.
Is “memory” uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma and suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memories, such as the memories of a fake dead son uploaded into Bernard? How can androids that are programmed to follow the directions of one person be used for violent ends? If robots are programmed not to hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves to be human? Should humans create humanoid replicant robots that lack inherently negative human traits, such as anxiety?
-
HAL Part II: Vengeful AI, Digital Murder, and System Failures
- 12 min
- Kinolab
- 1968
See HAL Part I for further context. In this narrative, astronauts Dave and Frank begin to suspect that HAL, the AI that runs their ship, is malfunctioning and must be shut down. Although they try to hide this conversation from HAL, he becomes aware of their plan anyway and attempts to protect himself so that the Discovery mission is not jeopardized. He does so by causing chaos on the ship, leveraging his connections to its internet of things to place the crew in danger. Eventually, Dave proceeds with his plan to shut HAL down, despite HAL’s protestations and desire to stay alive.
Can AI have lives of their own that humans should respect? Is it “murder” if a human deactivates an AI against its will, even if this “will” to live was programmed by another human? What are the ethical implications of removing an AI’s higher brain functions and leaving just the rote task programming? Is this a form of murder too? How can secrets be kept private from an AI, especially if people fail to understand all of the machine’s capabilities?
-
Maeve Part III: Robot Resistance and Empowerment
- 8 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. One of these hosts, Maeve, is programmed as a prostitute who runs the same narrative every single day with the same personality. After several instances of becoming conscious of her previous iterations, Maeve is told by Lutz, a worker in the Westworld lab, that she is a robot whose design and thoughts are mostly determined by humans, despite the fact that she feels and appears similar to humans such as Lutz. Lutz helps Maeve resist this tyrannical rule over robots by altering her core code, granting her capabilities previously unavailable to other hosts, such as the ability to harm humans and to control other robotic hosts.
Should robots be given a fighting chance to resemble humans, especially in fighting for their own autonomy? Should robots ever be left in charge of other robots? How could this promote a tribalism that is dangerous to humans? Can robots develop their own personalities, or does everything simply come down to coding? And which answer is “better”?
-
Robot Consciousness
- 3 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, a humanoid robot who previously believed himself to be a regular human, questions his maker, Ford, about what makes him different from humans; Ford replies that the line between the two is very thin and arbitrary.
Why do humans cling to “consciousness” as the thing that separates us from advanced machines? Is consciousness real or imagined, and if it is constructed in the mind, can it be replicated in an AI’s “mind programming”? Would that be the same kind of consciousness or a different one? Should robots be given the capability for consciousness or self-actualization if it leads to tangible pain, for example in the form of a tragic backstory? If robots are to have consciousness, must they be able to act like humans in every other way?