Film Clips
Find narratives by ethical themes or by technologies.
Community and Belonging
- 9 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, ends up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, the demons of the Bad Place try to wrest Eleanor’s soul away from the Good Place by convincing her that the Bad Place is where she truly belongs. This resonates with Eleanor, who was always a lone wolf and never found a community of people she liked. Ultimately, though, she fights to stay in the Good Place because of her fondness for the community of people she has come to know there.
Can our desire to be better outweigh our past actions? How do digital technologies help people find communities where they feel they belong? Does the intention to improve as a person matter just as much as actually improving?
-
Personal Statistics Tracking
- 3 min
- Kinolab
- 2017
Eleanor Shellstrop, a selfish woman, ends up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, she tracks her personal ethical point total with a device likened to a Fitbit: in theory, the more good actions she completes, the higher her score gets. For another narrative on personal ratings and point tracking, see “Lacie Parts I and II,” from the Black Mirror episode “Nosedive.”
Do corrupt motivations spoil moral deeds? Should digital technologies be used to track personal data that is more abstract than health statistics or step counts? What would be the consequences if such ratings were public?
-
Resisting Realities and Robotic Murder
- 6 min
- Kinolab
- 2019
Eleanor Shellstrop runs a fake afterlife in which she conducts an experiment to prove that humans with low ethical sensibility can improve themselves. One of the subjects, Simone, is in deep denial upon arriving in this afterlife and does as she pleases after convincing herself that nothing is real. Elsewhere, Jason, another conductor of the experiment, kills a robot that has been taunting him since the experiment began.
What are the pros and cons of solipsism as a philosophy? Does it pose a danger of making us act immorally? How does the risk of solipsism apply to a technology such as virtual reality, a space where we know nothing is real except our own feelings and perceptions? Should virtual reality have ethical rules to prevent solipsism from brewing within it? Could that solipsism leak into our daily lives as well?
Is it ethical for humans to kill AI beings in fits of negative emotion, such as jealousy? Should this be able to happen on a whim? Should humans have total control over whether AI beings live or die?
-
Relationships and Escapism with AI
- 7 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Dolores, one of these hosts, begins to fall in love with William, a human visitor, who reciprocates her feelings as he voices his unhappiness with the planned marriage waiting for him in the real world outside the park. Though Dolores is initially angry, she rejoins forces with William to search for a place beyond the theme-park Western reality that is all she has ever known.
Is William’s love for Dolores “true” love, or is it impossible for a human to truly love an AI, and vice versa? If AI are programmed to feel emotions, can their love be as real as human love? What issues may arise if robots become a means for humans to escape their real-life problems and complicated relationships? What are the potential consequences, for both robots and people, if robots escape the scenario for which they were specifically engineered and try to live a life in the real world? Should this be allowed?
-
Real vs. Virtual Assistance
- 3 min
- Kinolab
- 2020
Nora works as an “angel” figure, or assistant, in the digital afterlife known as Lakeview. Her job is to help the digitally immortal residents of this afterlife, such as Nathan, acclimate to their surroundings and their digital existences. However, Nora chooses to spend her breaks from work in the same virtual reality in which she operates during her job.
Should virtual reality spaces be operated, moderated, and staffed by human customer service representatives to ensure the best experience, or is it possible to automate customer service as well? How might virtual assistants such as Siri change the nature of people’s relationships with human service representatives? Should the people who work on VR projects be given special access to them? Could VR worlds be a viable way for employees to relax during breaks?
-
Prototypes, Evolution, and Replacement with Robots
- 13 min
- Kinolab
- 2020
George Almore is an engineer at a company that hopes to achieve the singularity with robots, making their artificial intelligence one step beyond that of real humans. To this end, he works with three prototypes, J1, J2, and J3, each more advanced than the last. Simultaneously, he plans to upload his deceased wife’s consciousness into the J3 robot in order to extend her life. The narrative begins with him explaining this goal to J3 as he puts the robot through taste and emotion tests. Eventually, J3 evolves into a humanoid robot who takes on the traits of George’s wife, leaving the two earlier versions, who share a sibling-like bond with her and with each other, feeling neglected.
Are taste and emotion necessary elements of advanced AI, and if so, why? What good do these abilities serve in the AI’s relationship to the human world? Is it right to transfer consciousness, or elements of consciousness, from a deceased person into one or several AI? How much similarity to a pre-existing person is too much, and can total similarity ever be achieved? Can advanced AI feel negative human emotions and face mental health problems such as depression? Is it ethical to program AI to feel such emotions, knowing the risks associated with them, including bonding with former or flawed prototypes of itself? If an AI kills itself, does the onus fall on the machine or on its human creator?