Film Clip (143)
Find narratives by ethical themes or by technologies.
Preventative Policing and Surveillance Information
- 13 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that reveals the surrounding details of each crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens.
How will predicted crime be prosecuted? Should predicted crime be prosecuted at all? How could technologies such as the ones shown here be corrupted by human bias? How might these devices exacerbate racist policing practices? Would certain communities be disproportionately targeted? Is there ever any justification for constant civilian surveillance?
-
Retinal Scans and Immediate Identification
- 7 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that reveals the surrounding details of each crime, including the names of the victims and perpetrators. John Anderton, the former head of the PreCrime policing program, is named as a future perpetrator and must flee from his former employer. Because retinal-scanning biometric technology is widespread, he is found quickly and must undergo an eye transplant. While he recovers in a run-down apartment, PreCrime officers deploy spider-shaped drones to scan the retinas of everyone in the building.
Is it possible that people would consent to having their retinas scanned in public places if it meant a more personalized experience of those spaces? Should governments be able to deceive people into giving up their private data, as social media companies already do? How can people protect themselves from retinal scanning and other biometric identification technologies, on both small and large scales?
-
Murder of Robots and Honesty
- 8 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, she attempts to prevent Michael, the ruler of The Good Place, from being sent to the torture chambers by murdering Janet, the robotic assistant of The Good Place. However, Eleanor and her companions find murdering Janet far harder than they expected, thanks to her strikingly realistic begging for her life.
How can robots be programmed to manipulate emotional responses in humans? Is the act committed in this narrative “murder”? Is there ever any such thing as a victimless lie? How has true honesty become harder in the digital age? Is it ethical to decommission older versions of humanoid robots as newer ones come along, and does that decommissioning amount to a form of evolution in its own right?
-
AI Memories and Self-Identification
- 14 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic beings known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that adds “reveries,” or slight fake memories, to the hosts’ code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can lead hosts to remember and “hold grudges” even after they have been asked to erase their own memories, which can produce violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool when Ford has Theresa murdered.
Is “memory” uniquely human? What role does memory play in creating advanced AI consciousness? Does memory of trauma or suffering ultimately create AI that is hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memories, such as the memories of a fake dead son uploaded into Bernard? How can androids programmed to follow the directions of a single person be used for violent ends? If robots are programmed not to hurt humans, how can they protect themselves from bad actors, especially if they believe themselves to be human? Should humans create humanoid replicant robots that lack inherently negative human traits, such as anxiety?
-
Utilitarianism and Contractualism
- 7 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, her friend and ethics teacher, Chidi, teaches her about the ethical concepts of utilitarianism, which seeks the greatest possible net good, and contractualism, which rests on reciprocally upholding promises. For more context on the plot of the series, see the Wikipedia page for Season One: https://en.wikipedia.org/wiki/The_Good_Place_(season_1).
How can and should technology companies uphold the principles of utilitarianism and contractualism in the creation of new technologies and in their interactions with society? Do technology companies have enough positive impacts to compensate for their negative ones? What do technology companies owe to society at large, in terms of advancement and social justice? Should technology and social media companies focus solely on “having fun,” or do they have a responsibility to pursue social entrepreneurship and equity goals?
-
Moral Character, Genuineness, and Appearances
- 3 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, her friend and ethics teacher, Chidi, teaches her about the ethical concept of moral character, in which a person’s deliberate good actions ultimately make them a good person. He implores Eleanor to practice this by building a friendship with Tahani, a pretentious neighbor whom Eleanor takes to be fake and shallow.
Does moral virtue stem from your actions or from your intentions? How do the stated intentions behind certain digital technologies, such as social networks, compare and contrast with their actual impacts? How do digital communication channels encourage a certain level of falseness or shallowness? Do digital technologies make it easier to do good in the world?