All Narratives (328)
Find narratives by ethical themes or by technologies.

Moral Character, Genuineness, and Appearances
- 3 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, has ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, her friend and ethics teacher, Chidi, teaches her about the ethical concept of moral character, the idea that a person becomes good through the good actions they choose to perform. He implores Eleanor to practice this by building a friendship with Tahani, a pretentious neighbor whom Eleanor takes for fake and shallow.
Does moral virtue stem from your actions or your intentions? How do the stated intentions behind certain digital technologies, such as social networks, compare with their actual impacts? How do digital communication channels encourage a certain falseness or shallowness? Do digital technologies make it easier to do good in the world?

Utilitarianism and Contractualism
- 7 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, has ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, her friend and ethics teacher, Chidi, teaches her about the ethical concepts of utilitarianism, or producing as much net good as possible, and contractualism, or reciprocally upholding promises. For more context on the plot of the series, see the Wikipedia page for Season One: https://en.wikipedia.org/wiki/The_Good_Place_(season_1).
How can and should technology companies uphold the principles of utilitarianism and contractualism in creating new technologies and in their broader interactions with society? Do technology companies’ positive impacts outweigh their negative ones? What do technology companies owe to society at large in terms of advancement and social justice? Should technology and social media companies focus solely on “having fun,” or do they have a responsibility to pursue social entrepreneurship and equity goals?

Murder of Robots and Honesty
- 8 min
- Kinolab
- 2016
Eleanor Shellstrop, a selfish woman, has ended up in the utopian afterlife The Good Place by mistake after her death. She spins an elaborate web of lies to ensure that she is not sent to be tortured in The Bad Place. In this narrative, she attempts to prevent Michael, the ruler of The Good Place, from being sent to the torture chambers by murdering Janet, The Good Place’s robotic assistant. However, Eleanor and her companions find killing Janet far harder than they had prepared for, thanks to her strikingly realistic begging for her life.
How can robots be programmed to manipulate emotional responses in humans? Is the act committed in this narrative “murder”? Is there any such thing as a victimless lie? How has true honesty become harder in the digital age? Is it ethical to decommission older versions of humanoid robots as newer ones come along? Is this a form of evolution in its own right?

Trusting Machines and Variable Outcomes
- 9 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of each crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens. In this narrative, the PreCrime program is audited, and the officers must explain the ethics and philosophies behind their systems. After Captain John Anderton is accused of a future crime, he flees and learns of “minority reports,” instances of disagreement between the PreCogs that the department covers up to make the justice system seem infallible.
What are the problems with treating the results of computer algorithms as infallible or entirely objective? How are such systems prone to bias, especially when two different algorithms might make two different predictions? Is there any way that algorithms could make the justice system more fair? How might humans influence the results of a predictive crime algorithm to serve themselves? Does technology, especially an algorithm such as a crime predictor, need to be made more transparent to its users and the general public so that people do not trust it with a religious sort of fervor?

Retinal Scans and Immediate Identification
- 7 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of each crime, including the names of the victims and perpetrators. John Anderton, the former head of the PreCrime policing program, is named as a future perpetrator and must flee from his former employer. Because retinal-scanning biometric technology is so widespread, he is found quickly and must undergo an eye transplant. While he recovers in a run-down apartment, PreCrime officers deploy spider-shaped drones to scan the retinas of everyone in the building.
Is it possible that people would consent to having their retinas scanned in public places if it meant a more personalized experience of those spaces? Should governments be able to deceive people into giving up their private data, as social media companies already do? How can people protect themselves from retinal scanning and other biometric identification technologies, on both small and large scales?

Decryption and Machine Thinking
- 14 min
- Kinolab
- 2014
In the midst of World War II, mathematics prodigy Alan Turing is hired by the British government to help break Enigma, the cipher the Germans use to encrypt their messages. Turing builds an expensive machine meant to decipher the code mathematically, but the lack of speedy results incites the anger of his fellow codebreakers and the British government. After later being arrested for gross indecency, Turing discusses with a police officer the basis of the modern “Turing Test,” or how to tell whether one is interacting with a human or a machine. Turing argues that although machines think differently than humans do, theirs should still be considered a form of thinking. The work depicted in this film became a basis of the modern computer.
How did codebreaking help launch the computer age? What was Alan Turing’s impact on computing, and on the outcome of WWII? How can digital technologies turn the tides of a war for the better? Are computers in our age too advanced for codes to stay secret for long, and is this a positive or a negative? How do machines think? Should a machine’s intelligence be judged by the same standards as human intelligence?