Film Clips
Find narratives by ethical themes or by technologies.
- 9 min
- Kinolab
- 2017
Consequences of Digital Hyperempathy
In a short vignette told by a museum curator, a doctor known as Dawson devises a brain-computer interface that lets him feel his patients’ physical sensations so that he can diagnose them more quickly. However, his use of this technology ends up bizarrely warping his psychology, putting himself and others in danger.
How should technology that allows us to feel the sensations of others be regulated? What are the pros and cons of hyperempathy technology? How can autonomy over one’s own body be assured when such technology exists?
-
- 6 min
- Kinolab
- 2017
Digitally Immortal Vessels and Eternity
After his wife Carrie dies, Jack first has her consciousness uploaded into his own brain as code. Once this arrangement proves unworkable, her digitized consciousness is transferred into a digital monkey toy given to their son Parker so that Carrie can continue to spend time with him. However, Carrie can communicate only in a binary manner, with just two phrases to express happiness or unhappiness.
How can developers of digital immortality technology ensure that it is ethical from the outset? Can something like this ever be “piloted” when lives are at stake? How can people ensure that digital lives do not last for a true eternity, especially if those existences are mundane? How can humans keep control of their own existence in scenarios such as this?
-
- 11 min
- Kinolab
- 2017
Technological Tortures and Traps
Museum curator Rolo shows off his exhibit of Clayton, a former death-row inmate whose consciousness was digitized during one of Rolo’s experiments. Despite evidence of his innocence, Clayton was put to death, and his digitally immortal consciousness has since been tortured inside Rolo’s museum, where guests can simulate the electric-chair shock on the holographic Clayton; the repeated shocks eventually leave him in a conscious but vegetative state. Clayton’s daughter Nish shows up to settle the score, trapping Rolo in an eternal state of torture inside a small digital device.
How can people protect their digital consciousness after they pass away? Can anyone ever be fully trusted to handle code or programs that represent someone else’s existence or consciousness? How does the existence of racial bias and violence make the concept of an eternal digital consciousness far more harrowing?
-
- 15 min
- Kinolab
- 2017
Online Dating Algorithms
In a world in which the program Coach determines the pairing and duration of romantic matches, Frank and Amy are matched more than once and eventually fall in love after failed matches with other people. When Frank breaks a promise to Amy by checking the expiry date automatically assigned to every relationship, they temporarily break up. After reuniting, they set out to discover the truth of their reality and the meaning of their match.
Should machine learning algorithms, even the most sophisticated ones, be trusted with deeply emotional matters like love? Can simulations and algorithms account for everything in a person’s experience of love? How could the algorithmic bias present in real-life matching programs enter the virtual reality system shown here? How can advanced simulations be distinguished from reality? Has the digital age moved the dating experience firmly past the “old days” of falling in love, and should this be embraced?
-
- 30 min
- CNET, New York Times, Gizmodo
- 2023
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified in front of Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the hearing’s top moments, read the Gizmodo overview and then the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that produce AI products accountable, especially given the lack of fully effective congressional action on social media companies. While misinformation and deepfakes have concerned politicians since the advent of social media, the hearing also raises newer issues, such as another wave of job losses and how artists should be credited.
If you were in the position of the congresspeople at the hearing, what questions would you ask Sam Altman? Does Sam Altman put too much of the onus of ethical regulation on the government? How would the “license” approach apply to AI companies that already exist or have already released popular products? Do you believe Congress might still be able to “meet the moment” on AI?