AI (124)
Find narratives by ethical themes or by technologies.
-
Digitally Reproducing Humans and “Possession”
- 3 min
- Kinolab
- 2019
In an imagined future London, citizens across the globe are connected to the Feed, a device and network accessed constantly through a brain-computer interface. In this narrative, Tom and Ben discover that their father Lawrence, the creator of the Feed, harvested the Feeds of dead people and used the data stored therein to upload their consciousnesses, including memories and emotions, to a cloud. Using Lawrence’s uploads as training data, an AI was then able to generate many more digital consciousnesses belonging to no real person. These consciousnesses can then “possess” human bodies by being uploaded to the Feed devices implanted in real people’s brains.
Do we as humans need the physical world and our bodies, or can we successfully transfer or upload consciousness and live only in the digital space? Is consciousness the same thing as a soul, or something different? Are the people discussed in this clip human, AI, or something in between? What are the far-reaching consequences of AI potentially being able to create realistic consciousnesses? How can brain-computer interfaces implanted in a person lead to a complete loss of their autonomy? Can and should humans choose to donate their consciousnesses, memories, or emotions to science, knowing little about how these may ultimately be deployed?
-
Maeve Part III: Robot Resistance and Empowerment
- 8 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts” that are designed in a lab and constantly updated to seem as real and organic as possible. One of these hosts, Maeve, is programmed to be a prostitute who runs the same narrative every single day with the same personality. After several incidents of becoming conscious of her previous iterations, Maeve is told by Lutz, a worker in the Westworld lab, that she is a robot whose design and thoughts are mostly determined by humans, despite the fact that she feels and appears similar to humans such as Lutz. Lutz helps Maeve in her resistance against tyrannical rule over robots by altering her core code, giving her access to capabilities previously unavailable to other hosts, such as the ability to harm humans and the ability to control other robotic hosts.
Should robots be given a fighting chance to resemble humans, especially in fighting for their own autonomy? Should robots ever be left in charge of other robots? How could this promote a tribalism that is dangerous to humans? Can robots develop their own personality, or does everything simply come down to coding, and which way is “better”?
-
AI Memories and Self-Identification
- 14 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts” that are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that added “reveries,” or slight fake memories, to the robots’ code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can sometimes lead robots to remember and “hold grudges” even after they have been asked to erase their own memories, which can lead to violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool when Ford uses it to murder Theresa.
Is ‘memory’ uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma and suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memories, such as the memories of a fake dead son uploaded into Bernard? How can androids that are programmed to follow the directions of one person be used for violent ends? If robots are programmed not to hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves to be human? Should humans create humanoid replicant robots that do not possess inherently negative human traits, such as anxiety?
-
Trusting Machines and Variable Outcomes
- 9 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of the crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens. In this narrative, the PreCrime program is audited, and the officers must explain the ethics and philosophies behind their systems. After captain John Anderton is accused of a future crime, he flees and learns of “minority reports,” instances of disagreement between the PreCogs that the department covers up to make the justice system seem infallible.
What are the problems with taking the results of computer algorithms as infallible or entirely objective? How are such systems prone to bias, especially when two different algorithms might make two different predictions? Is there any way that algorithms could make the justice system more fair? How might humans manipulate the results of a predictive crime algorithm to serve themselves? Does technology, especially an algorithm such as a crime predictor, need to be made more transparent to its users and the general public so that people do not trust it with an almost religious fervor?
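The film’s “minority report” has a real analogue in machine learning: disagreement within an ensemble of predictors. The sketch below is purely illustrative Python; the predictor names and data are invented for discussion, not drawn from the film’s fictional system or any real predictive-policing tool. It shows how a majority verdict can be computed while the dissenting prediction is surfaced instead of suppressed.

```python
# Illustrative sketch only: three hypothetical predictors voting on an
# outcome, mirroring the film's three PreCogs. All names and data here
# are invented for discussion, not taken from any real system.

def majority_report(predictions):
    """Return the majority verdict plus any dissenting 'minority report'."""
    votes = {}
    for name, verdict in predictions.items():
        votes.setdefault(verdict, []).append(name)
    # Note: with an even split, max() picks one side arbitrarily.
    majority_verdict = max(votes, key=lambda v: len(votes[v]))
    dissenters = {name: verdict for name, verdict in predictions.items()
                  if verdict != majority_verdict}
    return majority_verdict, dissenters

# Hypothetical example: two predictors agree, one disagrees.
predictions = {"precog_a": "murder", "precog_b": "murder",
               "precog_c": "no crime"}
verdict, minority = majority_report(predictions)
print(verdict)   # murder
print(minority)  # {'precog_c': 'no crime'} -- the would-be minority report
```

Discarding the `dissenters` output is exactly the design choice the clip criticizes: the system appears infallible only because its internal disagreement is never shown.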
-
Preventative Policing and Surveillance Information
- 13 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of the crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens.
How will predicted crime be prosecuted? Should predicted crime be prosecuted? How could technologies such as the ones shown here be affected for the worse by human bias? How would these devices make racist policing practices even worse? Would certain communities be targeted? Is there ever any justification for constant civil surveillance?
-
Decryption and Machine Thinking
- 14 min
- Kinolab
- 2014
In the midst of World War II, mathematics prodigy Alan Turing is hired by the British government to help break Enigma, the cipher used by the Germans to encrypt their messages. Turing builds an expensive machine meant to decipher the code mathematically, but the lack of speedy results incites the anger of his fellow codebreakers and the British government. After later being arrested for public indecency, Turing discusses with the officer the basis for the modern “Turing Test,” or how to tell whether one is interacting with a human or a machine. Turing argues that although machines think differently than humans, it should still be considered a form of thinking. The work depicted in this film became a basis of the modern computer.
How did codebreaking help launch computers? What was Alan Turing’s impact on computing, and on the outcome of WWII? How can digital technologies turn the tides of a war for the better? Are computers in our age too advanced for codes to stay secret for long, and is this a positive or a negative? How do machines think? Should a machine’s intelligence be judged by the same standards as human intelligence?
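As a toy illustration of the crib-based search the film dramatizes (Turing’s machine tested Enigma settings against a guessable plaintext fragment, or “crib,” such as a word from a routine weather report), here is a hedged Python sketch that brute-forces a simple Caesar cipher using a crib. Enigma was vastly more complex, so this shows only the shape of the idea: enumerate candidate keys, decrypt, and keep the keys whose output contains the expected fragment.

```python
# Toy analogy to crib-based codebreaking: brute-force a Caesar cipher
# by testing every key against a known plaintext fragment ("crib").
# Enigma used rotors and plugboards, not a single shift; this sketch
# only illustrates the enumerate-keys-and-test-the-crib idea.

def caesar_decrypt(ciphertext: str, key: int) -> str:
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('A')  # assumes uppercase text
            out.append(chr((ord(ch) - base - key) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def crack_with_crib(ciphertext: str, crib: str) -> list[tuple[int, str]]:
    """Return every (key, plaintext) pair whose decryption contains the crib."""
    return [(key, caesar_decrypt(ciphertext, key))
            for key in range(26)
            if crib in caesar_decrypt(ciphertext, key)]

# Hypothetical message: German for "weather report cold", encrypted with
# key 3 (decrypting with -key is the same as encrypting with +key).
ciphertext = caesar_decrypt("WETTERBERICHT KALT", -3)
print(crack_with_crib(ciphertext, "WETTERBERICHT"))  # [(3, 'WETTERBERICHT KALT')]
```

The real Bombe applied the same test-and-discard logic across rotor orders and plugboard hypotheses at electromechanical speed, which is one answer to the question of how codebreaking helped launch computers: the search was mechanized, not done by cleverness alone.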