Ways in which AI can match and even surpass the scope of human knowledge, sustain itself, and ‘reproduce’.
Singularity (26)
- 7 min
- Kinolab
- 2017
Fabricated Memories and Believability
In his hunt for a missing android child, the robot police officer K visits Dr. Ana Stelline to determine whether a memory from his own childhood is real or fabricated. Dr. Stelline is in the business of creating false memories to implant into robots’ heads in order to make them seem more believably human. She argues that having memories to lean on as one experiences the world is a cornerstone of the human experience.
Do you agree with Dr. Stelline’s assessment in this narrative? Does achieving singularity through fabricated memories truly make AI any more human, or would they still be nothing more than replicants? Conversely, what issues are raised by uploading authentic human memories into a robot? How does that affect the agency and identity of real humans, on small and large scales? What makes us human, if not our memories and consciousness, and if AI have those as well, do they achieve personhood? How do digital technologies already abstract the concept of memory, and can that abstraction be extended any further?
-
- 10 min
- Kinolab
- 2017
Robot Expendability and Labor
In the year 2049, humanoid robots known as “replicants” work as slave laborers in various space colonies for humankind. “Blade Runners,” like K shown here, are specialized police officers who are tasked with tracking down and killing escaped robots. Over the years, models have become more advanced and human-like, which is one of the reasons K, one of the newest replicant models, is tasked with killing the farmer, an older model. The ultimate goal of corporate villain CEO Niander Wallace is to create replicants which can reproduce exactly as humans can, essentially becoming an infinite source of human labor. He sees the newest “Angel” model as the key to this.
If robots are created to essentially live human lives, can they simply be destroyed once their model is outdated and something newer comes along? Are AI entitled to compensation and reward for any labor they complete, especially if they experience sensations in a way similar to humans? If AI are minding their own business and not harming anyone, do they need to be eliminated? Who can prevent corporations from using humanoid robots as unpaid laborers, and how? Should robots ever be forced to destroy their own kind?
-
- 13 min
- Kinolab
- 2020
Prototypes, Evolution, and Replacement with Robots
George Almore is an engineer working for a company which hopes to achieve singularity with robots, making their artificial intelligence one step above real humans. To this end, he works with three prototypes, J1, J2, and J3, each more advanced than the last. Simultaneously, he plans to upload his dead wife’s consciousness into the J3 robot in order to extend her life. The narrative begins with him explaining his goal to J3 as he puts this robot through taste and emotion tests. Eventually, J3 evolves into a humanoid robot who takes on the traits of George’s wife, leaving the two earlier versions, who share a sibling-like bond with her, feeling neglected.
Are taste and emotion examples of necessary elements of creating advanced AI? If so, why? What good does having these abilities serve in terms of the AI’s relationship to the human world? Is it right to transfer consciousness or elements of consciousness from a deceased person into one or several AI? In the AI, how much is too much similarity to a pre-existing person? Can total similarity ever be achieved, and how? Can advanced AI feel negative human emotions and face mental health problems such as depression? Is it ethical to program AI to feel such emotions, knowing the risks associated with them, including bonding with former or flawed prototypes of itself? If an AI kills itself, does the onus fall on the machine or the human creator?
-
- 3 min
- Kinolab
- 2016
Robot Consciousness
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts” that are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, a humanoid robot who previously believed himself to be a regular human, questions his maker, Ford, on what makes him different from humans, to which Ford replies that the line is very thin and arbitrary.
Why do humans cling to ‘consciousness’ as the thing that separates us from advanced machines? Is consciousness real or imagined, and if it is constructed in the mind, can it be replicated in AI’s ‘mind programming’? Would that be a same or different kind of consciousness? Should robots be given the capability for consciousness or self-actualization if that leads to tangible pain, for example in the form of a tragic backstory? If robots are to have consciousness, do they need to be able to essentially act like a human in every other way?
-
- 8 min
- Kinolab
- 2016
Maeve Part III: Robot Resistance and Empowerment
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts” that are designed in a lab and constantly updated to seem as real and organic as possible. One of these hosts, Maeve, is programmed to be a prostitute who runs the same narrative every single day with the same personality. After several instances of becoming conscious of her previous iterations, Maeve is told by Lutz, a worker in the Westworld lab, that she is a robot whose design and thoughts are mostly determined by humans, despite the fact that she feels and appears similar to humans such as Lutz. Lutz helps Maeve in her resistance against tyrannical rule over robots by altering her core code, allowing her to access capabilities previously unavailable to other hosts, such as the ability to harm humans and to control other robotic hosts.
Should robots be given a fighting chance to resemble humans, especially in fighting for their own autonomy? Should robots ever be left in charge of other robots? How could this promote a tribalism which is dangerous to humans? Can robots develop their own personality, or does everything simply come down to coding, and which way is “better”?
-
- 14 min
- Kinolab
- 2016
AI Memories and Self-Identification
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts” that are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update to add “reveries,” or slight fake memories, into the coding of the robots to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can sometimes lead robots to remember and “hold grudges” even after they have been asked to erase their own memory, something that can lead to violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool as Ford murders Theresa.
Is ‘memory’ uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma/suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memory, such as uploading memories of a fake dead son into Bernard? How can androids which are programmed to follow the directions of one person be used for violent ends? If robots are programmed to not hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves human? Should humans create humanoid replicant robots that do not possess any inherently negative human traits, such as anxiety?