Human Control of Technology (67)
Find narratives by ethical themes or by technologies.
Artificial Intelligence as Servants to Humans
- 4 min
- Kinolab
- 1982
Flynn codes a digital avatar, Clu, in an attempt to hack into the mainframe of ENCOM. However, when Flynn fails to get Clu past the virtual, video-game-like defenses, Clu is captured and violently interrogated by a mysterious figure in the virtual world.
How can we program AI to perform tasks remotely for us? How can AI be used to remotely hack into public or private systems? Does every program designed to complete a task, even a program such as malware, have a life of its own? What are the potential consequences of training AI solely to do the bidding of humans?
-
Digital Hegemony in the Real and Virtual Worlds
- 8 min
- Kinolab
- 1982
The Master Control Program, an artificial intelligence, has developed beyond the imagination of its creators and sets its sights on hacking global governments, including the Pentagon. Believing that its growing intelligence allows it to rule better than any human could, it forces the hand of Dillinger, a human, to help extend its hacking beyond corporations. Meanwhile, a team of hackers attempts to break into the mainframe of this system. When the rebel hacker Flynn tries to hack into the MCP’s mainframe, he is drawn into the digital world of the computer, which lies under the MCP’s dominion. Sark, one of the digital beings who serve the MCP, is tasked with killing Flynn.
Is human anxiety over the potential for super-powered AI justified? Would things truly be better if machines and artificial intelligence made authoritative decisions as global actors and rulers?
What could be the implications of ‘teleporting’ into digital space in terms of alienation from the real world? For now, it seems that humans are in charge of computers in the “real” world; if humans were to enter a digital world, who would be in charge? Do AI beings owe subservience to humans for their creation, given their increasing intelligence?
-
Liberty, Autonomy, and Desires of Humanoid Robots
- 14 min
- Kinolab
- 2014
Caleb, a programmer at a large company, is invited by his boss Nathan to test a robot named Ava. During one session of the Turing Test, Ava fearfully interrogates Caleb about what her fate will be if the test deems her insufficiently capable or human. Caleb struggles to deliver an honest answer, especially given that Ava displays attachment toward him, a sentiment he returns. After Caleb discovers that Nathan intends to essentially kill Ava, he loops her into his escape plan, offering her freedom and a chance to live a human life. Once Nathan is killed, Ava goes to his robotics repository and bestows a new physical, humanlike appearance upon herself. She then permanently traps Caleb, the only remaining person who knows she is an android, in Nathan’s compound before escaping to live as a human in the real world.
What rights to freedom do AI have? Do sentient AI beings deserve to be at the mercy of their creators? What are the consequences of machines being able to detect and expose lies? Is emotional attachment to AI a valid form of love? What threat could well-disguised, hyper-intelligent AI pose for humanity? If no one knows or can tell the difference, does that matter?
-
HAL Part I: AI Camaraderie and Conversation
- 7 min
- Kinolab
- 1968
Dr. Dave Bowman and Dr. Frank Poole are two astronauts on the Discovery mission to Jupiter. They are joined by HAL, an artificial intelligence named after the most recent iteration of his model, the HAL 9000 computer. HAL is seen as just another member of the crew, based on his ability to carry on conversations with the other astronauts and his responsibility for keeping the crew safe.
Should humans rely entirely on AI to keep them safe in dangerous situations or environments? Do you agree with Dave’s assessment that one can “never tell” whether an AI has real feelings? What counts as “real feelings”? Even if HAL’s human tendencies follow from lines of programming, does that make them any less real?
-
Repetitious Robots and Programming for Human Pleasure
- 15 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. These robots play out scripted “narratives” day after day with the guests of the park, their memories erased on each cycle, allowing customers to fulfill any dark desire they wish with the hosts. After the new update to the androids, one modeled after a sheriff malfunctions in front of customers while playing out his narrative, sparking a debate in the lab over whether ensuring the robots are functional is worth the inconvenience to the guests. Lee and Theresa, two workers from the lab, eventually discuss whether the updates should continue to make the robots more and more human. Lee is especially skeptical of how the profit motive drives the innovation further and further toward incorporating completely human robots into the fantasies of high payers. When the lab team inspects the decommissioned host Peter Abernathy, he displays a deep and uncharacteristic concern for his daughter Dolores before going off script and speaking of his contempt for his creators. The lab team dismisses this as parts of his old programming resurfacing in a combination that made the emotions seem more realistic.
How can reality be “mixed” using AI and robotics for recreational purposes? What might be some consequences of tailoring lifelike robots solely to human desires? What are the implications of programming humanoid robots to perform repetitious tasks tirelessly, without a break? How does this differ from lines of code that direct our computers to complete one task in a loop? How “real” should a realistic robot be? Should a robot being “too realistic” scare or worry us? What determines the “realness” of AI emotions? Is it all just programming, or is there a point at which the emotions become indistinguishable from human emotions?
-
Maeve Part I: Sex, Sentience, and Subservience of Humanoid Entertainment Robots
- 8 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. One of these hosts, Maeve, is programmed as a prostitute who runs the same narrative every single day with the same personality. During one loop, however, she has a “flashback” to a memory from a time when her programming contained a different narrative role. After the flashback glitch, Maeve is taken to the lab and reprogrammed before being returned to her role as a prostitute to fulfill the desires of the park’s guests. During this reprogramming, it is revealed that robots can conceptualize dreams and nightmares, cobbled together from old memories. During a maintenance check, Maeve is accidentally left on, and she escapes the operating room to discover the lab outside Westworld and the other robots.
Could AI develop a subconscious, or should it be limited to doing only what humans instruct it to do? What if robots gained an awareness of, and a repulsion to, the way humans treat them? How does the nature of sex change when robots are used instead of humans? What are the ethical issues of creating a humanoid robot designed to give consent regardless of treatment? Can AI contemplate their own mortality, and at that point, do they become “living beings”? If robots are programmed only for certain scenarios or certain personalities, what happens if they break free of that false reality?