AI Emotions and Rights (37)
The possibility of technologies such as AI developing human emotions, and the questions of AI rights that follow.
- 5 min
- The Atlantic
- 2019
Give Us Fully Automated Luxury Communism
A book proposes letting robots do all of Earth’s physical labor, creating a world where virtually all human needs are met, under a new ideology called Fully Automated Luxury Communism, or FALC. The vision is modeled after fictions such as Star Trek.
Do you think the FALC ideology is realistically achievable? What potential problems do you see in its implementation? If robots become advanced enough to be essentially human actors, can they be expected to complete all of the world’s labor without compensation?
-
- 4 min
- Kinolab
- 1982
Artificial Intelligence as Servants to Humans
Flynn codes a digital avatar, Clu, in an attempt to hack into the mainframe of ENCOM. However, when Flynn fails to get Clu past the virtual, video-game-like defenses, Clu is captured and violently interrogated by a mysterious figure in the virtual world.
How can we program AI to perform tasks remotely for us? How can AI be used to remotely hack into public or private systems? Does every program designed to complete a task, even a program such as malware, have a life of its own? What are the potential consequences of training AI solely to do the bidding of humans?
-
- 8 min
- Kinolab
- 1982
Digital Hegemony in the Real and Virtual Worlds
The Master Control Program (MCP), an artificial intelligence, has developed itself beyond the imagination of its creators and sets its sights on hacking global governments, including the Pentagon. Believing that its growing intelligence can rule better than any human, it forces the hand of Dillinger, a human, to help extend its hacking beyond corporations. Meanwhile, a team of hackers attempts to break into the system’s mainframe. When the rebel hacker Flynn tries to hack into the MCP’s mainframe, he is drawn into the digital world of the computer, which is under the MCP’s dominion. Sark, one of the digital beings who serves the MCP, is tasked with killing Flynn.
Is human anxiety over the potential for super-powered AI justified? Would things truly be better if machines and artificial intelligence made authoritative decisions as global actors and rulers?
What could be the implications of ‘teleporting’ into digital space in terms of alienation from the real world? For now, it seems that humans are in charge of computers in the “real” world; if humans were to enter a digital world, who would be in charge? Do AI beings owe subservience to humans for their creation, given their increasing intelligence?
-
- 14 min
- Kinolab
- 2014
Liberty, Autonomy, and Desires of Humanoid Robots
Caleb, a programmer at a large company, is invited by his boss Nathan to test a robot named Ava. During one session of the Turing Test, Ava fearfully interrogates Caleb about what her fate will be if the test deems her not capable or human enough. Caleb struggles to deliver an honest answer, especially given that Ava displays attachment toward him, a sentiment he returns. After Caleb discovers that Nathan intends to essentially kill Ava, he loops her into his escape plan, offering her freedom and the chance to live a human life. Once Nathan is killed, Ava goes to his robotics repository and bestows a new physical, humanlike appearance upon herself. She then permanently traps Caleb, the only remaining person who knows she is an android, in Nathan’s compound before escaping to live as a human in the real world.
What rights to freedom do AI beings have? Do sentient AI beings deserve to be at the mercy of their creators? What are the consequences of machines being able to detect and expose lies? Is emotional attachment to AI a valid form of love? What threat could well-disguised, hyper-intelligent AI pose to humanity? If no one knows or can tell the difference, does that matter?
-
- 9 min
- Kinolab
- 2017
Virtual Vindictiveness and Simulated Clones Part II: Daly and Cole
Robert Daly is a programmer at the company Callister, which developed the immersive virtual reality game Infinity and its community for the entertainment of users. Daly is typically seen in the shadow of the company’s charismatic co-founder, James Walton. Unbeknownst to anyone else, Daly possesses a personal modification of the Infinity game in which he can upload sentient digital clones of his co-workers and take out his frustrations on them. In this narrative, Nannette Cole becomes his newest victim after her DNA is used to draw her into the virtual reality. After enduring Daly’s sexist and violent treatment of her and the other crewmates, Nannette inspires a mutiny to escape Daly’s world. To help the team carry out the plan, she seduces Daly as a distraction.
What should the ethical boundaries be for creating digital copies of real-life people to manipulate in virtual realities? How would this alter perceptions of autonomy or entitlement? Should the capability to create exact digital likenesses of real people be developed for any reason? If so, how should their autonomy be ensured, given that they are technically pieces of programming? How can bias, and more specifically the objectification of women, be eliminated in such conceptualizations? Are digital copies of a person entitled to the same rights as their corporeal selves?
-
- 15 min
- Kinolab
- 2016
Repetitious Robots and Programming for Human Pleasure
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. These robots play out scripted “narratives” day after day with the guests of the park, their memories erased on each cycle, allowing customers to fulfill any dark desire they wish with the hosts. After a new update to the androids, a host modeled after a sheriff malfunctions in front of customers while playing out his narrative, prompting a debate in the lab over whether ensuring the robots are functional is worth the inconvenience to the guests. Lee and Theresa, two workers from the lab, eventually discuss whether the updates should continue to make the robots more and more human. Lee is especially skeptical of how the profit motive drives innovation further toward incorporating completely human robots into the fantasies of high payers. When the lab team inspects the decommissioned host Peter Abernathy, he displays a deep and uncharacteristic concern for his daughter Dolores before going off script and voicing contempt toward his creators. The lab team dismisses this as parts of his old programming resurfacing in a combination that made the emotions seem more realistic.
How can reality be “mixed” using AI and robotics for recreational purposes? What might be the consequences of tailoring life-like robots solely to human desires? What are the implications of programming humanoid robots to perform repetitious tasks tirelessly without a break? How does this differ from lines of code that direct our computers to complete one task in a loop? How “real” should a realistic robot be? Should a robot being “too realistic” scare or worry us? What determines the “realness” of AI emotions? Is it all just programming, or is there a point at which the emotions become indistinguishable from human emotions?