Robotics (62)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Technology Review
- 2019
When algorithms mess up, the nearest human gets the blame
Humans take the blame for failures of automated AI systems, protecting the integrity of the technology and becoming a “liability sponge.” The role of humans in sociotechnical systems needs to be redefined.
Should humans take the blame for algorithm-created harm? At what level (development, corporate, or personal) should this liability occur?
-
- 13 min
- Kinolab
- 2020
Prototypes, Evolution, and Replacement with Robots
George Almore is an engineer at a company hoping to achieve the singularity: robots whose artificial intelligence is one step beyond that of real humans. He works with three prototypes, J1, J2, and J3, each more advanced than the last, and simultaneously plans to upload his dead wife’s consciousness into the J3 robot to extend her life. The narrative begins with George explaining this goal to J3 while putting the robot through taste and emotion tests. Eventually, J3 evolves into a humanoid robot who takes on the traits of George’s wife, leaving the two earlier prototypes, who share a sibling-like bond with her, feeling neglected.
Are taste and emotion necessary elements of advanced AI? If so, why? What good do these abilities serve in the AI’s relationship to the human world? Is it right to transfer consciousness, or elements of consciousness, from a deceased person into one or several AI? How much similarity between an AI and a pre-existing person is too much? Can total similarity ever be achieved, and how? Can advanced AI feel negative human emotions and face mental health problems such as depression? Is it ethical to program AI to feel such emotions, knowing the risks associated with them, including bonding with former or flawed prototypes of itself? If an AI kills itself, does the onus fall on the machine or on its human creator?
-
- 4 min
- Kinolab
- 2001
Robots and Humankind Purists
David and Joe, two humanoid androids known as “Mechas,” are captured to be featured in the “Flesh Fair.” In this horrifying attraction, ringmaster Lord Johnson-Johnson destroys Mechas in front of an enthusiastic crowd using brutal and painful torture methods. However, as David begs for his own life, the crowd hesitates.
How can we ensure that robots that closely resemble humans do not face unjust punishment or torture solely because they are robots? How likely is humankind to be fully tolerant of androids like the Mechas? Who can properly advocate for AI rights? Would AI need to hold positions in government for such advocacy to be effective?
-
- 12 min
- Kinolab
- 1982
Distinguishing Between Robots and Humans
In dystopian 2019 Los Angeles, humanoid robots known as “replicants” are on the loose and must be tracked down and killed by bounty hunters. Replicants normally serve as laborers in space colonies; they were never meant to integrate into human society. The first two clips demonstrate the Voigt-Kampff test, this universe’s version of the Turing Test, used to determine whether someone is a replicant or a human. While the android Leon is discovered and retaliates quickly, Rachael, a more advanced model, hides her status as an android for longer because she herself believes she is human, owing to implanted memories. When this secret is revealed, Rachael is deeply upset.
Will “Turing Tests” such as the one shown here become common practice if AI becomes seemingly indistinguishable from humans? In this universe, the principal criterion for identifying an android is whether it displays empathy toward animals. Is this a fair standard by which to judge a machine? Do all humans show empathy toward animals? If AI can replicate humans, must they disclose their status as androids? Why? What makes Rachael’s life less “real” than any other human’s? What are the dangers of giving away human memories to AI?
-
- 11 min
- Kinolab
- 1982
Meaning and Duration of Android Lives
Roy Batty is a rogue humanoid android, known as a “replicant,” who escaped his position as an unpaid laborer in a space colony and now lives among humans on Earth. After discovering that he has a lifespan of only four years, Roy breaks into the penthouse of his creator, Eldon Tyrell, and implores him to find a way to prolong his life. When Tyrell refuses and instead lauds Roy’s advanced design, Roy kills him, despite seeing him as a sort of father figure. Fleeing the penthouse, Roy is found by android bounty hunter Rick Deckard, who chases him across the rooftops. After a short confrontation with Deckard, Roy delivers a monologue on his sorry state of affairs.
Should robots modeled to act like real humans be given a predetermined, short lifespan? Should such robots ever be expected to perform uncompensated work? How should the creators of robots give their creations the opportunity to make meaning of their lives? Who is ultimately responsible for “parenting” a sentient AI?
-
- 7 min
- Kinolab
- 2002
Retinal Scans and Immediate Identification
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that provides the surrounding details of each crime, including the names of the victims and perpetrators. John Anderton, the former head of the PreCrime policing program, is named as a future perpetrator and must flee from his former employer. Because retinal-scanning biometric technology is widespread, he is found quickly and must undergo an eye transplant. While he recovers in a run-down apartment, PreCrime officers deploy spider-shaped drones to scan the retinas of everyone in the building.
Is it possible that people would consent to having their retinas scanned in public places if it meant a more personalized experience of those spaces? Should governments be able to deceive people into giving up their private data, as social media companies already do? How can people protect themselves from retinal scanning and other biometric identification technologies, on both small and large scales?