AI Emotions and Rights (37)
Possibility of technologies such as AI developing human emotions, and questions of AI rights
HAL Part I: AI Camaraderie and Conversation
- 7 min
- Kinolab
- 1968

Dr. Dave Bowman and Dr. Frank Poole are astronauts aboard the Discovery mission to Jupiter. They are joined by HAL, an artificial intelligence named for his model, the HAL 9000 computer. HAL is treated as just another member of the crew, both for his ability to hold conversations with the other astronauts and for his responsibility to keep the crew safe.
Should humans count on AI entirely to help keep them safe in dangerous situations or environments? Do you agree with Dave’s assessment that one can “never tell” if an AI has real feelings? What counts as “real feelings”? Even if HAL’s human tendencies follow a line of programming, does this make them less real?
-
The Offspring: Robotic Reproduction and Rights to a Parental Role
- 11 min
- Kinolab
- 1990

Commander Data, an android, uses his technological skills to create a new android in his own image: his daughter Lal, built without human help or oversight. He then guides Lal through the process of joining the human world, allowing her to choose her own gender and appearance, teaching her about laughter, and warning her about human perceptions of difference. Ultimately, when he is asked to turn his daughter over to Starfleet, he refuses on the grounds that it is his obligation as Lal’s parent to help her mature and acclimate to society, and Captain Picard agrees that Lal is no one’s property but rather Data’s own child.
If robots such as Data and Lal exist as close to human sentience as they do, can they ever truly “belong” to anyone? How does Lal’s ability to choose her own appearance and gender (and by extension the capability of humanoid robots to appear in myriad different ways) complicate questions of human identity? Would humans have a right to control technological procreation as a means of limiting singularity?
-
Bonding, Creation, and Religion among the Digital
- 2 min
- Kinolab
- 1982

Tron, a security program within the digital world, is thought dead and mourned by his fellow programs Yori and Dumont.
Can programmed AI develop emotions and attachment to its maker? Could this be considered a sort of religious freedom for artificial intelligence? If so, is it ethical to use super-intelligent AI without considering its rights?
-
Emotion Chip: Bringing Robotic Life Closer to Human Life
- 5 min
- Kinolab
- 1990

Data’s father reveals that he would like to give Data an “emotion” chip, in the hope that it will enhance his experience of living among humans and increase his trust in others. However, Data is wary after seeing how emotions make his brother more volatile.
Can we and should we program AI to have emotions? What implications do emotions have for AI rights? If humans count on AI for quick and objective decision making, what impact might AI emotions have on this goal?
-
Empathic AI mirrors human emotions to help autistic children
- 5 min
- Silicon Angle
- 2019

Artificial companions assist developmentally disabled children, building on the principle that humans can form emotional connections with nonhuman objects. In fact, it is not exceedingly difficult for robots to read or mirror human emotions, which could have positive implications in workplace and educational settings.
Is it possible to develop an emotional connection with a robotic companion? How can robotic companionship improve behaviour, and how does it compare to human company?
-
Kids’ relationships and learning with social robots
- 5 min
- MIT Media Lab
- 2019

Using personal robots to help children learn can be more effective when the robots exhibit relational habits, such as mirroring a child’s pitch. This offers a potentially positive narrative of children interacting with personal robots and artificial intelligence.
Should we create robots that can act as peers to younger children? Are there drawbacks, such as isolation from true human contact, that may arise from building robot peers that so effectively mirror human traits?