Ways in which Artificial Intelligence can assist those with mental disabilities or provide companionship
Artificial Companionship and Therapy (13)
- 5 min
- Silicon Angle
- 2019
Empathic AI mirrors human emotions to help autistic children
Artificial companions can assist developmentally disabled children, building on the principle that humans form genuine emotional connections with nonhuman objects. Robots can already read and mirror human emotions reasonably well, which could have positive implications in workplace and educational settings.
Is it possible to develop an emotional connection with a robotic companion? How can robotic companionship improve behaviour, and how does it compare to human company?
-
- 5 min
- MIT Media Lab
- 2019
Kids’ relationships and learning with social robots
Personal robots can help children learn more effectively when they exhibit relational behaviours, such as mirroring a child’s pitch. A potentially positive narrative of children and personal robots/artificial intelligence.
Should we create robots that can act as peers to younger children? Are there drawbacks, such as isolation from true human contact, that may arise from building robot peers that so effectively mirror human traits?
-
- 5 min
- Loeb Classical Library, Harvard University Press
- 1916
Pygmalion and his Ivory Maid: Ovid’s Metamorphoses, Book X, lines 243-297
A brief excerpt on Pygmalion’s love for his ivory maiden, which can be compared to the human creation of robots for companionship.
How does Pygmalion’s love for his ivory maiden resemble the relationship some people have with advanced AI dolls or sex dolls? Can advanced AI dolls replace human companionship completely, or is there something inherently unique to human interaction? Venus gives life to Pygmalion’s maiden, so she could be argued to have a soul; when humans give ‘life’ to advanced AI, is that similar to or different from the life Venus grants? What does it mean to have consciousness if you can act and perform like a human in all capacities?
-
- 14 min
- Kinolab
- 2014
Liberty, Autonomy, and Desires of Humanoid Robots
Caleb, a programmer at a large company, is invited by his boss Nathan to test a robot named Ava. During one session of the Turing Test, Ava fearfully interrogates Caleb about what her fate will be if the test deems her insufficiently capable or human. Caleb struggles to deliver an honest answer, especially since Ava displays attachment toward him, a sentiment he returns. After Caleb discovers that Nathan intends to essentially kill Ava, he loops her into his escape plan, offering her freedom and the chance to live a human life. Once Nathan is killed, Ava goes to his robotics repository and gives herself a new, humanlike physical appearance. She then permanently traps Caleb, the only remaining person who knows she is an android, in Nathan’s compound before escaping to live a human life in the real world.
What rights to freedom should AI have? Should sentient AI beings be left at the mercy of their creators? What are the consequences of machines being able to detect and expose lies? Is emotional attachment to AI a valid form of love? What threat could well-disguised, hyper-intelligent AI pose to humanity? If no one knows or can tell the difference, does that matter?
-
- 9 min
- Kinolab
- 2016
Humanity and Consciousness of Humanoid Robots
Westworld, a western-themed amusement park, is populated by realistic robotic “hosts,” designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at Westworld, performs a diagnostic test on the host Dolores, in which she analyses a passage from Alice in Wonderland before asking Bernard about his son. Later, the park director, Dr. Ford, tells Bernard about his former partner Arnold, who wished to program consciousness into the robots, essentially using their code as a means to spur their own thoughts. This plan was ultimately abandoned so that the hosts could perform their narratives in service of the guests and their desires.
Can humans and AI develop significant, intimate relationships? If AI can learn the rules of human interaction indistinguishably well, does it matter that we cannot tell the difference between AI and human? What does it mean to have ‘consciousness’? Is programmed consciousness significantly different from biological consciousness? Is it morally acceptable to create humanoid robots with no choice in their own consciousness, all in the name of making humans feel more powerful?
-
- 5 min
- Kinolab
- 2016
Expendability vs. Emotional Connection in Humanoid Robots
Westworld, a western-themed amusement park, is populated by realistic robotic “hosts,” designed in a lab and constantly updated to seem as real and organic as possible. One of the hosts, Dolores, recounts to the human guest William a memory of old times with her father and her attachment to cattle lost over the years. Directly afterward, she has a flashback to a former iteration of herself, which was killed in another narrative before being restored by the lab team. Later in the narrative that William and his sadistic future brother-in-law Logan follow, Logan reveals his darker nature by shooting one of the hosts and telling Dolores that she is a robot, a choice that disgusts William. Logan argues that his actions carry no moral weight because this is a fake world full of fake people.
Can AI be programmed to feel that it is ‘human’? If AI can form attachments to things or people through programming, is that attachment “real”? What ethical questions arise from how humans treat advanced AI? Does human morality apply to ‘virtual’ experiences and games? Do human actions in digital realms, or against digital beings, reveal something about their humanity? Is it important to program consequences into digital environments, even if they come at the expense of total “freedom” for users?