Natural Language Interfaces (15)
Find narratives by ethical themes or by technologies.
- 7 min
- The New York Times
- 2019
Stanford Team Aims at Alexa and Siri With a Privacy-Minded Alternative
A Stanford team develops a neutral, “Switzerland-like” alternative to systems that use human language to control computers, smartphones, and internet devices in homes and offices. They hope to make the software, known as Almond, free to use on devices, with a particular focus on protecting user privacy and enabling a deeper understanding of natural language.
Had you heard of Almond before reading this narrative? If not, why do you think this was the case? Why might people be more willing to use the less private, corporate voice assistants than a more obscure, decentralized assistant?
- 5 min
- Wired
- 2015
Siri and Cortana Sound Like Ladies Because of Sexism
Gender bias is often embedded, consciously or subconsciously, into the design and performance of virtual voice assistants, with little consideration of the science surrounding linguistics or gender.
What are the consequences of not addressing such gender bias as virtual voice assistants become more and more “human”? How has the profit motive played a role in this type of gender bias?
- 3 min
- Kinolab
- 2019
Digital Duplicates and Friendship
Rachel, a fifteen-year-old fan of the pop star Ashley O, is gifted an Ashley Too doll for her birthday. Ashley Too is a robot that contains a synaptic snapshot of Ashley O, and thus emulates her personality and can carry on a conversation with the doll’s owner.
How can robots and devices such as the Ashley Too doll help children cope with grief or loneliness? How can it be ensured that children branch out in their connections beyond such robots? What are the issues present with modeling artificial companions after real-life public figures?
- 13 min
- Kinolab
- 2001
Relationships and Love with Robotic Children
In an imagined 22nd century in which climate change has wreaked havoc on the Earth, scientists have created “Mechas,” or humanoid robots. One group of scientists dedicates itself to creating a robot capable of love and of having dreams. David, one of these new robots, is tested with Monica, a mother whose son is in a coma after contracting a mysterious disease.
Do humans have the capacity to love robots back as much as a robot may love them? Is the creation of robotic children a valid way to help former or prospective parents through a grieving process? What are the implications of a robot outliving those that they may love? Is the view of robots as “fake” or “disposable” compatible with their capability to love?
- 14 min
- Kinolab
- 2016
AI Memories and Self-Identification
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that added “reveries,” or slight fake memories, to the robots’ code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can sometimes lead robots to remember and “hold grudges” even after being ordered to erase their own memories, which can produce violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool when Ford uses it to murder Theresa.
Is ‘memory’ uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma/suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memory, such as uploading memories of a fake dead son into Bernard? How can androids which are programmed to follow the directions of one person be used for violent ends? If robots are programmed to not hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves human? Should humans create humanoid replicant robots that do not possess any inherently negative human traits, such as anxiety?
- 8 min
- Kinolab
- 2016
Maeve Part III: Robot Resistance and Empowerment
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. One of these hosts, Maeve, is programmed as a prostitute who runs the same narrative every single day with the same personality. After several incidents of becoming conscious of her previous iterations, Maeve is told by Lutz, a worker in the Westworld lab, that she is a robot whose design and thoughts are largely determined by humans, despite the fact that she feels and appears similar to humans like Lutz. Lutz helps Maeve in her resistance against the tyrannical rule over robots by altering her core code, giving her access to capabilities previously unavailable to other hosts, such as the ability to harm humans and to control other robotic hosts.
Should robots be given a fighting chance to resemble humans, especially in fighting for their own autonomy? Should robots ever be left in charge of other robots? How could this promote a tribalism that is dangerous to humans? Can robots develop their own personality, or does everything simply come down to coding, and which way is “better”?