All Narratives (355)
Find narratives by ethical themes or by technologies.
-
- 5 min
- Vice
- 2020
Robotic Beasts, Wildlife Control, and Environmental Impact
Robot researchers in Japan have recently begun using robotic “monster wolves” to help control wildlife populations by keeping animals out of human settlements and agricultural areas. These robots are of interest to robot engineers working in environmentalism: although the process of engineering a robot does not itself help the environment, the ultimate good accomplished by robots that help control wildlife populations may outweigh this cost.
What are all the ways, aside from those mentioned in the article, in which robots and robotics could be utilised in environmentalist and conservationist causes? How could robots meant to tell wildlife where not to travel be misused?
-
- 8 min
- Kinolab
- 1984
Robotic Impostors
Rotwang, a reclusive inventor, builds a robot to replace his love Hel, whom he lost to Joh Frederson. He claims that it has everything it needs to replace her except a soul. Joh Frederson takes advantage of the robot’s design as an artificial companion, having it imitate Maria’s likeness and essentially creating a copy of her. The purpose is to infiltrate the working class and use Maria, whom the workers admire, as a tool to further Joh Frederson’s agenda of suppressing a workers’ uprising. The workers have unknowingly placed so much trust in the robot version of Maria that they refuse to listen to Grot as a fellow worker, destroying the Heart Machine as Joh intended.
How can robots, even those without weapons, be used to stifle dissent and rebellion? What are the consequences of making robots in the likeness of loved ones or admirable figures? How can this be used to trick people without their knowledge? Should robots ever be able to imitate real people, especially if it is hard to give them a “soul”? What is a soul?
-
- 5 min
- Wired
- 2021
Don’t End Up on This Artificial Intelligence Hall of Shame
This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which applied machine learning algorithms did not function as intended or caused real-world harm. The goal is to operate similarly to air travel safety reporting programs: with this database, technology developers can get a sense of how to make algorithms that are safer and fairer, while having an incentive to take precautions to stay off the list.
What is your opinion on this method of accountability? Is there anything it does not take into account? Is it possible that some machine learning algorithms make mistakes that cannot even be detected by humans? How can this be avoided? How can the inner workings of machine learning algorithms be made more understandable and digestible by the general public?
-
- 10 min
- The New York Times
- 2021
Virtual Reality Aids in Exposure Therapy
This article tells the story of Chris Merkle, a former U.S. Marine who was able to work through traumatic memories and PTSD by using virtual realities resembling his lived experiences of war as a form of exposure therapy. As virtual reality headsets become more affordable and commercialized, and as experts and universities develop more impressive virtual and augmented reality technologies, the opportunities for exposure therapy through VR technology become far more widespread, with the potential to help treat civilian disorders and traumas as well as those of veterans.
How can it be ensured that this type of therapy is accessible to all people? How can it be ensured that this type of therapy does not interfere with other forms of therapy or treatment? Should this become the norm for treating mental health disorders? How might this alter people’s perceptions of reality, for better or for worse?
-
- 41 min
- The New York Times
- 2021
Sexism and Racism in Silicon Valley
In this podcast episode, Ellen Pao, an early whistleblower on gender bias and racial discrimination in the tech industry, tells the story of her experience suing the venture capital firm Kleiner Perkins for gender discrimination. The episode then moves into a discussion of how Silicon Valley, and the tech industry more broadly, is dominated by white men who do not try to deeply understand or move toward racial or gender equity; instead, they focus on PR moves. Specifically, she reveals that social media companies and CEOs can be particularly performative when it comes to addressing racial or gender inequality, focusing on case studies rather than fostering a new, fairer culture.
How did Silicon Valley and the technology industry come to be dominated by white men? How can this be addressed, and how can the culture change? How can social networks in particular be re-imagined to open up doors to more diverse leadership and workplace cultures?
-
- 12 min
- Kinolab
- 1973
Simulated Humans and Virtual Realities
Simulacron is a virtual reality populated by 10,000 simulated humans who believe themselves to be sentient but are actually nothing more than programs. The identity units in Simulacron do not know or understand that they are artificial beings, and they behave as though they are real humans. “Real” humans can enter this virtual reality through a brain-computer interface and control the virtual identity units. Christopher Nobody, a suspect whom Fred is trying to track down, had the revelation that he was an identity unit, and that realization led to a mental breakdown. While following this case, Fred meets Einstein, a virtual unit who desires to join the real world. As Einstein enacts the final stages of this plan, Fred discovers a shocking secret about his own identity. For a similar concept, see the narrative “Online Dating Algorithms” on the Hang the DJ episode of Black Mirror.
What purposes can virtual reality “laboratories” full of simulated humans serve in research fields such as sociology? Is it justifiable to create programs that believe themselves to be sentient humans, yet deny them access to the “real world”? How can the mental health of AIs be supported, especially when it comes to existential crises like the one Fred experiences?