Themes (326)
Find narratives by ethical themes or by technologies.
- 10 min
- Kinolab
- 2017
Robot Expendability and Labor
In the year 2049, humanoid robots known as “replicants” work as slave laborers in various space colonies for humankind. “Blade Runners,” like K shown here, are specialized police officers tasked with tracking down and killing escaped robots. Over the years, replicant models have grown more advanced and human-like, which is one reason K, one of the newest models, is sent to kill the farmer, an older model. The ultimate goal of corporate villain CEO Niander Wallace is to create replicants that can reproduce exactly as humans can, essentially becoming an infinite source of labor. He sees the newest “Angel” model as the key to this.
If robots are created to essentially live human lives, can they simply be destroyed once their model is outdated and something newer comes along? Are AI entitled to compensation and reward for any labor they complete, especially if they experience sensations in a way similar to humans? If AI are minding their own business and not harming anyone, do they need to be eliminated? Who can prevent corporations from using humanoid robots as unpaid laborers, and how? Should robots ever be forced to destroy their own kind?
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. Training natural language processing algorithms on enormous data sets is ultimately a harmful practice: for all the environmental damage and the bias against certain languages it causes, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
- 6 min
- Vox
- 2020
How Virtual Reality Tricks Your Brain
Even virtual realities with unrealistic yet believable graphics can fool the brain’s sense of perception into believing that the digital environment operates under the same rules as the real world. Connecting the technology directly to one’s senses is far more immersive than looking at a screen: although human brains have long been able to process flat images, the direct connection of the eyes to two separate screens in a virtual reality headset muddles perception further.
Should virtual reality ever reach a point where it is indistinguishable from true reality in terms of graphic design or other sensory information? How could such technology be weaponized or abused? How accessible should the most immersive virtual reality technologies be to the general public?
- 7 min
- Kinolab
- 2016
Relationships and Escapism with AI
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as “hosts,” which are designed in a lab and constantly updated to seem as real and organic as possible. Dolores, one of these hosts, begins to fall in love with William, a human visitor, and he reciprocates her feelings as he expresses his unhappiness with the marriage awaiting him in the real world outside the park. Although Dolores is initially angry, she nonetheless rejoins forces with William to search for a place beyond the theme-park Western reality she has always known.
Is William’s love for Dolores ‘true’ love, or is it impossible for a human to truly love an AI and vice versa? If AI are programmed to feel emotions, can their love be equally as real as human love? What issues may arise if robots become a means through which humans escape their real life problems and complicated relationships? What are the potential consequences for both robots and people if robots escape the scenario for which they were specifically engineered, and try to live a life in the real world? Should this be allowed?