Technologies (319)
Find narratives by ethical themes or by technologies.
-
- 5 min
- Wired
- 2021
Don’t End Up on This Artificial Intelligence Hall of Shame
This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which deployed machine learning algorithms did not function as intended or caused real-world harm. The database is meant to operate much like aviation safety reporting programs: with it, technological developers can get a sense of how to make algorithms that are safer and fairer, while the incentive to stay off the list encourages them to take precautions.
What is your opinion on this method of accountability? Is there anything it does not take into account? Is it possible that some machine learning algorithms make mistakes that cannot even be detected by humans? How can this be avoided? How can the inner workings of machine learning algorithms be made more understandable and digestible by the general public?
-
- 10 min
- The New York Times
- 2021
Virtual Reality Aids in Exposure Therapy
This article tells the story of Chris Merkle, a former U.S. Marine who was able to work through traumatic memories and PTSD using virtual environments modeled on his lived experiences in war as a form of exposure therapy. As virtual reality headsets become more affordable and widely available, and as experts and universities develop more sophisticated virtual and augmented reality technologies, exposure therapy through VR stands to become far more widespread, with the potential to help civilians with disorders and traumas as well as veterans.
How can it be ensured that this type of therapy is accessible to all people? How can it be ensured that this type of therapy does not interfere with other forms of therapy or treatment? Should this become the norm for treating mental health disorders? How might this alter people’s perceptions of reality, for better or for worse?
-
- 41 min
- The New York Times
- 2021
Sexism and Racism in Silicon Valley
In this podcast episode, Ellen Pao, an early whistleblower on gender bias and racial discrimination in the tech industry, tells the story of her experience suing the venture capital firm Kleiner Perkins for gender discrimination. The episode then moves into a discussion of how Silicon Valley, and the tech industry more broadly, is dominated by white men who do not try to deeply understand or move toward racial or gender equity, focusing instead on PR moves. Specifically, she observes that social media companies and CEOs can be particularly performative when it comes to addressing racial or gender inequality, pointing to isolated case studies rather than building a new, fairer culture.
How did Silicon Valley and the technology industry come to be dominated by white men? How can this be addressed, and how can the culture change? How can social networks in particular be re-imagined to open up doors to more diverse leadership and workplace cultures?
-
- 9 min
- Kinolab
- 1995
Self-Sustaining Programs
In this world, a human consciousness (“ghost”) can inhabit an artificial body (“shell”), producing an augmented human in a largely robotic body. The Puppet Master, a notorious villain in this world, is revealed to be not a human hacker but a computer program that has gained sentience and gone on to hack the captured shell. It challenges the law enforcement officials of Section 6 and Section 9, claiming that it is a life-form and not an AI: its existence as a self-sustaining program that has achieved singularity, it argues, is no different from human DNA, itself a “self-sustaining program.” The Puppet Master specifically cites reproduction and offspring, as opposed to mere copying, as the feature that distinguishes living things from nonliving ones. It also developed an emotional connection with Major, which led it to select her as a candidate for merging; it notes that although it can die, it will live on through the merging and, after Major’s death, on the internet.
Do you agree with the Puppet Master’s argument that self-sustaining programs are conceptually the same as human DNA? Why or why not? Has the externalization of memory made it far more possible for robots to achieve singularity and exist as human-like figures in the world? Is memory the sole feature that helps humans build their identities? List all the comparisons made in this narrative between self-sustaining programs and human genetics and existence.
-
- 5 min
- Gizmodo
- 2020
You Need to Opt Out of Amazon Sidewalk
This article describes the new Amazon Sidewalk feature and explains why users should not buy into the service. Essentially, the feature uses the internet of things created by Amazon devices such as the Echo or Ring camera to form a secondary network connecting nearby homes that also contain these devices, sustained by each home “donating” a small amount of broadband. The article argues that this is a dangerous concept because the shared network may be susceptible to hackers, putting a large number of users at risk.
Why are “secondary networks” like the one described here a bad idea in terms of both surveillance and data privacy? Is it possible for the world to be too networked? How can tech developers make sure the general public has a healthy skepticism toward new devices? Or is it ultimately Amazon’s job to think through the ethical implications of this secondary network before introducing it for profit?
-
- 4 min
- Kinolab
- 1995
Identity Through Memory and Data
In this world, a human consciousness (“ghost”) can inhabit an artificial body (“shell”), producing an augmented human in a largely robotic body. Major, a security officer, witnesses a garbage man’s grief upon learning that his ghost has been hacked and filled with false memories of a family; this sets up her own reflections on self-identity later in the film, especially as she begins to suspect that she may be entirely a cyborg with no knowledge of such an existence. Essentially, because the human body has become so thoroughly and routinely augmented with cybernetic parts and even computer brains, defining a real “human” becomes harder and harder.
If robots develop to the point where they can question their own existence as human, does the line between robot and human truly matter? For what reason? Is questioning human existence a fundamentally human trait? Can fake memories contribute to an identity as much as real ones? Is this a dangerous concept, or might it have positive utility? Do you agree with the assessment that “all data is just fantasy,” or an inaccurate abstraction of real life? What kinds of data, then, make up the human identity?