All Narratives (328)
Find narratives by ethical themes or by technologies.
Artificial Intelligence and Disability
- 51 min
- TechCrunch
- 2020
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and societal models of disability: the medical lens tends to spur technologies focused on remedying an individual’s disability, whereas the societal lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labelled as inherently “normative” since they are trained on data that comes from a biased society, and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology which can actually put accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be forced to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
No Google-Fitbit merger without human rights remedies, says Amnesty to EU
- 5 min
- TechCrunch
- 2020
During Google’s attempt to merge with the company Fitbit, the NGO Amnesty International warned competition regulators in the EU that such a move would be detrimental to privacy. Given Google’s history of malpractice with user data, and since its status as a tech monopoly allows it to mine data from several different avenues of a user’s life, adding wearable health-based tech to this equation puts the privacy and rights of users at risk. The article calls for scrutiny of the “surveillance capitalism” employed by tech giants.
Considering how companies and advertisers may use the data, what sorts of personal statistics related to health and well-being should and should not be collected by mobile computing devices? How can a device originally built to stand on its own as a single technological artifact become more convenient, or more harmful, to a user once it becomes part of a larger technological architecture?
The Ethics of Rebooting the Dead
- 5 min
- Wired
- 2020
As digital means of preserving deceased loved ones become more and more plausible, it is critical to consider the implications of technologies which aim to capture and replicate the personality and traits of those who have passed. Not only might this change the natural process of grieving and healing, it may also have alarming consequences for the agency of the dead. For the corresponding Black Mirror episode discussed in the article, see the narratives “Martha and Ash Parts I and II.”
Should anyone be allowed to use digital resurrection technologies if they feel doing so may help them cope? With all the data points that exist for internet users in this day and age, is it easier than ever to create versions of deceased people which are uncannily similar to their real identities? What would be missing from such an abstraction? How is a person’s identity kept uniform or recognizable if they are digitally resurrected?
Thought Experiment in Technological Environmentalism
- 7 min
- Kinolab
- 2017
Downsizing, a procedure which shrinks people down to only a few inches tall, is invented to combat environmental harm by ensuring that humans produce far less waste.
Should the technology to edit human bodies be created and used if it means saving the planet? How would consent factor into this? Are there other workable technologies for aiding the environment, for instance by ameliorating climate change? Should technological innovation be directed toward first identifying problems and then pursuing the most radical solution possible?
Removed from Reality
- 5 min
- Kinolab
- 2013
Actress Robin Wright plays a fictionalized version of herself who traverses both the real world and the mixed reality of Abrahama city in this narrative. As Miramount Studio animator Dylan explains to her, the rules of the mixed reality allow people to appear as whatever avatar they please, editing their human features into more imaginative ones. With this capability, many people choose to remain in the mixed reality permanently, leaving the real world in a grim stupor.
Who has a responsibility to ensure that mixed and virtual realities do not become so tantalizing that humans abandon the responsibility of caring for the real world? How can addiction to digital realities be ameliorated? What issues of identity and self-presentation arise from the capability to appear however one pleases? In what ways is this empowering, and in what ways is it dangerous?
How AI can help shatter barriers to equality
- 6 min
- TED
- 2020
Jamila Gordon, an AI activist and the CEO and founder of Lumachain, tells her story as a refugee from Ethiopia to illuminate the great strokes of luck that eventually brought her to her important position in the global tech industry. She makes a strong case for introducing AI into the workplace: computer vision approaches can lead to greater safety, and machine learning can help workers who do not speak the dominant language of their workplace or culture train and acclimate more effectively.
Would constant computer vision surveillance of a workplace be ultimately positive, negative, or both? How could it be ensured that machine learning algorithms are used only as positive forces in a workplace? What responsibility do large companies have to help those in less privileged countries gain digital fluency?