AI (124)
Find narratives by ethical themes or by technologies.
-
- 6 min
- Wired
- 2019
The Toxic Potential of YouTube’s Feedback Loop
Harmful content spreads through YouTube's AI-driven recommendation algorithm, which helps create filter bubbles and echo chambers while giving users little agency over the content they are exposed to. (A toy sketch of this feedback loop follows the entry.)
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
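To make the feedback loop described above concrete, here is a toy simulation, emphatically not YouTube's actual system: a greedy recommender that always shows whichever topic has the best observed click-through rate. The two topics, the click probabilities, and every other number are invented for illustration.

```python
# A minimal toy simulation of how an engagement-maximizing recommender
# can collapse into a filter bubble. All numbers are illustrative.
import random

random.seed(42)

TOPICS = ["mainstream", "fringe"]
# Assumed ground truth: the user clicks "fringe" slightly more often.
true_click_prob = {"mainstream": 0.50, "fringe": 0.55}

clicks = {t: 1 for t in TOPICS}  # click counts (smoothed)
shows = {t: 2 for t in TOPICS}   # impression counts (smoothed)

history = []
for step in range(5000):
    # Greedy policy: always show the topic with the best observed
    # click-through rate -- no exploration, no user agency.
    topic = max(TOPICS, key=lambda t: clicks[t] / shows[t])
    shows[topic] += 1
    if random.random() < true_click_prob[topic]:
        clicks[topic] += 1
    history.append(topic)

last_1000 = history[-1000:]
for t in TOPICS:
    share = last_1000.count(t) / len(last_1000)
    print(f"{t}: {share:.0%} of the last 1000 recommendations")
# A tiny initial preference gets amplified until one topic dominates
# the feed: the feedback loop, not the user, decides what is shown.
```

Even in this stripped-down setting, a small difference in engagement is enough for the greedy policy to lock onto a single topic, which is the dynamic the article worries about at platform scale.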
-
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The story of Dr. Timnit Gebru's termination from Google is inextricably bound up with Google's irresponsible training-data practices for its machine learning models. Training natural language processing algorithms on enormous data sets is ultimately a harmful practice: for all the environmental costs it incurs and the biases against certain languages it encodes, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meanings of human language? How do particular understandings of what languages and words mean and represent become harmful when only a small group of people decides how NLP algorithms are trained? How do tech monopolies keep more diverse voices out of this conversation?
-
- 5 min
- Wired
- 2021
These Doctors Are Using AI to Screen for Breast Cancer
A computer vision algorithm created by an MIT PhD student and trained on a large, multi-year data set of mammogram images shows potential for use in radiology. The algorithm appears to identify breast cancer risk more reliably than older statistical models by tagging the images with attributes that human eyes have missed, which would allow screening and treatment plans to be customized. (A generic sketch of this training pattern follows the entry.)
Do there seem to be any drawbacks to using this technology widely? How important is transparency of the algorithm in this case, as long as it seems to provide accurate results? How might this change the nature of doctor-patient relationships?
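For readers who want to see the general shape of such a system, below is a minimal sketch of training an image-based risk classifier. This is not the MIT group's actual architecture, data, or labels: the tiny CNN, the 128x128 input size, and the random stand-in tensors are all assumptions for illustration.

```python
# A generic sketch of the pattern described above: training an image-based
# risk model on mammograms. Shapes, labels, and the CNN are assumptions.
import torch
import torch.nn as nn

# Stand-in batch: 16 single-channel "mammograms", 128x128 pixels, with a
# binary label for whether cancer developed within some follow-up window.
images = torch.randn(16, 1, 128, 128)
labels = torch.randint(0, 2, (16,)).float()

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(4),                              # 128 -> 32
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),                             # one risk logit per image
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# At screening time, the sigmoid of the logit is read as a risk score,
# which could then drive how often a given patient is screened.
risk = torch.sigmoid(model(images[:1])).item()
print(f"predicted risk: {risk:.2f}")
```

The discussion questions about transparency apply directly here: nothing in this pipeline explains *why* a given image received a high risk score.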
-
- 3 min
- CNN
- 2021
Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
The abundance of social data about any given person contained in digital artifacts, such as social media posts and text messages, can be used to train an algorithm newly patented by Microsoft to create a chatbot that imitates that specific person. The technology has not been released, however, owing to the harrowing ethical implications of impersonation and the dissonance it could cause. For the Black Mirror episode referenced in the article, see the narratives "Martha and Ash Parts I and II." (A deliberately simple sketch of the style-imitation idea follows the entry.)
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?
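To give a feel for the underlying idea, here is a deliberately simple sketch of fitting a model to one person's message history so it can generate text in their style. A real system like the one patented would use a large neural language model; this bigram chain and its invented sample messages are toy stand-ins.

```python
# A toy stand-in for "train a model on someone's messages so it can talk
# like them". Real systems use large language models; the sample messages
# below are invented.
import random
from collections import defaultdict

messages = [
    "honestly that movie was brilliant",
    "running late again sorry",
    "that was brilliant honestly",
]

# Build a bigram table: word -> possible next words in this person's texts.
chain = defaultdict(list)
for msg in messages:
    words = msg.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def imitate(start: str, length: int = 6) -> str:
    """Generate text by walking this person's own word transitions."""
    out = [start]
    for _ in range(length):
        nxt = chain.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(imitate("honestly"))
# Even this toy picks up idiosyncratic phrasing; with enough data, a large
# model can mimic someone far more convincingly -- which is the concern.
```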
-
- 7 min
- CNN
- 2021
South Korea has used AI to bring a dead superstar’s voice back to the stage, but ethical concerns abound
The South Korean company Supertone has created a machine learning algorithm that replicates the voice of beloved singer Kim Kwang-seok, allowing a new single to be performed in his voice even after his death. However, ethical questions such as who owns artwork created by AI and how to prevent fraud ought to be addressed before such technology is used more widely.
How can synthetic media change the legacy of a certain person? Who do you believe should gain ownership of works created by AI? What factors does this depend upon? How might the music industry be changed by such AI? How could human singers compete with artificial ones if AI concerts became the norm?
-
- 7 min
- Venture Beat
- 2021
Center for Applied Data Ethics suggests treating AI like a bureaucracy
As machine learning algorithms become more deeply embedded at every level of society, including government, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly when they are trained on biased data. Historical and anthropological lenses are helpful for dissecting AI systems: how they model the world, and which perspectives might be missing from their construction and operation. (A small fairness-audit sketch follows the entry.)
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?
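As one concrete starting point for the question of how large data sets should be analyzed for bias, here is a minimal sketch of a demographic-parity audit over a model's decisions. The records, groups, and "approved" outcomes are invented, and demographic parity is only one of many fairness criteria.

```python
# A minimal sketch of one way to begin scrutinizing decisions for bias:
# compare a model's approval rates across groups. All records are invented.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = {}
for g in {d["group"] for d in decisions}:
    subset = [d for d in decisions if d["group"] == g]
    rates[g] = sum(d["approved"] for d in subset) / len(subset)

for g, r in sorted(rates.items()):
    print(f"group {g}: approval rate {r:.0%}")

# Demographic parity gap: a large gap flags the system for human review,
# the kind of bureaucratic check the article argues AI needs.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.0%}")
```

An audit like this cannot say *why* the gap exists; answering that requires exactly the historical and humanities perspectives the article calls for.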