News Article (130)
South Korea has used AI to bring a dead superstar’s voice back to the stage, but ethical concerns abound
- 7 min
- CNN
- 2021
The South Korean company Supertone has created a machine learning algorithm that can replicate the voice of beloved singer Kim Kwang-seok, allowing a new single to be performed in his voice even after his death. However, ethical questions such as who owns artwork created by AI and how to avoid fraud ought to be addressed before such technology is used more widely.
How can synthetic media change the legacy of a certain person? Who do you believe should gain ownership of works created by AI? What factors does this depend upon? How might the music industry be changed by such AI? How could human singers compete with artificial ones if AI concerts became the norm?
-
Center for Applied Data Ethics suggests treating AI like a bureaucracy
- 7 min
- Venture Beat
- 2021
As machine learning algorithms become more deeply embedded in all levels of society, including governments, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly when they are built on biased data. Historical and anthropological lenses are helpful for dissecting AI systems: how they model the world, and which perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?
-
The Second Act of Social Media Activism
- 10 min
- The New Yorker
- 2020
This article contextualizes the BLM uprisings of 2020 within a larger trend of using social media and other digital platforms to promote activist causes, and compares the benefits of in-person, on-the-ground activism with those of activism conducted through social media.
How should activism in its in-person and online forms be mediated? How does someone become an authority, for information or otherwise, on the internet? What are the benefits and detriments of the decentralization of organization afforded by social media activism?
-
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
- 10 min
- The Washington Post
- 2021
The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending several years studying the humanities and realizing that those perspectives were missing from the fields of computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off to outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
Facebook and all of its apps go down simultaneously.
- 7 min
- The New York Times
- 2021
On October 4th, 2021, Facebook’s servers experienced an outage that left its apps, including the commonly used Facebook, Instagram, and WhatsApp, out of commission for several hours. The problem is said to have been caused by an incorrect configuration of Facebook’s servers, which ultimately led to a Domain Name System error that left the numerical IP addresses behind Facebook’s domains unreachable. The effects of the outage spread across the globe as businesses were affected by the loss of access to these social networks, and certain other internet services linked to Facebook also became inaccessible.
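As a rough, hypothetical illustration of the failure mode described above (a minimal sketch, not a description of Facebook’s actual infrastructure): when a domain’s DNS records cannot be resolved, an application never learns which numerical IP address to connect to, so every service behind that domain appears to vanish at once.

```python
# Minimal sketch: what a DNS resolution failure looks like to an application.
# The domain name and port below are only illustrative.
import socket

def resolve(domain: str) -> list[str]:
    """Ask the system's DNS resolver for the IP addresses behind `domain`."""
    try:
        results = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
        # Each result tuple ends with a (address, port, ...) sockaddr; keep the address.
        return sorted({result[4][0] for result in results})
    except socket.gaierror as err:
        # During an outage like the one described above, lookups end up here:
        # no name server answers, so no IP address can be obtained.
        print(f"DNS lookup for {domain!r} failed: {err}")
        return []

if __name__ == "__main__":
    print(resolve("facebook.com"))  # normally prints one or more IP addresses
```

The sketch shows only the client-side symptom; per the article, the root cause was a configuration problem on Facebook’s side.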
What are the dangers of relying on fallible networks to perform essential functions such as business? How can network infrastructure be more protected? How much data and information should Facebook be trusted with?
-
Artificial Intelligence Is a House Divided
- 7 min
- Chronicle
- 2021
The history of AI has swung back and forth, like a pendulum, between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the human brain.
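To make the contrast concrete, here is a deliberately toy sketch (my own illustration, not drawn from the article): a symbolic system states the rule for logical AND directly as inspectable code, while a single perceptron, the simplest neural unit, learns an equivalent rule from labelled examples.

```python
# Toy contrast between the two approaches, using logical AND as the task.

# Symbolic AI: the knowledge is an explicit, human-readable rule.
def symbolic_and(a: int, b: int) -> int:
    return 1 if a == 1 and b == 1 else 0

# Neural approach: a single perceptron learns the same behaviour from examples.
import random
random.seed(0)  # for reproducibility of the illustration

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w1, w2, bias = random.random(), random.random(), random.random()
for _ in range(100):                        # perceptron learning rule
    for (x1, x2), target in data:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output
        w1 += 0.1 * error * x1
        w2 += 0.1 * error * x2
        bias += 0.1 * error

def neural_and(x1: int, x2: int) -> int:
    # The learned "knowledge" is just three numbers: w1, w2, and bias.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, symbolic_and(*inputs), neural_and(*inputs))  # both agree
```

The symbolic rule can be read and audited directly, while the neural version’s behaviour has to be probed empirically, which is one reason the transparency question below is worth asking.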
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?