Machine Learning (84)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Tech Review
- 2020
AI Summarisation
Semantic Scholar is a new AI program trained to read scientific papers and generate a one-sentence summary of each paper’s content. The AI was trained on a large dataset focused on processing and summarising natural language. The ultimate aim is to use the technology to help learning and synthesis happen more quickly, especially for figures such as politicians.
How might this technology cause people to become lazy readers? How does this technology, like many other digital technologies, shorten attention spans? How can it be ensured that algorithms like this do not leave out critical information?
-
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
The academic Philip Agre, a computer scientist by training, spent several years studying the humanities and realized that their perspectives were missing from computer science and artificial intelligence. He went on to write several papers warning about the impacts of unfair AI and of data barons, published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
- 7 min
- CNN
- 2021
South Korea has used AI to bring a dead superstar’s voice back to the stage, but ethical concerns abound
The South Korean company Supertone has created a machine learning algorithm able to replicate the voice of the beloved singer Kim Kwang-seok, allowing a new single to be performed in his voice even after his death. However, ethical questions, such as who owns artwork created by AI and how to prevent fraud, ought to be addressed before such technology is used more widely.
How can synthetic media change the legacy of a certain person? Who do you believe should gain ownership of works created by AI? What factors does this depend upon? How might the music industry be changed by such AI? How could human singers compete with artificial ones if AI concerts became the norm?
-
- 9 min
- Kinolab
- 2013
Dangers of Digital Commodification
In the world of this film, Robin Wright plays a fictional version of herself who has allowed the film company Miramount Studios to digitize her likeness so that she can be entered into films without actually acting in them, becoming digitally immortal in a sense. Once she enters a hallucinogenic mixed reality known as Abrahama City, she renews her contract with Miramount Studios in a panic over her declining mental health and sense of autonomy. The renewed contract not only allows movies starring her digital likeness to be made, but also allows other people to appear as her.
When mixed realities make any sort of appearance possible, how do people keep agency over their own likenesses and identities? How can engineers ensure that common human fears, including the fear of aging, do not drive innovations that will ultimately do more harm than good? Should anyone be allowed to give consent for their likeness to be used in any way the new owner sees fit, given how easily people can be coerced, manipulated, or gaslit? How could economic imbalances be further entrenched or established if certain people are allowed to sell their identities or likenesses?
-
- 7 min
- VentureBeat
- 2021
GPT-3: We’re at the very beginning of a new app ecosystem
The GPT-3 natural language processing model, created by the company OpenAI and released in 2020, is the most powerful of its kind, using a generalized training approach to mirror human speech. The potential applications of such a powerful program are manifold, but this potential also means that many tech monopolies may enter an “arms race” to build the most powerful model possible.
Should AI be able to imitate human speech unchecked? Should humans be trained to be able to tell when speech or text might be produced by a machine? How might Natural Language Processing cheapen human writing and writing jobs?
-
- 30 min
- CNET, New York Times, Gizmodo
- 2023
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified in front of Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
After watching the CNET video of the top moments from the hearing, read the Gizmodo overview of the hearing, then the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that create AI products accountable, especially given the absence of fully effective congressional action on social media companies. While misinformation and deepfakes have been concerns among politicians since the advent of social media, new concerns, such as a fresh wave of job losses and how to credit artists, are also raised in the hearing.
If you were in the position of the congresspeople in the hearing, what questions would you ask Sam Altman? Does Sam Altman put too much of the onus of ethical regulation on the government? How would the “license” approach apply to AI companies that already exist/have released popular products? Do you believe Congress might still be able to “meet the moment” on AI?