Natural Language Interfaces (15)
Find narratives by ethical themes or by technologies.
-
- 12 min
- Kinolab
- 1965
Supercomputer Rule and Condensing Human Behavior
The city of Alphaville is under the complete rule of Alpha-60, an omnipresent supercomputer whose knowledge is vaster than that of any human. The machine, whose learning and knowledge model is deemed “too complex for human understanding,” cements its rule by effectively outlawing emotion in Alphaville, where all definitions of consciousness center on rationality. Words expressing curiosity or emotion are erased from human access; asking “why” is replaced by saying “because.” Lemmy is a detective who has entered Alphaville from an external land to destroy Alpha-60, but in their conversation Alpha-60 immediately susses out the suspicious aspects of Lemmy’s visit and character.
Can governing computers or machines ever be totally objective? Is such objectivity dangerous? Do humans need emotionality to define themselves and their societies, so that a purely rational definition does not hand rule of those societies over to computers? Can the actions of all humans throughout history be reduced to a pattern that computers can understand or manipulate? What are the implications of omnipresent technology in city settings?
-
- 5 min
- MIT Tech Review
- 2020
AI Summarisation
Semantic Scholar is a new AI program that has been trained to read scientific papers and produce a unique one-sentence summary of each paper’s content. The AI was trained on a large data set focused on learning how to process natural language and summarise it. The ultimate idea is to use technology to help learning and synthesis happen more quickly, especially for figures such as politicians.
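The article does not include code, but the underlying technique, abstractive summarisation with a pretrained language model, can be sketched with the open-source Hugging Face transformers library. This is a minimal illustration of the general idea, not Semantic Scholar’s actual system, and the sample abstract below is invented:

```python
# Illustrative sketch only: a generic abstractive summariser, not
# Semantic Scholar's own model.  Requires: pip install transformers torch
from transformers import pipeline

# Downloads a default pretrained summarisation model on first use.
summarizer = pipeline("summarization")

abstract = (
    "We present a method for compressing scientific papers into "
    "single-sentence summaries, trained on pairs of abstracts and "
    "author-written TLDR statements."
)

# Tight length limits nudge the model toward a one-sentence TLDR.
result = summarizer(abstract, max_length=30, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```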
How might this technology cause people to become lazy readers? How does this technology, like many other digital technologies, shorten attention spans? How can it be ensured that algorithms like this do not leave out critical information?
-
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. The piece argues that training Natural Language Processing algorithms on enormous data sets is ultimately a harmful practice: for all the environmental damage and the biases against certain languages it causes, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
-
- 7 min
- VentureBeat
- 2021
Salesforce researchers release framework to test NLP model robustness
New research and code released in early 2021 demonstrate that the training data for Natural Language Processing algorithms is not as robust as it could be. The project, Robustness Gym, allows researchers and computer scientists to approach training data with more scrutiny, organizing the data and testing the results of preliminary runs through an algorithm to see what can be improved and how.
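The article describes the goal, scrutinizing a model by organizing test data and checking what survives perturbation, without showing Robustness Gym’s actual interface. As a rough, hypothetical sketch of that idea (the helper names here are invented, not Robustness Gym’s API), the function below measures how often a model’s prediction survives a small typo-style perturbation:

```python
# Hypothetical sketch of robustness testing; not Robustness Gym's API.
from typing import Callable, List


def perturb_typo(text: str) -> str:
    """Swap two adjacent characters mid-string to simulate a typo."""
    if len(text) < 2:
        return text
    mid = len(text) // 2
    chars = list(text)
    chars[mid - 1], chars[mid] = chars[mid], chars[mid - 1]
    return "".join(chars)


def robustness_report(predict: Callable[[str], str], examples: List[str]) -> float:
    """Fraction of examples whose prediction is unchanged after perturbation."""
    stable = sum(predict(t) == predict(perturb_typo(t)) for t in examples)
    return stable / len(examples)


def toy_predict(text: str) -> str:
    """Toy stand-in for a real sentiment model."""
    return "positive" if "great" in text.lower() else "negative"


print(robustness_report(toy_predict, ["The movie was great!", "Terrible plot."]))
```

A real evaluation would swap an actual model in for `toy_predict` and use a whole suite of perturbations (typos, paraphrases, dialect shifts) rather than a single typo rule.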
What does “robustness” in a natural language processing algorithm mean to you? Should machines always be taught to automatically associate certain words or terms? What are the consequences of large corporations not using the most robust training data for their NLP algorithms?
-
- 7 min
- CNN
- 2021
South Korea has used AI to bring a dead superstar’s voice back to the stage, but ethical concerns abound
The South Korean company Supertone has created a machine learning algorithm that can replicate the voice of the beloved singer Kim Kwang-seok, allowing a new single to be performed in his voice even after his death. However, ethical questions such as who owns artwork created by AI and how to prevent fraud ought to be addressed before such technology is used more widely.
How can synthetic media change the legacy of a certain person? Who do you believe should gain ownership of works created by AI? What factors does this depend upon? How might the music industry be changed by such AI? How could human singers compete with artificial ones if AI concerts became the norm?
-
- 7 min
- VentureBeat
- 2021
GPT-3: We’re at the very beginning of a new app ecosystem
The GPT-3 Natural Language Processing model, created by the company OpenAI and released in 2020, is the most powerful of its kind, using a generalized approach to training its machine learning algorithm so that it mirrors human speech. The potential applications of such a powerful program are manifold, but that very potential means many tech monopolies may enter an “arms race” to build the most powerful model possible.
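For a sense of what building on this “app ecosystem” looked like in practice: at the time of the article, developers reached GPT-3 only through OpenAI’s hosted API. The sketch below uses the circa-2021 openai Python client and its Completion endpoint (the interface has since been revised), and assumes a valid API key is set in the environment:

```python
# Minimal sketch of calling GPT-3 with the circa-2021 OpenAI client.
# Requires: pip install openai, plus an OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# "davinci" was the largest GPT-3 engine available at the time.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write one sentence explaining what a natural language interface is:",
    max_tokens=60,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```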
Should AI be able to imitate human speech unchecked? Should humans be trained to be able to tell when speech or text might be produced by a machine? How might Natural Language Processing cheapen human writing and writing jobs?