AI (124)
Find narratives by ethical themes or by technologies.
AI Summarisation
- 5 min
- MIT Tech Review
- 2020

Semantic Scholar is a new AI program that has been trained to read through scientific papers and provide a unique one-sentence summary of each paper’s content. The AI was trained on a large dataset focused on processing and summarising natural language. The ultimate idea is to use technology to help learning and synthesis happen more quickly, especially for figures such as politicians.

How might this technology cause people to become lazy readers? How does this technology, like many other digital technologies, shorten attention spans? How can it be ensured that algorithms like this do not leave out critical information?
Facial Recognition Applications on College Campuses
- 7 min
- Wired
- 2020

After student members of the University of Miami Employee Student Alliance held a protest on campus, the University of Miami Police Department likely used facial recognition technology in conjunction with video surveillance cameras to track down nine students from the protest and summon them to a meeting with the dean. This incident opened a discussion of the fairness of facial recognition programs, and of students’ belief that they should not be deployed on college campuses.

How can facial recognition algorithms interfere with people’s right to protest? When it comes to facial recognition databases, are larger photo repositories better or worse? Do facial recognition and video surveillance have a place on college campuses? How do facial recognition and video surveillance embolden people in power more generally?
Don’t End Up on This Artificial Intelligence Hall of Shame
- 5 min
- Wired
- 2021

This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which applied machine learning algorithms did not function as intended or caused real-world harm. The goal is to operate much like air travel safety reporting programs: with this database, developers can learn how to make algorithms that are safer and fairer, while having an incentive to take precautions to stay off the list.

What is your opinion on this method of accountability? Is there anything it does not take into account? Is it possible that some machine learning algorithms make mistakes that cannot even be detected by humans? How can this be avoided? How can the inner workings of machine learning algorithms be made more understandable and digestible for the general public?
Digital Environment Analysis
- 3 min
- Kinolab
- 2009

In a distant future after the “Water War,” in which much of the natural environment was destroyed and water has become scarce, Asha works as a curator at a museum displaying the former splendor of nature on Earth. She receives a mysterious soil sample which, after digital analysis using object recognition to extract data from the soil, surprisingly contains water.

How can technology be used to gather data on certain environments and aspects of an ecosystem to help them reach their full potential? How should this technology be made accessible to communities all across the world?
Dangers of Digital Commodification
- 9 min
- Kinolab
- 2013

In the world of this film, Robin Wright plays a fictional version of herself who has allowed herself to be digitized by the film company Miramount Studios so that she can be entered into many films without actually acting in them, becoming digitally immortal in a sense. Once she enters a hallucinogenic mixed reality known as Abrahama City, she agrees to renew her contract with Miramount Studios in a panic over her declining mental health and sense of autonomy. This renewed contract will not only allow movies starring her digital likeness to be made, but will also allow other people to appear as her.

When mixed realities make any sort of appearance possible, how do people keep agency over their own likenesses and identities? How can engineers ensure that common human fears, including the fear of aging, do not drive innovations that will ultimately do more harm than good? Should anyone be allowed to give consent for their likeness to be used in any way the new owner sees fit, given how easily people can be coerced, manipulated, or gaslit? How could economic imbalances be established or further entrenched if certain people are allowed to sell their identities or likenesses?
Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
- 10 min
- The Washington Post
- 2019

After prolonged discussion of the effect of “bots,” or automated accounts on social networks, interfering with the 2016 American electoral process, many worried that something similar could happen in 2020. This article details the shifts in strategy for using bots to manipulate political conversations online, including techniques such as inorganic coordinated activity and hashtag hijacking. Overall, some bot manipulation of political discourse is to be expected, but when used effectively these algorithmic tools still have the power to shape conversations to the will of their deployers.

How are social media networks architectures that can be manipulated to serve an individual’s agenda, and how could this be addressed? Should any kind of bot account be allowed on Twitter, or do they all have too much negative potential to be trusted? What affordances of social networks allow bad actors to redirect the traffic of these networks? Is the problem of “trends” or “cascades” inherent to social media?