Machine Learning (83)
Find narratives by ethical themes or by technologies.
- 5 min
- Wired
- 2021
Don’t End Up on This Artificial Intelligence Hall of Shame
This narrative describes the AI Incident Database, launched at the end of 2020, where companies report case studies in which applied machine learning algorithms did not function as intended or caused real-world harm. The database is meant to operate much like air-travel safety reporting programs: with it, developers can get a sense of how to make algorithms safer and fairer, while also having an incentive to take precautions so as to stay off the list.
What is your opinion on this method of accountability? Is there anything it does not take into account? Is it possible that some machine learning algorithms make mistakes that cannot even be detected by humans? How can this be avoided? How can the inner workings of machine learning algorithms be made more understandable and digestible by the general public?
- 7 min
- Chronicle
- 2021
Artificial Intelligence Is a House Divided
The history of AI has swung, pendulum-like, between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the human brain.
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?
- 7 min
- Venture Beat
- 2021
Center for Applied Data Ethics suggests treating AI like a bureaucracy
As machine learning algorithms become more deeply embedded in all levels of society, including government, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, particularly when they are built on biased data. Historical and anthropological lenses are helpful for dissecting AI systems: how they model the world, and which perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards for scrutinizing data and for bringing humanities perspectives into their applications?
- 7 min
- Kinolab
- 2013
Digital Performers and the Gift of Choice
In this film, actress Robin Wright plays a fictionalized version of herself, an actress whose popularity is declining. Her agent Al exposes her to deepfake technology that creates a virtual version of an actor to play a role in any number of scenarios or films. These “actors” are 3D holograms driven by AI trained to replicate the real people they imitate. However, Robin is disconcerted by the lack of agency she would have in deciding how her image and identity appear in these movies.
What sorts of problems arise from the ability to manipulate another person’s body and likeness in a piece of media without their consent? Does technology like this actually have the potential to free actors from some of the constraints of the film industry, as Al claims? How would acting be valued as an art, and actors paid accordingly, if this technology became the norm?
- 5 min
- Gizmodo
- 2021
Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms
Thorough investigation concluded that bots played a role in the disruption of GameStop stock in early 2021. The automated accounts helped spread material promoting the purchase and holding of GameStop shares as a ploy to check wealthy hedge fund managers who had bet that the stock would crash. The overall effect of these bots on this specific campaign, and thus any general measure of how bots might disrupt online markets through interaction with humans, remains difficult to gauge.
Do you consider this case study, and the use of the bots, to be “activism”? How can this case study be distilled into a general principle for how bots may manipulate the economy? How do digital technologies help both wealthy and non-wealthy people serve their own interests?
- ZDNet
- 2021
Amazon makes Alexa Conversations generally available
Alexa Conversations improves the quality of its natural language processing as users feed it sample conversations. This feedback system allows Alexa Conversations to cut the costs developers face in training and in managing the related data.
What are some measures you think technology companies should implement to ensure the protection of users’ privacy? What role do you think the government should play?