Machine Learning (83)
Find narratives by ethical themes or by technologies.
-
- 7 min
- Farnam Street Blog
- 2021
A Primer on Algorithms and Bias
Discusses the main lessons from two recent books explaining how algorithmic bias occurs and how it may be ameliorated. Essentially, algorithms are little more than mathematical operations, but their lack of transparency and the bad, unrepresentative data sets that train them make their pervasive use dangerous.
How can data sets fed to algorithms be properly verified? What would the most beneficial collaboration between humans and algorithms look like?
-
- 10 min
- The Washington Post
- 2019
Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
After prolonged discussion of how “bots,” or automated accounts on social networks, interfered with the American electoral process in 2016, many worried that something similar could happen in 2020. This article details shifts in the strategies for using bots to manipulate political conversations online, including techniques such as Inorganic Coordinated Activity and hashtag hijacking. Overall, some bot manipulation of political discourse is to be expected, but when used effectively these algorithmic tools still have the power to shape conversations to the will of their deployers.
How are social media networks architected in ways that can be manipulated to serve an individual’s agenda, and how could this be addressed? Should any kind of bot account be allowed on Twitter, or do they all have too much negative potential to be trusted? What affordances of social networks allow bad actors to redirect the traffic of these networks? Is the problem of “trends” or “cascades” inherent to social media?
-
- 5 min
- Wired
- 2021
These Doctors are using AI to Screen for Breast Cancer
A computer vision algorithm created by an MIT PhD student and trained on a large, multi-year data set of mammogram images shows potential for use in radiology. The algorithm appears to identify breast cancer risk more reliably than older statistical models by tagging the data with attributes that human eyes have missed. This would allow screening and treatment plans to be customized.
Do there seem to be any drawbacks to using this technology widely? How important is transparency of the algorithm in this case, as long as it seems to provide accurate results? How might this change the nature of doctor-patient relationships?
-
- 3 min
- CNN
- 2021
Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
The wealth of social data on any given person, available through digital artifacts such as social media posts and text messages, can be used to train a new algorithm patented by Microsoft to create a chatbot that imitates that specific person. This technology has not been released, however, due to the harrowing ethical implications of impersonation and dissonance. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?
-
- 9 min
- Kinolab
- 2013
Dangers of Digital Commodification
In the world of this film, Robin Wright plays a fictional version of herself who has allowed the film company Miramount Studios to digitize her so that she can appear in films without actually acting in them, becoming digitally immortal in a sense. Once she enters a hallucinogenic mixed reality known as Abrahama City, she renews her contract with Miramount Studios amid panic over her declining mental health and sense of autonomy. The renewed contract will not only allow movies starring her digital likeness to be made, but will also allow other people to appear as her.
When mixed realities make any sort of appearance possible, how do people keep agency over their own likenesses and identities? How can engineers ensure that common human fears, including the fear of aging, do not drive innovations that will ultimately do more harm than good? Should anyone be allowed to give consent for their likeness to be used in any way the new owner sees fit, given how easily people can be coerced, manipulated, or gaslit? How could economic imbalances be further entrenched or established if certain people are allowed to sell their identities or likenesses?
-
- 12 min
- Kinolab
- 1965
Supercomputer Rule and Condensing Human Behavior
The city of Alphaville is under the complete rule of Alpha-60, an omnipresent robot whose knowledge is more vast than that of any human. This robot, whose learning and knowledge model is deemed “too complex for human understanding,” cements its rule through effectively outlawing emotion in Alphaville, with all definitions of consciousness centering on rationality. All words expressing curiosity or emotion are erased from human access, with asking “why” being replaced with saying “because.” Lemmy is a detective who has entered Alphaville from an external land to destroy Alpha-60. However, in their conversation, Alpha-60 is immediately able to suss out the suspicious aspects of Lemmy’s visit and character.
Can governing computers or machines ever be totally objective? Is this objectivity dangerous? Do humans need emotionality to define themselves and their societies, so that a focus on rationality does not allow for computers to take over rule of our societies? Can the actions of all humans throughout the past be reduced down to a pattern that computers can understand or manipulate? What are the implications of omnipresent technology in city settings?