Data Mining (28)
Find narratives by ethical themes or by technologies.
- 7 min
- Wired
- 2019
It’s Time to Switch to a Privacy Browser
Internet users should consider switching to privacy-focused browsers such as DuckDuckGo to protect their privacy and avoid personalized search results and ads. A growing amount of software, including browsers from the larger tech companies, is taking this approach of erasing data, blocking third-party trackers, or restricting cookies.
Consider: if the privacy-oriented browsers described in the article were the default, whose interests would that serve? Whose interests would it work against?
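A minimal sketch, in Python, of two of the mechanisms described above. The blocklist and rules here are made-up stand-ins, not any real browser's implementation; real privacy browsers ship large, curated blocklists.

```python
# Toy illustration of two privacy-browser mechanisms: stripping known
# tracking parameters from URLs and refusing third-party cookies.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Hypothetical blocklist; real browsers use much larger, curated lists.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def strip_tracking_params(url: str) -> str:
    """Drop query parameters commonly used for cross-site ad tracking."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

def allow_cookie(page_host: str, cookie_host: str) -> bool:
    """Third-party cookie blocking: only the visited site may set cookies."""
    return cookie_host == page_host or cookie_host.endswith("." + page_host)

print(strip_tracking_params(
    "https://example.com/article?id=7&utm_source=newsletter&fbclid=abc"))
# https://example.com/article?id=7
print(allow_cookie("example.com", "ads.tracker.net"))  # False
```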
-
- 10 min
- The New Yorker
- 2019
The Hidden Costs of Automated Thinking
A great breakdown of the concerns that come with automating the world without understanding why it works. The article lays out the principal concerns with the “hidden layers” of artificial neural networks and explains how the lack of human understanding of some AI decision-making leaves these machines susceptible to manipulation.
Should we still use technology that we do not fully understand? Might machines play a role in the demise of expertise? How can companies and institutions be held accountable and made to “lift the curtain” on their algorithms?
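To make the “hidden layer” concrete, here is a minimal sketch of a one-hidden-layer network with random stand-in weights (not a trained model). The intermediate activations are plain unlabeled numbers, which is the opacity the article describes.

```python
# Minimal one-hidden-layer network; weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # input (4) -> hidden (8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # hidden (8) -> output (2)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0.0, x @ W1 + b1)  # ReLU: the "hidden layer"
    # `hidden` is eight unlabeled floats; no rationale can be read off them,
    # and small, targeted changes to `x` can flip the output.
    return hidden @ W2 + b2

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```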
-
- 30 min
- Wired
- 2019
Inside China’s Vast New Experiment in Social Ranking
In China, “supercompanies” such as WeChat or Alipay aggregate massive amounts of varied data on their users. The Zhima Credit scoring system directly curtails users’ agency by limiting the options available to them and shaping with whom they interact. The Chinese government, meanwhile, is interested in allying with large tech companies to build a social ranking system that can be used to control and suppress citizens. Although the United States has no “supercompanies” like China’s, the large companies that collect user data in the US certainly have the same potential to limit human agency.
How does technologically enforced social credit help perpetuate social division? What level of privacy is appropriate when it comes to social standing? Where should the line be drawn when making decisions about people based on their digitally collected data?
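As a toy illustration of the mechanism described above (the signals, weights, and cutoff below are invented; Zhima Credit's actual formula is proprietary): heterogeneous data streams are fused into one number, and that number then gates what a person is allowed to do.

```python
# Toy score aggregation and gating; all weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class UserData:
    on_time_payments: float   # 0..1, from payment history
    purchase_profile: float   # 0..1, inferred from shopping data
    friend_scores_avg: float  # 0..1, average score of one's contacts

def composite_score(u: UserData) -> int:
    """Fuse unrelated signals into a single 350-950-style score."""
    raw = (0.5 * u.on_time_payments
           + 0.2 * u.purchase_profile
           + 0.3 * u.friend_scores_avg)
    return int(350 + raw * 600)

def can_rent_without_deposit(score: int) -> bool:
    return score >= 650  # arbitrary cutoff: the score limits one's options

user = UserData(on_time_payments=0.9, purchase_profile=0.4, friend_scores_avg=0.5)
score = composite_score(user)
print(score, can_rent_without_deposit(score))  # 758 True
```

Note how `friend_scores_avg` feeds back into the score: under such a system, associating with low-scoring people lowers one's own standing, which is how the score shapes with whom users interact.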
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru’s ethical warnings against training natural-language-processing systems on ever-larger language models built from vast corpora of internet text. Not only does this process have a negative environmental impact, it also still does not let these machine-learning tools process semantic nuance, especially the language of emerging social movements and of countries with lower internet access. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google.
How should the datasets and methods used to train NLP models be scrutinized more closely? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
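As a toy illustration of one of the paper's concerns: a language model only reflects the text it was trained on, so terms from an emerging social movement or an underrepresented community receive the floor probability of an unseen word. The corpus and example terms below are made up.

```python
# Toy Laplace-smoothed unigram model over a made-up corpus.
from collections import Counter

corpus = ("the market rallied today . investors cheered the earnings report . "
          "the quarterly report beat expectations .").split()
counts = Counter(corpus)
total, vocab = sum(counts.values()), len(counts)

def unigram_prob(word: str) -> float:
    """Laplace-smoothed probability; unseen words all get the same floor."""
    return (counts[word] + 1) / (total + vocab)

print(unigram_prob("report"))        # seen in training text: ~0.10
print(unigram_prob("#NewMovement"))  # absent from training text: ~0.03
```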