Themes (326)
Find narratives by ethical themes or by technologies.
-
- 3 min
- CNBC
- 2013
How Facial Recognition Technology Could Help Catch Criminals
Facial recognition software uses computer vision and biometric technology to identify a person from an image, with potential applications in law enforcement for catching suspects or criminals. However, identification is probabilistic, especially as the photos or videos captured become blurrier and require an additional layer of software analysis to be “de-pixelized.” Identification also depends on the databases to which the FBI has access.
How should law enforcement balance training these facial recognition programs on sufficient high-quality data against breaching privacy by accessing more databases of citizens’ faces? Where can human bias enter the human-computer systems described in the article? Should any margin of error or element of probability be acceptable in technologies deployed in high-stakes areas like law enforcement?
-
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru’s ethical warnings against training Natural Language Processing algorithms on large language models built from text data scraped from the internet. Not only does this process have a negative environmental impact, it also fails to teach these machine learning tools semantic nuance, especially nuance tied to burgeoning social movements or to countries with lower internet access. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google.
How should the models used to train NLP algorithms be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
-
- 7 min
- VentureBeat
- 2021
GPT-3: We’re at the very beginning of a new app ecosystem
The GPT-3 Natural Language Processing model, created by OpenAI and released in 2020, is the most powerful of its kind, using a generalized approach to train its machine learning algorithm to mirror human speech. The potential applications of such a powerful program are manifold, but that potential also means that many tech monopolies may enter an “arms race” to build the most powerful model possible.
Should AI be able to imitate human speech unchecked? Should humans be trained to be able to tell when speech or text might be produced by a machine? How might Natural Language Processing cheapen human writing and writing jobs?
-
- 5 min
- Gizmodo
- 2021
Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms
A thorough investigation concluded that bots played a role in the economic disruption surrounding GameStop stock in early 2021. Essentially, automated accounts helped spread material urging people to buy and hold GameStop stock as a check on wealthy hedge fund managers who had bet the stock would crash. The holistic effect of these bots in this specific campaign, and thus any general measure of how bots interacting with humans may cause economic disruption in online markets, remains hard to gauge.
Do you consider this case study, and the use of the bots, to be “activism”? How can this case study be generalized into a principle for how bots may manipulate the economy? How do digital technologies help both wealthy and non-wealthy people serve their own interests?
-
- 3 min
- MacRumors
- 2021
Facebook Weighing Up Legality of Facial Recognition in Upcoming Smart Glasses
Facebook’s collaboration with Ray-Ban on new “smart glasses” raises a host of questions about whether or not capabilities such as facial recognition should be built into the technology.
What, in your view, are the supposedly “so clear” benefits and risks of having facial recognition algorithms built into smart glasses? What are the problems with “transparent technology” such as smart glasses, where other citizens may not even know that they are being surveilled?
-
- 5 min
- Inc
Clubhouse Is Recording Your Conversations. That’s Not Even Its Worst Privacy Problem
Clubhouse, a new, exclusive social network app that appeared during the coronavirus pandemic, has some frightening data collection practices, which this article outlines in detail. Essentially, while the company was not yet monetized at the time of this article, it collects data not only on users of the platform but also on any contacts of those users.
What are the consequences of social networks holding detailed data on the personal networks of their users? What are the dangers of combining data from many different social networking platforms? How do draws such as exclusivity pull attention away from irresponsible data mining practices?