AI (124)
Find narratives by ethical themes or by technologies.
-
- 10 min
- Engadget
- 2021
Hitting the Books: The Brooksian revolution that led to rational robots
This article provides an excerpt from a book detailing the “Brooksian Revolution,” a movement in the 1980s advancing the idea that the “intelligence” of AI should start from a foundation of acute awareness of its environment, rather than from “typical” indicators of intelligence such as pure logic or problem solving. In principle, a reasoning loop that operates on a one-time perception of its environment is inherently disconnected from that environment.
Why is an environment important to cognition, both that of humans and machines? Will robots ever be able to abstract the world, or model it, in the same way that the human brain can? Are there dangers to robots being strictly “rational” and decoupled from their environments? Are there dangers to robots being too connected to their environments?
-
- 40 min
- New York Times
- 2021
She’s Taking Jeff Bezos to Task
As facial recognition technology becomes more prominent in everyday life, used by law enforcement and private actors alike to identify faces by comparing them against databases, AI ethicists and researchers such as Joy Buolamwini are pushing back against the biases these systems exhibit, particularly racial and gender bias. Governments often use such technologies callously or irresponsibly, and the lack of regulation of the private companies that sell these products could push society into a post-privacy era.
Do you envision an FDA-style approach to technology regulation, particularly for facial recognition, being effective? Can large tech companies be incentivized to make truly ethical decisions about how their technology is created and deployed as long as the profit motive exists? What would this look like? What changes to the technology workforce, such as who designs software products or who chooses data sets, need to be made for technology’s impact to become more equitable across populations?
-
- 40 min
- New York Times Magazine
- 2021
Your Face Is Not Your Own
This article goes into extraordinary detail on Clearview AI, a company whose algorithm has crawled the public web to amass more than 3 billion photos of faces, each linked back to its original source. It discusses the legality of and privacy concerns surrounding the technology, how it has already been used by law enforcement and in court cases, and the company’s founding. Private use of technology similar to Clearview AI’s could revolutionize society and may move us into a post-privacy era. (A brief, purely illustrative sketch of this kind of face-database lookup follows this entry.)
Should companies like Clearview AI exist? How would facial recognition be misused by both authorities and the general public if it were to permeate all aspects of life?
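To make the database-lookup mechanism described in this entry concrete, here is a minimal, purely illustrative sketch, not Clearview AI’s actual system: it assumes a hypothetical face-embedding model has already converted each stored photo into a fixed-length vector, and it simply returns the source links of the most similar stored faces by cosine similarity. The names (`FaceRecord`, `lookup`) and all data are invented for illustration.

```python
# Purely illustrative sketch of a face lookup against an embedding database.
# Not Clearview AI's actual system; records, embeddings, and URLs are made up.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceRecord:
    source_url: str        # link back to where the photo was originally found
    embedding: np.ndarray  # fixed-length face embedding from a hypothetical model

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def lookup(query: np.ndarray, database: list, top_k: int = 5):
    # Rank stored faces by similarity to the query and return their source links.
    scored = [(cosine_similarity(query, rec.embedding), rec.source_url) for rec in database]
    scored.sort(reverse=True)
    return scored[:top_k]

# Example with made-up data: two stored faces and one query embedding.
rng = np.random.default_rng(0)
db = [
    FaceRecord("https://example.com/photo1.jpg", rng.random(128)),
    FaceRecord("https://example.com/photo2.jpg", rng.random(128)),
]
print(lookup(rng.random(128), db, top_k=1))
```

A real system of the scale described in the article would likely hold billions of vectors and use an approximate nearest-neighbor index rather than the linear scan shown here.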
-
- 7 min
- Amnesty International
- 2021
Amnesty International calls for ban on the use of facial recognition technology for mass surveillance
Amnesty International released a statement detailing its opposition to the widespread use of facial recognition technology for mass surveillance, based on the technology’s misuse, its disproportionate impact on Black communities, and the chilling effect it would have on peaceful protest.
Is more accurate facial recognition technology a good thing or a bad thing? How would FRT be weaponized to justify policing policies that are already unfair toward Black communities? Why is anonymity important, both in protest scenarios and elsewhere? Can anyone be anonymous in the age of digital technology? What amount of anonymity is appropriate?
-
- 6 min
- CBS News
- 2021
Facebook algorithm called into question after whistleblower testimony calls it dangerous
In light of recent allegations by Facebook whistleblower Frances Haugen that the platform irresponsibly breeds division and mental health issues, AI specialist Karen Hao explains how Facebook’s algorithms serve or fail the people who use them. Specifically, the profit motive and the lack of exact, comprehensive knowledge of how the algorithmic system works prevent groundbreaking change from being made.
Do programmers and other technologists have a responsibility to understand exactly how algorithms work and how they tag data? What are the specific consequences of algorithms that use their own criteria to tag items? How do social media networks take advantage of human attention?
-
- 5 min
- CNET
- 2019
Demonstrators scan public faces in DC to show lack of facial recognition laws
Fight for the Future, a digital activist group, used Amazon’s Rekognition facial recognition software to scan faces on the street in Washington, DC, to argue that there should be more guardrails on this type of technology before it is deployed for ends that violate human rights, such as identifying peaceful protestors.
Does this kind of stunt seem effective at drawing the public’s attention to the ways facial recognition can be misused? How? Who decides what counts as a “positive” use of facial recognition technology, and how can these use cases be negotiated with citizens who want their privacy protected?