Biometrics (35)
-
- 7 min
- Slate
- 2019
Facebook’s Face-ID Database Could Be the Biggest in the World. Yes, It Should Worry Us.
A discussion of Facebook’s massive collection of human faces and its potential impact on society.
Is Facebook’s facial recognition database benign, or a slow-bubbling volcano?
-
- 5 min
- Gizmodo
- 2021
CBP Facial Recognition Scanners Failed to Find a Single Imposter At Airports in 2020
Customs and Border Protection used facial recognition technology to scan travelers entering the U.S. at several ports of entry in 2020 and did not identify a single impostor. The scans are part of a larger program of biometric screening of those who enter the country, which raises concerns about data privacy, who may have access to this data, and how it may be used. (A sketch of the accuracy tradeoff behind the headline claim follows the discussion questions.)
What bad outcomes are possible from the government having extensive biometric data, including facial scans, on many people who try to enter the country? Why does the government get away with using biased technology to conduct facial scans at airports, for example? Are “facilitation improvements” worth aiming for if it means using technologies that are not 100% effective and will disproportionately harm certain populations?
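The headline number turns on where the verification threshold is set. As a hedged illustration (the score distributions below are invented, not CBP’s data or system), this sketch shows how moving a similarity threshold trades false matches, where impostors are accepted, against false non-matches, where genuine travelers are rejected:

```python
# Illustration only: the tradeoff behind a claim like "no impostors found."
# All numbers are invented; this is not CBP's system or data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores: genuine pairs (same person) tend to score
# higher than impostor pairs (different people), but the distributions overlap.
genuine_scores = rng.normal(loc=0.80, scale=0.08, size=10_000)
impostor_scores = rng.normal(loc=0.45, scale=0.10, size=10_000)

for threshold in (0.55, 0.65, 0.75):
    fnmr = np.mean(genuine_scores < threshold)   # genuine travelers rejected
    fmr = np.mean(impostor_scores >= threshold)  # impostors accepted
    print(f"threshold={threshold:.2f}  FNMR={fnmr:.4f}  FMR={fmr:.4f}")
```

A threshold lenient enough to wave nearly everyone through will also catch very few impostors, which is one reason “found no impostors” is not, by itself, evidence that a system works.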
-
- 10 min
- The Atlantic
- 2014
How Self-Tracking Apps Exclude Women
When the Apple Health app was first released, it lacked one crucial component: the ability to track menstrual cycles. This exclusion of women from the accessible design of technology is not the exception but the rule, and it stems from the gender imbalance in technology workplaces, especially at the level of design. Communities such as the Quantified Self offer spaces that help combat this exclusionary culture.
In what ways are women being left behind by personal data tracking apps, and how can this be fixed? How can design strategies and institutions in technology development be inherently sexist? What will it take to ensure glaring omissions such as this one do not occur in other future products? How can apps that track and promote certain behaviors avoid being patronizing or patriarchal?
-
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm that can supposedly determine what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. PULSE works by generating a synthetic face whose downsampled version matches the pixel pattern, rather than actually sharpening the image (a schematic sketch of this search follows the discussion questions). It is these synthetic faces that show a clear bias toward white people, demonstrating how thoroughly institutional racism makes its way into technological design. Diversity in data sets alone will not fully help until broader solutions for combating bias are enacted.
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
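To make the mechanism concrete, here is a schematic sketch of the latent-space search that upsamplers like PULSE perform. It is a simplification for illustration, not the authors’ code; the pretrained face generator `G` is a hypothetical stand-in (e.g., a StyleGAN-like model):

```python
# Schematic sketch of PULSE-style upsampling (simplified, illustration only).
# `G` is a hypothetical pretrained face generator, not a real library API.
import torch
import torch.nn.functional as F

def pulse_style_upsample(low_res, G, latent_dim=512, steps=500, lr=0.1):
    """Search G's latent space for a synthetic high-res face whose
    downsampled version matches the low-res input."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = G(z)  # a synthetic face drawn from the generator
        downsampled = F.interpolate(candidate, size=low_res.shape[-2:],
                                    mode="bicubic", align_corners=False)
        # Only the pixel pattern is matched; nothing ties the result to
        # the person actually in the photo.
        loss = F.mse_loss(downsampled, low_res)
        loss.backward()
        opt.step()
    return G(z).detach()  # a plausible face, not the true one
```

Because the output is whatever face `G` finds easiest to produce, a generator trained on predominantly white faces will skew white regardless of who is in the input photo.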
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details Joy Buolamwini’s research on racial bias coded into algorithms, specifically facial recognition programs. Auditing facial recognition software from several large companies, including IBM and Face++, she found that the systems were far worse at correctly identifying darker-skinned faces (a minimal version of such a disaggregated audit is sketched after the discussion questions). Overall, this reveals that facial analysis and recognition programs need exterior systems of accountability.
What does exterior accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
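As a rough illustration of what such an audit involves, the sketch below reports accuracy separately for each demographic subgroup instead of one aggregate number, in the spirit of (though far simpler than) Buolamwini’s methodology. All data here is invented:

```python
# Minimal sketch of a disaggregated audit; all data below is invented.
from collections import defaultdict

def audit_by_group(predictions, labels, groups):
    """Per-subgroup accuracy for a face-analysis classifier."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy example: an aggregate accuracy of ~67% hides a large subgroup gap.
preds  = ["f", "f", "m", "m", "m", "f"]
labels = ["f", "f", "m", "m", "f", "m"]
groups = ["lighter", "lighter", "lighter", "darker", "darker", "darker"]
print(audit_by_group(preds, labels, groups))
# -> {'lighter': 1.0, 'darker': 0.3333333333333333}
```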
-
- 3 min
- MacRumors
- 2021
Facebook Weighing Up Legality of Facial Recognition in Upcoming Smart Glasses
Facebook’s collaboration with Ray-Ban on a new line of “smart glasses” raises a host of questions about whether capabilities such as facial recognition should be built into the technology.
What are the “so clear” benefits and risks of having facial recognition algorithms built into smart glasses, in your view? What are the problems with “transparent technology” such as smart glasses, where other citizens may not even know they are being surveilled?