Privacy (134)
Find narratives by ethical themes or by technologies.
- 3 min
- MacRumors
- 2021
Facebook Weighing Up Legality of Facial Recognition in Upcoming Smart Glasses
Facebook’s collaboration with Ray-Ban on a new line of “smart glasses” comes with a host of questions about whether capabilities such as facial recognition should be built into the technology.
What are the “so clear” benefits and risks of building facial recognition algorithms into smart glasses, in your view? What are the problems with “transparent technology” such as smart glasses, where other citizens may not even know that they are being surveilled?
-
- 5 min
- Inc
Clubhouse Is Recording Your Conversations. That’s Not Even Its Worst Privacy Problem
Clubhouse, an exclusive social networking app that appeared during the coronavirus pandemic, has some frightening data collection practices, which this article outlines in detail. Although the company had not yet monetized at the time of the article, it collects data not only on its own users but also on all of those users’ contacts.
What are the consequences of social networks having detailed data on the personal networks of their users? What are the dangers of data collection that puts many different social networking platforms into conversation with one another? How do draws such as exclusivity pull attention away from irresponsible data mining practices?
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details the research of Joy Buolamwini on racial bias coded into algorithms, specifically facial recognition programs. When auditing facial recognition software from several large companies, such as IBM and Face++, she found that they were far worse at correctly identifying darker-skinned faces. Overall, this reveals that facial analysis and recognition programs need external systems of accountability.
What does external accountability for facial recognition software look like, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
-
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm which can supposedly reconstruct what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. Rather than actually sharpening the image, the algorithm works by generating a synthetic face that matches the pixel pattern. It is these synthetic faces that show a clear bias toward white people, demonstrating how thoroughly institutional racism makes its way into technological design. Thus, more diverse data sets will not fully help until broader solutions for combating bias are enacted. (A minimal code sketch of this latent-search idea follows the discussion questions below.)
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
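To make the mechanism concrete, here is a minimal, illustrative sketch of the search idea described above, assuming a pretrained face generator (PULSE itself builds on StyleGAN; the `generator` used here is a hypothetical stand-in, and the parameter values are placeholders). Instead of sharpening the input, the search hunts for a synthetic high-resolution face whose downscaled version matches the low-resolution pixels:

```python
# Illustrative sketch only: `generator` is a hypothetical stand-in for a
# pretrained face generator (PULSE builds on StyleGAN). The point is the
# search strategy, not a faithful reimplementation of PULSE.
import torch
import torch.nn.functional as F

def latent_search(generator, low_res, scale_factor=8, steps=500, lr=0.1):
    """Search latent space for a face whose downscaled version matches low_res."""
    z = torch.randn(1, 512, requires_grad=True)      # random starting latent
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        high_res = generator(z)                      # synthetic high-res face
        downscaled = F.interpolate(high_res, scale_factor=1 / scale_factor,
                                   mode="bicubic", align_corners=False)
        # The only constraint is pixel agreement after downscaling.
        loss = F.mse_loss(downscaled, low_res)
        loss.backward()
        optimizer.step()
    return generator(z).detach()   # a plausible face, not the true identity
```

Many different high-resolution faces downscale to the same few pixels, so the constraint above never pins down the real person; the generator's learned prior decides which "plausible" face wins. If that prior was trained mostly on white faces, the winner skews white, which is exactly the failure mode the article describes.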
-
- 5 min
- Inc
- 2021
Tim Cook May Have Just Ended Facebook
On International Data Privacy Day, Apple CEO Tim Cook fired shots at Mark Zuckerberg and Facebook’s model of mining user data through platform analytics and web mining to serve targeted ads to users. By contrast, Cook painted Apple as a privacy-oriented company that wants to make technology work for its users rather than collect their data and manipulate them psychologically through advertising.
Are you convinced that Apple has a better business model than Facebook? Should users be responsible for taking steps to protect themselves against web mining, or should Facebook be responsible for adding more guardrails? What are the consequences of both Facebook and Apple products being embedded in larger architectures that extend beyond any single digital artifact?
-
- 6 min
- Kinolab
- 2019
Consent and Control with Personal Data
In an imagined future London, citizens all across the globe are connected to the Feed, a device and network accessed constantly through a brain-computer interface. Kate Hatfield, a new mother, discovers that someone has hacked the device in her head and accessed some of her lived memories. The culprit is later revealed to be her father-in-law, Lawrence, who was attempting to implant the Feed into Bea, the new baby.
What are the dangers that come with ‘backing up’ memory to some type of cloud account? What risks are posed by the hackers and corporations involved in such backup services? Is there something special about the transient, temporary nature of human memory that should remain as it is? How much of our privacy are we willing to sacrifice for safety and connectivity? How should consent work when installing a brain-computer interface into a person? Should a parent or other family member be able to decide this for a child?