Privacy (134)
- Wired
- 2021
Why a YouTube Chat About Chess Got Flagged for Hate Speech
YouTube's algorithm's struggle to distinguish chess-related terms from hate speech and abuse has revealed shortcomings in artificial intelligence's ability to moderate online hate speech. The incident reflects the need to develop digital technologies capable of processing natural language with a sufficient degree of social sensitivity.
Where do you draw the line between freedom of speech and the enforcement of online community conduct rules? What problems do you think AI will encounter in moderating hate speech such as slurs?
- 40 min
- New York Times
- 2021
She’s Taking Jeff Bezos to Task
As facial recognition technology becomes more prominent in everyday life, used by law enforcement officials and private actors to identify faces by comparing them against databases, AI ethicists such as Joy Buolamwini are pushing back against the many forms of bias these technologies exhibit, particularly racial and gender bias. Governments often use such technologies callously or irresponsibly, and the lack of regulation of the private companies that sell these products could lead society into a post-privacy era.
Do you envision an FDA-style approach to technology regulation, particularly for facial recognition, being effective? Can large tech companies be incentivized to make truly ethical decisions about how their technology is created or deployed as long as the profit motive exists? What would this look like? What changes to the technology workforce, such as who designs software products or who chooses data sets, need to be made for technology's impact to become more equitable across populations?
- 10 min
- Slate
- 2021
How a Dead Professor Is Teaching a University Art History Class
Using the story of art history professor François-Marc Gagnon, whose video lectures were used to instruct students even after his death, this article raises questions about how technologies such as digital memory and data streaming for education during the coronavirus pandemic may ultimately undervalue the work of educators.
What are the largest possible detriments to automating teaching, both for students and for educators? If large amounts of data from a given course or discipline were used to train an AI to teach a course, what would such a program do well, and what aspects of education would be missed? How can educators have more personal control over the digital traces of their teaching? At what point might broader access to educational materials through digital networks actually harm certain groups of people?
- 40 min
- New York Times Magazine
- 2021
Your Face Is Not Your Own
This article goes into extraordinary detail on Clearview AI, a company whose algorithm has crawled the public web to compile over 3 billion photos of faces, each linked back to its original source. It discusses the legality and privacy concerns surrounding this technology, how it has already been used by law enforcement and in court cases, and the founding of the company. Private use of technology similar to Clearview AI's could revolutionize society and may move us into a post-privacy era.
Should companies like Clearview AI exist? How would facial recognition be misused by both authorities and the general public if it were to permeate all aspects of life?
- 7 min
- Slate
- 2021
Maine Now Has the Toughest Facial Recognition Restrictions in the U.S.
A new law passed unanimously in Maine heavily restricts the contexts in which facial recognition technology can be deployed, putting significant guardrails around its use by law enforcement, and allows citizens to sue if they believe the technology has been misused. This is a unique step at a time when other levels of government, up to and including the federal government, have been reluctant to attach strict rules to the use of facial recognition technology, despite the clear bias seen in its use.
How can tech companies do even more to lobby for stricter facial recognition regulation? Is a moratorium on facial recognition use by all levels of government the best plan? Why or why not? Does creating “more diverse datasets” truly solve all the problems of bias with the technology?
- 7 min
- Amnesty International
- 2021
Amnesty International Calls for Ban on the Use of Facial Recognition Technology for Mass Surveillance
Amnesty International released a statement detailing its opposition to the widespread use of facial recognition technology for mass surveillance, based on the technology's misuse, its unfair impact on Black communities, and the chilling effect it would have on peaceful protest.
Is more accurate facial recognition technology a good thing or a bad thing? How would FRT be weaponized to justify policing policies that are already unfair toward Black communities? Why is anonymity important, both in protest scenarios and elsewhere? Can anyone be anonymous in the age of digital technology? What amount of anonymity is appropriate?