Computer Vision (40)
- 7 min
- Slate
- 2021
Maine Now Has the Toughest Facial Recognition Restrictions in the U.S.
A new law passed unanimously in Maine heavily restricts the contexts in which facial recognition technology can be deployed, putting significant guardrails around how it is used by law enforcement. It also allows citizens to sue if they believe the technology has been misused. The law stands out at a time when governments at every level, up to the federal government, have been reluctant to attach strict rules to the use of facial recognition technology, despite the clear bias documented in its use.
- How can tech companies do even more to lobby for stricter facial recognition regulation?
- Is a moratorium on facial recognition use by all levels of government the best plan? Why or why not?
- Does creating “more diverse datasets” truly solve all the problems of bias with the technology?
-
- 40 min
- New York Times Magazine
- 2021
Your Face Is Not Your Own
This article goes into extraordinary detail on Clearview AI, a company whose algorithm has crawled the public web to amass over 3 billion photos of faces, each linked back to its original source. The article discusses the legality and privacy concerns of this technology, how it has already been used by law enforcement and in court cases, and the founding of the company. Private use of technology similar to Clearview AI’s could revolutionize society and may move us into a post-privacy era.
- Should companies like Clearview AI exist?
- How would facial recognition be misused by both authorities and the general public if it were to permeate all aspects of life?
-
- 40 min
- New York Times
- 2021
She’s Taking Jeff Bezos to Task
As facial recognition technology becomes more prominent in everyday life, used by law enforcement officials and private actors alike to identify faces by comparing them against databases, AI ethicists such as Joy Buolamwini are pushing back against the many forms of bias these technologies exhibit, particularly racial and gender bias. Governments often use such technologies callously or irresponsibly, and the lack of regulation of the private companies that sell these products could lead society into a post-privacy era.
- Do you envision an FDA-style approach to technology regulation, particularly for facial recognition, being effective?
- Can large tech companies be incentivized to make truly ethical decisions about how their technology is created or deployed as long as the profit motive exists? What would this look like?
- What changes to technology workforces, such as who designs software products or who chooses datasets, need to be made for technology’s impact to become more equal across populations?
-
- 10 min
- MIT Media Lab
- 2021
Detect DeepFakes
This is an MIT research project. All data is collected anonymously for research purposes.
This project will show you a variety of media snippets, including transcripts, audio files, and videos. Sometimes we include subtitles; sometimes the video is silent. You can watch the videos as many times as you would like. The site will ask you to share how confident you are that the individual really said what we show. If you have seen the video before today, please select the checkbox that says “I’ve already seen this video.” And remember, half of the media snippets presented are statements that the individual actually said. Read more about this project and the dataset used to produce this research on the About page:
https://detectfakes.media.mit.edu/about
- What about these videos is so disturbing?
- How can we be convinced not to trust our own judgment about the information being presented?
-
- 33 min
- Scientific American
- 2020
To Make a DeepFake
Produced in conjunction with Scientific American, this thirty-minute documentary brings the film In Event of Moon Disaster to a group of experts on AI, digital privacy, law, and human rights to gauge their reactions to the film and to provide context on this new technology: its perils, its potential, and the possibilities of a brave new digital world where every pixel that moves past our collective eyes is potentially up for grabs.
- What are some of the new dangers this technology brings that are different from other forms of media disinformation in the past?
-
- 5 min
- MIT Technology Review
- 2020
Inside the strange new world of being a deepfake actor
This article details the reactions to the deepfake documentary In Event of Moon Disaster.