Computer Vision (40)
Find narratives by ethical themes or by technologies.
How AI can help shatter barriers to equality
- 6 min
- TED
- 2020

Jamila Gordon, an AI activist and the CEO and founder of Lumachain, tells her story as a refugee from Ethiopia to illuminate the strokes of luck that eventually brought her to her prominent position in the global tech industry. She makes a strong case for introducing AI into the workplace: computer vision can lead to greater safety, and machine learning can help workers who do not speak the dominant language of their workplace or culture train and acclimate more effectively.

Would constant computer vision surveillance of a workplace be ultimately positive, negative, or both? How could it be ensured that machine learning algorithms are used only as positive forces in a workplace? What responsibility do large companies have to help those in less privileged countries access digital fluency?
Artificial Intelligence and Disability
- 51 min
- TechCrunch
- 2020

In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and social models of disability: the medical lens tends to spur technologies focused on remedying an individual's disability, whereas the social lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labelled inherently “normative,” since they are trained on data that comes from a biased society and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.

What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology that actually puts accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be pushed to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
AI ‘Emotion Recognition’ Can’t Be Trusted
- 7 min
- The Verge
- 2019

This article questions the growing reliance on “emotion recognition” algorithms, which use facial analysis to infer feelings, and casts doubt on the credibility of their results given machines' inability to recognize abstract nuances.

Can digital artifacts potentially detect human emotions correctly? Should our emotions be read by machines? Are emotions too complex for machines to understand? How is human agency impacted by discrete AI categories for emotions?
ICE Used Facial Recognition to Mine State Driver’s License Database
- 7 min
- The New York Times
- 2019

ICE, along with other law enforcement agencies, mined state driver’s license databases using facial recognition technology to track down undocumented immigrants and prosecute more cases.

What responsibility do DMVs across the country have to protect the privacy of citizens? What levels of bias (human and machine) are discussed in this story? Given that, can AI ever be unbiased in both functionality and use?
Why face recognition isn’t scary — yet
- 5 min
- CNN
- 2010

Algorithms and machines can struggle with facial recognition and need ideal source images to perform it consistently. Even so, its potential use in monitoring and identifying citizens is concerning.

How have the worries regarding facial recognition changed since 2010? Can we teach machines to identify human faces? How can facial recognition pose a danger when used for governmental purposes?
FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches
- 10 min
- The Washington Post
- 2019

Law enforcement officials at the federal and state levels, notably the FBI and ICE, use state driver’s license photo databases as repositories for facial recognition searches. These capabilities allow DMVs to help law enforcement find suspects, undocumented immigrants, or even witnesses. States permit this under certain stipulations, feeding a concerning system of facial recognition and breach of trust; there is no solid established system for citizen consent to such monitoring.

Does this case study of facial recognition make the US seem like a surveillance state or not? How can and should average citizens have more agency over DMV databases being used for facial recognition? Can the government use any digital surveillance in a way that does not breach citizen trust?