AI (143)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Technology Review
- 2020
Inside the strange new world of being a deepfake actor
This article details the reactions to the deepfake documentary In the Event of Moon Disaster.
-
- 5 min
- Premium Beat
- 2020
Is Deepfake Technology the Future of the Film Industry?
This blog post explores what the combination of deepfake and computer-generated imagery (CGI) technologies might mean for filmmakers.
-
- 30 min
- CNET, New York Times, Gizmodo
- 2023
The ChatGPT Congressional Hearing
On May 16, 2023, OpenAI CEO Sam Altman testified before Congress on the potential harms of AI and how it ought to be regulated in the future, especially concerning new tools such as ChatGPT and voice imitators.
Watch the CNET video of the hearing’s top moments first, then read the Gizmodo overview, and finish with the associated New York Times article. All three resources highlight the need for governmental intervention to hold companies that produce AI products accountable, especially in the wake of Congress’s lack of fully effective action on social media companies. While misinformation and deepfakes have been concerns among politicians since the advent of social media, the hearing also raises newer concerns, such as a fresh wave of job loss and the crediting of artists.
If you were in the position of the congresspeople at the hearing, what questions would you ask Sam Altman? Does Sam Altman place too much of the onus of ethical regulation on the government? How would the “license” approach apply to AI companies that already exist and have released popular products? Do you believe Congress might still be able to “meet the moment” on AI?
-
- 51 min
- TechCrunch
- 2020
Artificial Intelligence and Disability
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and social models of disability: the medical lens tends to spur technologies with an individual focus on remedying disability, whereas the social lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labelled inherently “normative,” since they are trained on data that comes from a biased society and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology that actually puts accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be compelled to consider the myriad impacts that a single technology may have on large, heterogeneous communities such as the disabled community?
-
- 7 min
- Wired
- 2020
Congress Is Eyeing Face Recognition, and Companies Want a Say
As different levels of the U.S. government have introduced and passed bills regulating or banning the use of facial recognition technologies, tech giants such as Amazon and IBM have become important lobbying agents in these conversations. Most major stakeholders seem to disagree about exactly how face recognition algorithms should be limited or used, especially given their negative impacts on privacy when used for surveillance.
Can and should the private sector be regulated in its use of facial recognition technologies? Why might large tech companies hold so much sway with government officials, and how can this be addressed? Do the benefits of facial recognition listed at the end of the article, such as convenience at the airport, make enough of a case against a complete ban of the technology, or do the harmful applications ultimately outweigh the good ones? What would the ideal bill look like in terms of limiting or banning facial recognition?