AI (124)
Find narratives by ethical themes or by technologies.
- 5 min
- Business Insider
- 2020
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to fully take her research on machine learning and algorithmic bias into account. She was terminated hastily after sending an email asking Google to meet certain research-based conditions. Gebru is a leading expert in the field of AI and bias.
How can tech monopolies dismiss recommendations to make their technologies more ethical? How do bias ethicists such as Gebru get onto a more unshakeable platform? Who is going to hold tech monopolies more accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to just start fresh?
-
- 7 min
- ZDNet
- 2020
Rebooting AI: Deep learning, meet knowledge graphs
Dr. Gary Marcus argues that deep learning as it currently exists is not maximizing the potential of AI to collect and process knowledge. Machine "brains," he contends, should have more innate knowledge than they do, similar to how animal brains come equipped to process an environment. Ideally, this baseline knowledge would be used to collect and process information from knowledge graphs: semantic webs of information available on the internet that can be hard for an AI to process without translation into machine-readable vocabularies such as RDF (see the sketch below).
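To make the idea of a machine-readable vocabulary concrete, here is a minimal sketch in Python using the rdflib library. The http://example.org/ namespace and the facts themselves are invented for illustration; they are not drawn from the article.

```python
# A minimal sketch of the kind of "machine vocabulary" Marcus describes:
# facts stored as RDF triples that software can traverse directly.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # illustrative namespace

g = Graph()
g.bind("ex", EX)

# Each fact in a knowledge graph is a (subject, predicate, object) triple.
g.add((EX.GaryMarcus, RDF.type, EX.Researcher))
g.add((EX.GaryMarcus, EX.advocates, EX.HybridAI))
g.add((EX.HybridAI, EX.combines, EX.DeepLearning))
g.add((EX.HybridAI, EX.combines, EX.KnowledgeGraphs))

# Structured facts can be read back without any statistical inference:
for obj in g.objects(EX.HybridAI, EX.combines):
    print(obj)  # http://example.org/DeepLearning, .../KnowledgeGraphs

# Turtle is one common serialization such systems can exchange.
print(g.serialize(format="turtle"))
```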
Does giving a machine learning capabilities similar to those of humans and animals bring artificial intelligence closer to the singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
-
- 51 min
- TechCrunch
- 2020
Artificial Intelligence and Disability
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and societal models of disability: the medical lens tends to spur technologies with an individual focus on remedying disability, whereas the societal lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labeled as inherently "normative" since they are trained on data that comes from a biased society, and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
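As a toy illustration of the "normative" problem the panel describes, the sketch below (all numbers invented, not from the podcast) shows how an aggregate accuracy figure can hide much worse performance on an underrepresented group:

```python
# A toy illustration of why aggregate metrics can hide harm: a model that
# looks accurate overall may still fail badly on a group underrepresented
# in its training data.
from collections import defaultdict

# Hypothetical (prediction_was_correct, group) records for a deployed model.
records = ([(True, "majority")] * 88 + [(False, "majority")] * 2
           + [(True, "minority")] * 4 + [(False, "minority")] * 6)

overall = sum(ok for ok, _ in records) / len(records)
print(f"overall accuracy: {overall:.0%}")  # 92% -- looks fine

by_group = defaultdict(list)
for ok, group in records:
    by_group[group].append(ok)

# Disaggregating reveals the skew the aggregate number conceals.
for group, outcomes in sorted(by_group.items()):
    print(f"{group}: {sum(outcomes) / len(outcomes):.0%}")
# majority: 98%, minority: 40%
```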
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology which can actually put accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be forced to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
-
- 5 min
- Wired
- 2020
The Ethics of Rebooting the Dead
As means of digitally preserving deceased loved ones become more and more feasible, it is critical to consider the implications of technologies which aim to replicate and capture the personality and traits of those who have passed. Not only might this change the natural process of grieving and healing, it may also have alarming consequences for the agency of the dead. For the corresponding Black Mirror episode discussed in the article, see the narratives “Martha and Ash Parts I and II.”
Should anyone be allowed to use digital resurrection technologies if they feel it may better help them cope? With all the data points that exist for internet users in this day and age, is it easier to create versions of deceased people which are uncannily similar to their real identities? What would be missing from this abstraction? How is a person’s identity kept uniform or recognizable if they are digitally resurrected?
-
- 7 min
- Wired
- 2020
Congress Is Eyeing Face Recognition, and Companies Want a Say
As different levels of the U.S. government have introduced and passed bills regulating or banning the use of facial recognition technologies, tech monopolies such as Amazon and IBM have become important lobbying agents in these conversations. Most of the major players, however, remain divided over how exactly face recognition algorithms should be limited or used, especially given the technology’s negative impacts on privacy when used for surveillance.
Can and should the private sector be regulated in its use of facial recognition technologies? How is it that tech monopolies might hold so much sway with government officials, and how can this be addressed? Do the benefits of facial recognition listed at the end of the article, such as convenience at the airport, make enough of a case against a complete ban of the technology, or do the bad applications ultimately outweigh the good ones? What would the ideal bill look like in terms of limiting or banning facial recognition?
-
- 5 min
- ZDNet
- 2020
AI Failure in Elections
In recent municipal elections in Brazil, machine learning software and hardware provided by Oracle failed to count the votes correctly. This ultimately led to a delay in the results, as the AI had not been properly calibrated beforehand.
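The article attributes the delay to a lack of calibration before deployment. Below is a hedged sketch of what such a pre-deployment check might look like; the `count_votes` pipeline and the ballot data are hypothetical stand-ins, not Oracle's actual system.

```python
# A hypothetical pre-deployment check of the kind the article implies was
# skipped: run the counting pipeline on ballots with hand-verified totals
# and refuse to deploy unless every total matches exactly.

def count_votes(ballots):
    """Placeholder for the vote-counting pipeline under test."""
    tallies = {}
    for ballot in ballots:
        tallies[ballot] = tallies.get(ballot, 0) + 1
    return tallies

def calibration_check(pipeline, test_ballots, expected):
    """Return the candidates whose counts diverge from verified totals."""
    actual = pipeline(test_ballots)
    return {c: (expected.get(c), actual.get(c))
            for c in set(expected) | set(actual)
            if expected.get(c) != actual.get(c)}

test_ballots = ["A"] * 120 + ["B"] * 80 + ["spoiled"] * 5
expected = {"A": 120, "B": 80, "spoiled": 5}

mismatches = calibration_check(count_votes, test_ballots, expected)
assert not mismatches, f"do not deploy: {mismatches}"
print("calibration check passed")
```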
Whose responsibility was it to fully test and calibrate this AI before it was used in an election? What sorts of more dire consequences could result from a failure of AI to properly count votes? What are the implications of an American tech monopoly providing this faulty technology for another country’s elections?