Machine Learning (83)
Find narratives by ethical themes or by technologies.
- 5 min
- Business Insider
- 2020
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to fully engage with her research on machine learning and algorithmic bias. She was terminated abruptly after sending an email asking Google to meet certain research-related conditions. Gebru is a leading expert in the field of AI and bias.
How can tech monopolies dismiss recommendations to make their technologies more ethical? How can bias ethicists such as Gebru gain a more secure platform? Who is going to hold tech monopolies accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to start fresh?
-
- 7 min
- ZDNet
- 2020
Rebooting AI: Deep learning, meet knowledge graphs
Dr. Gary Marcus argues that deep learning as it currently exists does not maximize AI’s potential to collect and process knowledge. He contends that machine “brains” should have more innate knowledge than they do, much as animal brains come equipped to process their environment. Ideally, this baseline knowledge would be used to collect and process information from knowledge graphs, the semantic web of information available on the internet, which can be hard for an AI to process without translation into machine vocabularies such as RDF.
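To make the idea of a machine vocabulary concrete, here is a minimal sketch of how knowledge-graph facts can be encoded as RDF triples using Python’s rdflib library. The namespace, entities, and predicates below are hypothetical illustrations, not drawn from the article.

```python
# A minimal sketch: encoding knowledge-graph facts as RDF triples with rdflib.
# The example.org namespace and predicate names are hypothetical, for illustration only.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Each fact is a (subject, predicate, object) triple.
g.add((EX.DeepLearningSystem, RDF.type, EX.AISystem))
g.add((EX.DeepLearningSystem, EX.lacks, EX.InnateKnowledge))
g.add((EX.KnowledgeGraph, EX.encodedIn, EX.RDF))

# Serialize to Turtle, a human-readable RDF syntax.
print(g.serialize(format="turtle"))
```

Once facts are in this triple form, a system can query and traverse them mechanically, which is the kind of structured “baseline knowledge” Marcus suggests deep learning systems currently lack.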
Does giving a machine learning capabilities similar to those of humans and animals bring artificial intelligence closer to the singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
-
- 51 min
- TechCrunch
- 2020
Artificial Intelligence and Disability
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The main point of discussion is the difference between the medical and societal models of disability: the medical lens tends to spur technologies focused on remedying disability in individuals, whereas the societal lens could spur technologies that lead to a more accessible world. Artificial intelligence and machine learning are labelled as inherently “normative,” since they are trained on data that comes from a biased society and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology which can actually put accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be forced to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
-
- 5 min
- Wired
- 2020
The Ethics of Rebooting the Dead
As digital means of preserving deceased loved ones become increasingly feasible, it is critical to consider the implications of technologies that aim to capture and replicate the personalities and traits of those who have passed. Not only might this change the natural process of grieving and healing, it may also have alarming consequences for the agency of the dead. For the corresponding Black Mirror episode discussed in the article, see the narratives “Martha and Ash Parts I and II.”
Should anyone be allowed to use digital resurrection technologies if they feel it may better help them cope? With all the data points that exist for internet users in this day and age, is it easier to create versions of deceased people which are uncannily similar to their real identities? What would be missing from this abstraction? How is a person’s identity kept uniform or recognizable if they are digitally resurrected?
-
- 5 min
- ZDNet
- 2020
AI Failure in Elections
In recent municipal elections in Brazil, the software and hardware of a machine learning system provided by Oracle failed to count votes properly. This ultimately delayed the results, as the AI had not been adequately calibrated beforehand.
Who was responsible for fully testing and calibrating this AI before it was used in an election? What more dire consequences could result from an AI’s failure to count votes properly? What are the implications of an American tech monopoly supplying this faulty technology for another country’s elections?
-
- 5 min
- MIT Tech Review
- 2020
AI Summarisation
Semantic Scholar is a new AI program which has been trained to read scientific papers and generate a unique one-sentence summary of each paper’s content. The AI was trained on a large dataset focused on processing and summarising natural language. The ultimate idea is to use the technology to help learning and synthesis happen more quickly, especially for figures such as politicians.
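For a sense of how this kind of tool works, here is a minimal sketch of abstractive summarisation using the Hugging Face transformers library. This uses a generic pretrained news-summarisation model, not the Semantic Scholar system itself; the model choice, sample abstract, and length settings are illustrative assumptions.

```python
# A minimal sketch of one-sentence abstractive summarisation.
# This uses a generic pretrained checkpoint via Hugging Face transformers,
# NOT the actual Semantic Scholar system; all settings here are illustrative.
from transformers import pipeline

# distilbart-cnn is a common general-purpose summarisation checkpoint.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

abstract = (
    "We present a method for training neural networks to produce "
    "single-sentence summaries of scientific papers, and show that "
    "it helps readers triage large volumes of literature quickly."
)

# Short min/max lengths nudge the model toward a one-sentence summary.
result = summarizer(abstract, max_length=30, min_length=5, do_sample=False)
print(result[0]["summary_text"])
```

A dedicated scientific-summarisation model would be trained on paper/summary pairs rather than news data, but the interface is essentially the same: long text in, one compressed sentence out.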
How might this technology cause people to become lazy readers? How does this technology, like many other digital technologies, shorten attention spans? How can it be ensured that algorithms like this do not leave out critical information?