Find narratives by ethical themes or by technologies.
Teaching Ethics in Data Science
- 27 min
- Cornell Tech
- 2019
Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend toward developing ethical sensibility in the field. He shares ideas for spreading lessons across courses, promoting dialogue, and making sure students genuinely analyze problems while learning to stand up for the right thing. Offers a case study in technological ethical sensibility through the questions raised by predictive policing algorithms.
Why is it important to cultivate ethical sensibility in data science? What could happen if we do not?
Algorithms in the Courtroom
- 28 min
- Cornell Tech
- 2019
Pre-trial risk assessment is part of an attempted answer to mass incarceration, but the data often answer a different question than the one we are asking: they record who seemed risky before incarceration, not how dangerous a person proves to be later. When such technologies and algorithms land in contexts of social power differentials, they can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a “moral anesthetic,” especially when the sampled data carry confounding variables that collectors ignore. Engineers designing technology do not always recognize the ethical stakes when making decisions that ought to be political. (A toy illustration of this label mismatch follows the discussion questions below.)
Would you rely on a risk-assessment algorithm to make life-changing decisions for another human? How can the transparency culture which Robinson describes be created? How can we make sure that political decisions stay political, and don’t end up being ultimately answered by engineers? Can “fairness” be defined by a machine?
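Below is a minimal, purely illustrative sketch of the label mismatch described above: a model trained on re-arrest records (what the data capture) rather than actual reoffense (the question we are trying to answer) learns policing intensity as if it were dangerousness. The data, rates, and groups are synthetic assumptions, not any real risk-assessment tool.

```python
# A toy sketch, assuming synthetic data: the proxy label "re-arrested"
# stands in for the real target "reoffends", and the gap between them
# is driven entirely by how heavily each group is policed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True (unobserved) behavior: identical 20% reoffense rate in both groups.
group = rng.integers(0, 2, n)      # 0 = lightly policed, 1 = heavily policed
reoffends = rng.random(n) < 0.20

# Observed proxy label: re-arrest depends on how closely each group is
# watched, so the heavily policed group's offenses are recorded far more often.
detection = np.where(group == 1, 0.9, 0.3)
rearrested = reoffends & (rng.random(n) < detection)

# Trained on the proxy, the model learns "group" as a risk factor even though
# underlying behavior is identical: the data answered a different question.
X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, rearrested)
print("risk score, lightly policed:", model.predict_proba([[0.0]])[0, 1])
print("risk score, heavily policed:", model.predict_proba([[1.0]])[0, 1])
```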
After 15 Years, The Pirate Bay Still Can’t Be Killed
- 10 min
- MEL Beta
- 2019
The continued existence of pirating websites such as The Pirate Bay demonstrates how digital technologies can be turned against institutions such as copyright, and keeps alive the idea of a completely free and open internet.
Why are online communities so hard to shut down? Can the internet ever be entirely free and open, as the founders of TPB discuss? What would be the consequences of this? Is the digital world decreasing the value of art forms such as recorded songs or films?
The Hidden Costs of Automated Thinking
- 10 min
- The New Yorker
- 2019
Great breakdown of the concerns that come with automating the world without understanding why it works. Lays out the principal worries about the “hidden layer” of artificial neural networks, and how the lack of human understanding of some AI decision-making makes these systems susceptible to manipulation. (A toy sketch of both ideas follows the discussion questions below.)
Should we still use technology that we do not fully understand? Might machines play a role in the demise of expertise? How can companies and institutions be held accountable, and made to “lift the curtain” on their algorithms?
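As a toy illustration of both concerns above, the sketch below builds a tiny two-layer network whose hidden activations carry no human-assigned meaning, then applies a small gradient-sign perturbation, in the spirit of adversarial examples, that shifts the output while barely changing the input. The weights and data are random assumptions for illustration, not the article's own example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny network: 4 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)  # hidden-layer activations: effective,
                                   # but with no human-readable meaning
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))

x = rng.normal(size=4)
print("original score: ", forward(x))

# Gradient-sign perturbation: nudge each input slightly in the direction
# that most increases the output. The input looks almost the same, yet the
# score moves; without understanding *why* the network decides as it does,
# this kind of manipulation is hard to anticipate or defend against.
hidden = np.tanh(x @ W1 + b1)
grad = ((1 - hidden**2) * W2.ravel()) @ W1.T  # d(logit)/dx via the chain rule
x_adv = x + 0.25 * np.sign(grad)
print("perturbed score:", forward(x_adv))
```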
Man Machine Rules
- 7 min
- Mad Scientist Laboratory
- 2018
The combination of the profit motive driving tech companies and the vague language of non-binding ethical agreements for coders means that there must be stronger regulation of the ethical deployment and use of technology. Argues that there must be clear demarcations between what is considered real and human versus fake and virtual, and that digital technologies should be regulated in a manner similar to other technologies, such as guns, cars, or nuclear weapons.
How do we ensure the ethical use of AI by digital tech giants? Should there be an equivalent of the Hippocratic oath for the development of digital technology? How would you imagine something like this being put in place? Are the man-machine rules laid out at the end of the article realistic?
AI Powers “Self-Healing” Technology
- 4 min
- Wall Street Journal
- 2019
The automation of technological repair and maintenance through AI allows systems to monitor and correct themselves more quickly than a human worker could. (A minimal sketch of this monitoring pattern follows the discussion questions below.)
What are the potential dangers of “self-healing” computers that take the human element out? How might using algorithms to analyze networks be better or worse than human oversight? What effect does repair automation have on the employees it replaces?
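As a rough sketch of the pattern the article describes, the loop below learns a latency baseline, flags a deviation, and triggers an automated remediation instead of paging a human. The metric stream, service name, and restart_service() hook are hypothetical stand-ins, not any real vendor's API.

```python
import statistics

def restart_service(name: str) -> None:
    # Hypothetical remediation hook: a real system might restart a
    # container, fail over to a replica, or roll back a deployment.
    print(f"[auto-heal] restarting {name}")

def monitor(latencies_ms, window=20, threshold=3.0):
    """'Heal' whenever latency drifts more than `threshold` standard
    deviations from the mean of the recent baseline window."""
    baseline = []
    for value in latencies_ms:
        if len(baseline) >= window:
            recent = baseline[-window:]
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent) or 1.0  # guard zero stdev
            if abs(value - mean) > threshold * stdev:
                restart_service("checkout-api")  # act in seconds,
                continue                         # not on a human's schedule
        baseline.append(value)  # anomalies are kept out of the baseline

# Normal traffic around 50 ms, then a sudden degradation to 400 ms.
monitor([50 + (i % 5) for i in range(30)] + [400])
```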