AI
Teaching Ethics in Data Science
- 27 min
- Cornell Tech
- 2019
Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend toward developing ethical sensibility in the field. He shares ideas for spreading lessons across courses, promoting dialogue, and making sure students genuinely analyze problems while learning to stand up for what is right. Offers a case study in technological ethical sensibility through the questions raised by predictive policing algorithms.
Why is it important to cultivate ethical sensibility in data science? What could happen if we do not?
-
Algorithms in the Courtroom
- 28 min
- Cornell Tech
- 2019
Pre-trial risk assessment is offered as a partial answer to mass incarceration, but the data often answers a different question than the one we are asking: it records, for instance, who was re-arrested before trial, not how dangerous a person actually is (a toy illustration of this mismatch follows the questions below). Technologies and algorithms deployed amid social power differentials can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a “moral anesthetic,” especially when the sampled data contains confounding variables that collectors ignore. Engineers designing technology do not always foresee the ethical questions raised by decisions that ought to be political.
Would you rely on a risk-assessment algorithm to make life-changing decisions for another human? How can the transparency culture which Robinson describes be created? How can we make sure that political decisions stay political, and don’t end up being ultimately answered by engineers? Can “fairness” be defined by a machine?
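A toy sketch of the question mismatch described above, using entirely invented features and simulated labels (nothing here comes from the talk): a model trained on re-arrest records can answer only the re-arrest question, whatever name we give its output.

```python
# Illustrative only: hypothetical features, simulated labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for 1,000 defendants: age and prior arrest count.
X = np.column_stack([
    rng.integers(18, 70, 1000),
    rng.poisson(1.5, 1000),
])

# The label we actually have is "was re-arrested before trial" -- a record
# shaped by policing patterns -- not the question we want answered
# ("how dangerous is this person?"). The model can only learn the proxy.
y_rearrested = (rng.random(1000) < 0.10 + 0.05 * X[:, 1]).astype(int)

model = LogisticRegression().fit(X, y_rearrested)

# Whatever we call this score, it estimates P(re-arrest), not P(danger).
print(model.predict_proba([[25, 3]])[0, 1])
```

Renaming the output column "risk" does not change what the model measured, which is one way numbers can act as a moral anesthetic.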
-
The Hidden Costs of Automated Thinking
- 10 min
- The New Yorker
- 2019
A great breakdown of the concerns that come with automating the world without understanding why it works. Lays out the principal worries about the “hidden layer” of artificial neural networks, and how the lack of human understanding of some AI decision-making leaves these machines susceptible to manipulation (a toy illustration follows the questions below).
Should we still use technology that we do not have a full understanding of? Might machines play a role in the demise of expertise? How can companies and institutions be held accountable for “lifting the curtain” behind their algorithms?
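A minimal, self-contained sketch (assumed setup, not from the article) of the kind of manipulation opaque models invite: in a high-dimensional input, a per-feature change too small to notice can still swing the decision, because every tiny nudge is aimed against the learned weights.

```python
# Toy linear "classifier": sign(w @ x) picks the class. The weights and
# input are random stand-ins for a trained model and a real example.
import numpy as np

rng = np.random.default_rng(1)
d = 10_000                              # think "pixels"
w = rng.normal(0, 1, d) / np.sqrt(d)    # toy learned weights
x = rng.normal(0, 1, d)                 # an ordinary input

def score(v):
    return float(w @ v)                 # sign decides the predicted class

eps = 0.05                              # per-feature change, tiny on its own
# Nudge every feature against the current prediction (an FGSM-style step;
# for a linear score, sign(w) is exactly the gradient direction).
x_adv = x - eps * np.sign(w) * np.sign(score(x))

print(score(x))       # original score
print(score(x_adv))   # shifted by ~eps * sum(|w|), enough to flip the sign
```

Each coordinate moves by only 0.05, yet the decision flips; without an account of why the model decides as it does, such failures are hard to anticipate or audit.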
-
Man Machine Rules
- 7 min
- Mad Scientist Laboratory
- 2018
The combination of the profit motive for tech companies and the vague language of non-binding ethical agreements for coders means that stronger regulation is needed for the ethical deployment and use of technology. Argues that there must be clear demarcations between what is considered real and human versus fake and virtual, and that digital technologies should be regulated much as other consequential technologies are, such as guns, cars, or nuclear weapons.
How do we ensure the ethical use of AI by digital tech giants? Should there be an equivalent of the Hippocratic Oath for the development of digital technology? How would you imagine something like this being put in place? Are the man-machine rules laid out at the end of the article realistic?
-
When AI Goes Wrong in Spatial Reasoning
- 5 min
- GIS Lounge
- 2019
GIS, a relatively new domain for computational analysis, can inherit the biases present in the open-data sources its algorithms are trained on; this case study focuses on power-line identification data that is centered on the Western world. The problem can be mitigated by approaching data collection with more intentionality: broadening the pool of collected geographic data, or adding artificial images that help the tool recognize a greater range of circumstances and thus become more accurate (a minimal sketch of this augmentation idea follows the questions below).
What happens when the source of the data itself (the dataset) is biased? Can the ideas presented in this article (namely the intentional broadening of the training-data pool and the inclusion of composite data) find application beyond GIS?
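A minimal sketch of the mitigation the piece describes, on invented data: pad the underrepresented region's examples with synthetic composites until the training pool is geographically balanced. The synthesize helper and all arrays are hypothetical stand-ins, not a real GIS pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend 64x64 "aerial images", tagged by region. The raw pool skews
# heavily Western, mirroring the power-line case study.
western   = [rng.random((64, 64)) for _ in range(900)]
elsewhere = [rng.random((64, 64)) for _ in range(100)]

def synthesize(base, n):
    """Make composite variants (here: noisy copies) of scarce real examples."""
    picks = rng.choice(len(base), size=n)
    return [np.clip(base[i] + rng.normal(0, 0.05, base[i].shape), 0, 1)
            for i in picks]

# Balance the pool: keep all real data, top up the underrepresented
# region with composites until both regions contribute equally.
elsewhere += synthesize(elsewhere, len(western) - len(elsewhere))
training_pool = western + elsewhere
print(len(western), len(elsewhere))   # 900 900
```

A real pipeline would generate composites with image transforms or simulation rather than noise, but the balancing logic is the same.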
-
Bot or Not?
- 6 min
- n/a
- 2018
Through a series of chat interactions and a truth-or-dare-style game, the user guesses whether they are chatting with a bot or a human.
Are you able to tell the difference between interacting with a bot and a human? How? What indicators did you rely on to make your decision?