Fairness and Non-discrimination (56)
Find narratives by ethical themes or by technologies.
-
- 27 min
- Cornell Tech
- 2019
Quantifying Workers
Podcast about worker quantification in areas such as hiring and productivity. It discusses why we should strive to make algorithms fair, and warns specifically that algorithms can find “proxy variables” that approximate characteristics such as race or gender even when the algorithm is supposedly controlled for those factors (a toy illustration follows the discussion questions below).
What are the dangers of having an algorithm involved in the hiring process? Is efficiency worth the cost in this scenario? Can humans ever be placed in a binary context?
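The “proxy variable” failure mode that the episode warns about can be seen in a toy simulation. The sketch below is not from the podcast: the data is synthetic and every name in it (group, skill, zip_code, hired) is invented for illustration. It trains a hiring classifier that never sees the protected attribute, yet its predictions still split along group lines because a correlated feature leaks that information.

```python
# Hypothetical sketch with synthetic data: a model trained WITHOUT a protected
# attribute can still discriminate by latching onto a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)             # protected attribute; never given to the model
skill = rng.normal(0, 1, n)               # genuinely job-relevant signal
zip_code = group + rng.normal(0, 0.3, n)  # proxy feature strongly correlated with group

# Biased historical labels: past hiring penalized group 1 regardless of skill.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, zip_code])    # protected attribute deliberately excluded
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The hire rates diverge by group even though `group` was never a feature,
# because `zip_code` lets the model reconstruct it.
```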
-
- 28 min
- Cornell Tech
- 2019
Algorithms in the Courtroom
Pre-trial risk assessment is offered as part of an answer to mass incarceration, but the data often answers a different question than the one being asked (it measures riskiness before incarceration, not how dangerous a person will prove to be later). Technologies and algorithms deployed amid social power differentials can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a “moral anesthetic,” especially when the sampled data contains confounding variables that its collectors ignore. Engineers designing technology do not always see the ethical questions in decisions that ought to be political.
Would you rely on a risk-assessment algorithm to make life-changing decisions for another human? How can the transparency culture which Robinson describes be created? How can we make sure that political decisions stay political, and don’t end up being ultimately answered by engineers? Can “fairness” be defined by a machine?
-
- 27 min
- Cornell Tech
- 2019
Teaching Ethics in Data Science
Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend toward cultivating ethical sensibility in the field. He shares ideas for spreading ethics lessons across courses, promoting dialogue, and teaching students to analyze problems carefully and stand up for what is right. The episode offers a case study in ethical sensibility through the questions raised by predictive policing algorithms.
Why is it important to cultivate ethical sensibility in data science? What could happen if we do not?
-
- 7 min
- TED
- 2017
Justice in the Age of Big Data
Predictive policing software such as PredPol may claim to be objective through mathematical, “colorblind” analyses of geographical crime areas, yet this supposed objectivity is not free of human bias and is in fact used as a justification for the further targeting of oppressed groups, such as poor communities or racial and ethnic minorities. Further, the balance between fairness and efficacy in the justice system must be considered, since algorithms tend more toward the latter than the former.
Should we leave policing to algorithms? Can any “perfect” algorithm for policing be created? How can police departments and software companies be held accountable for masquerading bias as the objectivity of an algorithm?
-
- 10 min
- Survival of the Best Fit
- 2018
Survival of the Best Fit
Explores bias in AI-driven hiring through a game in which you play the hiring manager.
How does it feel to be the one who inserted bias into the algorithm? What steps must be taken to ensure algorithms are trained less hastily?
-
- 5 min
- MIT Technology Review
- 2019
Police across the US are training crime-predicting AIs on falsified data
In New Orleans and other cities, the data used to train predictive crime algorithms was inconsistent and “dirty” to begin with, so the resulting predictions disproportionately targeted disadvantaged communities.
If the data which we train algorithms with is inherently biased, then can we truly ever get a “fair” algorithm? Can AI programs ever solve or remove human bias? What might happen if machines make important criminal justice decisions, such as sentence lengths?