Bias in the tech workplace, and technology's role in the betterment or destruction of race relations.
Technology and Race (21)
Find narratives by ethical themes or by technologies.
- 5 min
- MIT Technology Review
- 2019
Police across the US are training crime-predicting AIs on falsified data
In New Orleans and several other cities, the data used to train predictive crime algorithms was inconsistent and “dirty” to begin with, skewing the results to disproportionately target disadvantaged communities.
If the data we train algorithms on is inherently biased, can we ever truly get a “fair” algorithm? Can AI programs ever solve or remove human bias? What might happen if machines make important criminal justice decisions, such as sentence lengths?
-
- 10 min
- Survival of the Best Fit
- 2018
Survival of the Best Fit
Explores AI hiring bias through a game in which you play the hiring manager.
How does it feel to be the one who has introduced bias into the algorithm? What steps do you feel must be taken to ensure algorithms are trained less hastily?
-
- 7 min
- Slate
- 2021
Maine Now Has the Toughest Facial Recognition Restrictions in the U.S.
A new law passed unanimously in Maine heavily restricts the contexts in which facial recognition technology can be deployed, putting significant guardrails around its use by law enforcement. It also allows citizens to sue if they believe the technology has been misused. This is a unique step at a time when governments at every level, up to the federal government, have been reluctant to attach strict rules to the use of facial recognition technology, despite the clear bias seen in the wake of its use.
How can tech companies do even more to lobby for stricter facial recognition regulation? Is a moratorium on facial recognition use by all levels of government the best plan? Why or why not? Does creating “more diverse datasets” truly solve all the problems of bias with the technology?