But they can also cause harm. Because algorithms are, by nature, designed to work without human intervention (in fact, that's their entire purpose), a problem with an algorithm might not be spotted until multiple negative outcomes have already occurred.
Though there is evidence that algorithms, even when they show bias, are far superior to human decision-making, people often feel more comfortable knowing that a person, and not a computer, made a decision. For instance, I briefly worked on research for a new master's program. Because we had so many qualified candidates, many admission decisions were made by lottery, but given past negative responses from applicants who weren't chosen for other programs, this was not widely publicized. In my current job, where exams are scored by computer, we still do some quality control by hand to make sure nothing went wrong, and examinees and accreditors view this as essential, especially in high-stakes testing. So it seems likely that people perceive decisions made by algorithms as unfair, and decisions made by people as fair, even when they're not.
At the same time, the variables measured and selected for an algorithm may themselves carry bias, because at some point a person chose them. And an algorithm that perpetuates discrimination can create an endless feedback loop, a sort of self-fulfilling prophecy.
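To make that loop concrete, here's a minimal sketch in Python. Everything in it is invented for illustration: the lending scenario, the numbers, and the update rule. Two groups have the same true default rate, but the lender never observes outcomes for the applicants it denies and pessimistically books them as defaults when it re-estimates risk, so a group that starts out distrusted keeps looking riskier.

```python
# A minimal, deterministic sketch of a self-fulfilling feedback loop.
# The scenario, rates, and update rule are all hypothetical.

TRUE_DEFAULT_RATE = 0.10     # identical for both groups by construction
TARGET_DEFAULT_RATE = 0.20   # the lender approves freely below this estimate

# The only difference between the groups is the starting approval rate.
approval_rate = {"A": 0.90, "B": 0.50}

for round_num in range(1, 9):
    for group in ("A", "B"):
        a = approval_rate[group]
        # Approved applicants default at the true rate; denied applicants
        # are never observed, so they are (wrongly) counted as defaults.
        estimated_default = a * TRUE_DEFAULT_RATE + (1 - a) * 1.0
        # Scale next round's approvals to hit the target default rate.
        approval_rate[group] = min(1.0, TARGET_DEFAULT_RATE / estimated_default)
    print(f"round {round_num}: "
          f"A approved {approval_rate['A']:.0%}, "
          f"B approved {approval_rate['B']:.0%}")
```

Run it and group A climbs to 100% approval after one round while group B settles near 26%, even though the two groups are identical by construction. The disparity is manufactured entirely by the data the algorithm denies itself, which is exactly the loop described above.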
This may be the reason that New York City recently passed a bill to examine algorithmic bias in city government agencies:
The bill, which was signed by Mayor Bill de Blasio last week, will assign a task force to examine the way that New York City government agencies use algorithms to aid the judicial process.

What are your thoughts on this issue? Should we always follow the algorithm's data-driven decisions, even when those decisions are biased against a certain group? Or should we allow human intervention, even when that risks introducing more bias?
According to ProPublica, council member James Vacca sponsored the bill in response to ProPublica's 2016 investigation of racially biased algorithms in the American criminal justice system. That investigation revealed systemic bias in judicial risk-assessment programs, which predicted better future behavior for white defendants and so favored their release over that of black defendants.
Algorithmic source code is typically private, but concerns about bias have prompted calls for increased transparency. The ACLU has spoken out in support of the bill, describing access to the source code of algorithms used by public institutions as a fundamental step toward ensuring fairness in the criminal justice system.