Garbage In, Garbage Out.

by w3woody

Secret Algorithms Threaten the Rule of Law

These assessments are an extension of a trend toward actuarial prediction instruments for recidivism risk. They may seem scientific, an injection of computational rationality into a criminal justice system riddled with discrimination and inefficiency. However, they are troubling for several reasons: many are secretly computed; they deny due process and intelligible explanations to defendants; and they promote a crabbed and inhumane vision of the role of punishment in society.

The biggest problem I have with these algorithms is their penchant for hard-wiring the racism of the past.

If you use prior human judgements on previous cases to inform your algorithm, and those prior judgements were inherently racist (handing black people harsher sentences than white people, and sending them to jails that we know only encourage further criminal activity), then the algorithm becomes inherently racist.
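
Here's a minimal sketch in Python of how that happens, using entirely synthetic data (the offense rates, the judging threshold, and the size of the bias are all invented for illustration): both groups reoffend at identical rates by construction, but the historical "high risk" labels judged one group more harshly, and a model trained on those labels faithfully reproduces the harsher treatment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups whose actual behavior is identical by construction.
group = rng.integers(0, 2, size=n)
prior_offenses = rng.poisson(2.0, size=n)  # same distribution for both groups

# Historical "high risk" labels: the same record was judged more harshly
# for group 1. The racism lives in the labels, not in the behavior.
harshness = np.where(group == 1, 1.0, 0.0)
labeled_high_risk = prior_offenses + harshness + rng.normal(0, 0.5, n) > 2.5

# Train on those biased labels, group included as a feature.
X = np.column_stack([prior_offenses, group])
model = LogisticRegression().fit(X, labeled_high_risk)

# Identical criminal history, different group membership:
same_record = np.array([[2, 0], [2, 1]])
print(model.predict_proba(same_record)[:, 1])
# the group-1 defendant receives a markedly higher risk score
```

Nothing in the training step is "racist" in itself; the model is simply an accurate summary of a biased history.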

Even if those algorithms do not take race into account directly, given a large enough data set they can often find patterns that effectively predict race as a side effect of predicting recidivism rates.
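
And a sketch of that proxy effect, under the same synthetic assumptions (the "neighborhood" feature and its 85% correlation with race are made up for illustration): the race column is never shown to the model, yet its risk scores still split along racial lines, and race itself can be recovered from the supposedly race-blind features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Race is never given to the model, but a correlated feature (a
# hypothetical "neighborhood" code, standing in for real-world
# proxies like zip code) leaks it back in.
race = rng.integers(0, 2, size=n)
neighborhood = np.where(rng.random(n) < 0.85, race, 1 - race)
prior_offenses = rng.poisson(2.0, size=n)  # identical across races

# Historical labels carry the old bias against race == 1.
bias = np.where(race == 1, 1.0, 0.0)
labels = prior_offenses + bias + rng.normal(0, 0.5, n) > 2.5

# Train WITHOUT the race column.
X = np.column_stack([prior_offenses, neighborhood])
model = LogisticRegression().fit(X, labels)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, race 0:", scores[race == 0].mean())
print("mean risk score, race 1:", scores[race == 1].mean())

# Race itself is recoverable from the "race-blind" features.
race_model = LogisticRegression().fit(X, race)
print("accuracy predicting race:", (race_model.predict(X) == race).mean())
```

Dropping the race column is not the same as removing race from the model; the correlations carry it through anyway.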

I was concerned by a proposal that AI engines be required to provide justification for their decisions, which is nearly impossible with trained neural networks. (And I suspect some of the companies providing algorithms to compute recidivism risk essentially use neural networks trained on prior case data to come up with their scores.)

But in situations like this, where these algorithms are being used to calculate sentences in criminal cases, such justification seems mandatory to me.

Requiring justification may foreclose training neural networks on prior case data to predict future results, but isn't that what we want in such a situation? To break from our racist sentencing past and provide a more uniform judgement of criminals?