Coin Toss: Are Risk-Measuring Tools Accurate Enough?
By RJ Vogt | February 23, 2020, 8:02 PM EST

In July, the Pennsylvania Sentencing Commission will implement a new risk assessment tool to guide judges in making sentencing determinations. It will rate the likelihood of someone reoffending using factors like their criminal history, gender and age.
And it will be right approximately 66% of the time.
According to Mark Bergstrom, the commission's executive director, that's a pretty good rating for criminal justice risk assessment algorithms, which generally achieve accuracy rates between 60% and 70%. For reference, flipping a coin to predict the future would be right about 50% of the time.
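To make those figures concrete, here is a minimal arithmetic sketch in Python. The 1,000-case count is hypothetical and chosen only for illustration; the accuracy rates are the ones reported above.

cases = 1000
tool_accuracy = 0.66   # the Pennsylvania tool's reported accuracy
coin_accuracy = 0.50   # baseline: guessing at random

tool_correct = round(tool_accuracy * cases)   # 660 predictions right
coin_correct = round(coin_accuracy * cases)   # 500 right by pure chance
tool_wrong = cases - tool_correct             # 340 people misjudged

print(tool_correct, coin_correct, tool_wrong)  # 660 500 340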
Speaking at a recent panel of risk assessment experts, Bergstrom noted that risk assessment tools, while prone to error, can still be a major improvement over a status quo in which individual judges' biases already create racial disparities and uneven sentencing.
But his remarks were met with incredulity by Alicia L. Carriquiry, a statistics professor at Iowa State University.
"In medical literature or in any other area where you have to prescribe something for human use," she said, "Sixty-six percent would be insane. There's no drug that FDA would approve for use with that rating. I understand it's the best we can do at this moment, but that's just not good enough."
Her point underscored widespread concerns about the rising use of risk algorithms in determining whether, and for how long, someone should be incarcerated. A range of groups, including the American Civil Liberties Union and the American Bail Coalition, have opposed their use as alternatives to cash bail, and similar qualms have been raised about their use in sentencing.
Despite the controversy swirling around these tools, representatives from Ohio and Indiana joined Bergstrom at the American Bar Association's midyear conference in Austin, Texas, to defend their own states' decisions to use them in various criminal justice settings.
One of the primary reasons for adopting a risk assessment tool, Bergstrom pointed out, is the fact that the traditional standard of relying on judicial deference has led to vast inequities in the criminal justice system.
"Young black males often receive harsher sentences, all other things being equal," he said. "So we know that within the system itself, there are problems ... we are trying to do something to improve on the status quo."
Pennsylvania's risk assessment tool, dubbed the "RAT" by critics, attempts to reduce racial disparities with a unique protection against the errors that will inevitably come from the algorithm's calculations.
Instead of recommending more or less incarceration, it will simply flag low- or high-risk cases as ones where judges should obtain more information about the defendant before sentencing.
The result, Bergstrom added, will hopefully be a reduction in unnecessary incarceration for low-risk people and an increase in rehabilitative services for people deemed likely to reoffend.
"It's really easy to conflate risk with punishment," he said, "Our takeaway is no, we should be conflating risk with information. We should be identifying cases that are high and low risk and try to encourage more information in those cases."
But Carriquiry did some quick calculations to demonstrate the inherent problems with classifying people as high risk or low risk depending on factors like criminal justice history and juvenile records — both of which can serve as statistical proxies for race.
Using Bergstrom's own data, she calculated that white offenders were incorrectly classified as high risk approximately 12% of the time, while black offenders were incorrectly classified approximately 15% of the time. In other words, for every four white people wrongly flagged as high risk, five black people will be wrongly flagged too.
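To see where that four-to-five figure comes from, here is a minimal sketch. Only the 12% and 15% error rates come from the discussion above; the equal group size of 100 is a hypothetical assumption for illustration.

white_fpr = 0.12   # share of white offenders wrongly flagged high risk
black_fpr = 0.15   # share of black offenders wrongly flagged high risk

group = 100                        # hypothetical group size
white_errors = white_fpr * group   # 12 people wrongly flagged
black_errors = black_fpr * group   # 15 people wrongly flagged

# 12:15 reduces to 4:5, i.e. five black people misclassified
# for every four white people misclassified.
print(white_errors, black_errors, black_errors / white_errors)  # 12.0 15.0 1.25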
"That may seem like a small amount, but it's a lot of people," she added.
Christina Klineman, a county judge in Indiana, said her state's recent adoption of pretrial risk assessment has sought to mitigate disparities by preserving judicial deference. It also steers clear of gauging someone's likelihood of rearrest, focusing instead on the less charged question of whether someone will return to court.
The judge noted that "flake risk" — people forgetting to show up, or not being able to — is far more common than the "flight risk" of someone evading justice.
"We now have a more robust pretrial services department, and it's simple stuff," Klineman said. "They just need reminder calls, sometimes in person."
Guy Reece, a former county judge in Ohio, said risk assessments were also helpful for his county's probation department, which was able to cut caseloads in half by identifying high-risk cases to focus on and by reducing parole requirements in particularly low-risk cases.
As a result, he said parole officers are able to spend much more time with each of their parolees.
"It used to be, 'go pee in a bottle,' and then you'd look for violations and proceed from there," he said. "Now it's more of an opportunity to sit, assess and ensure that those requiring urgent programming receive what they need so they don't reoffend."
Despite the success of some risk assessments, support for their implementation may be flagging.
Even as Pennsylvania prepares to launch its new sentencing tool in July, the Ohio Supreme Court decided last month to ditch language in proposed reforms that would have required all state judges to be provided with pretrial risk assessments.
And on Feb. 7, the Pretrial Justice Institute — a longtime supporter of pretrial tools — reversed its position in a statement that decried inherent racial bias in their design.
As Carriquiry noted, pursuing a goal of less racial bias by using a system that is known to perpetuate racial bias seems nonsensical.
"The new systems are reinforcing the old biases," she said.
--Editing by Adam LoBelia.
Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.