Some researchers are using the same scientific methods designed to test new drugs to evaluate access to justice, but they have yet to overcome objections from activists who say that randomizing "treatments" like legal aid and bail is unethical. (Adobe Stock)
Some defendants appearing in court in Dane County, Wisconsin, are given a risk-assessment score to reduce bias in decisions about bail and pretrial release. Other defendants in the same court — and even before the same judge — are not.
The disparate treatment isn't because of discrimination or a lack of resources. It's because the defendants are members of two different groups in a scientific study.
Randomized controlled trials like this one, designed to evaluate the risk-assessment tool, are the "gold standard" for research in most social sciences and in medicine, where they are currently being used to test COVID-19 vaccines, according to researchers.
But similar studies in the field of access to justice have been few and far between, they say.
"The legal field is where the medical field was 100 years ago — we decide what works based on the opinion of experts, without really looking at the data to understand empirically: what works?" says Erika Rickard, project director of civil legal system modernization at The Pew Charitable Trusts.
It's a failing researchers like Rickard and others are working hard to change.
But some activists oppose the studies, calling it unethical to randomize assistance like legal aid and charitable bail. Researchers have also struggled to get buy-in from judges and attorneys who see randomization as inconsistent with the deliberateness of the law.
Despite that resistance, academics insist these studies are the only way to know which of the many reforms being made to the criminal justice system actually work.
"There are so many factors, especially in access to justice, that could be explaining what we see in the real world," says Christopher Griffin Jr., director of empirical and policy research at the University of Arizona James E. Rogers College of Law, "and if we want to know what current approaches and what new ideas really move the needle, a randomized controlled trial is the best we've got."
How They're Done
Randomized controlled trials, or RCTs, allow scientists to test the impact of interventions like free legal services by randomly dividing test subjects into two groups — a treatment or experimental group and a control group. The treatment group receives the intervention while the control group doesn't. Researchers then follow the test subjects over time to measure outcomes like employment and recidivism.
By randomly choosing which test subjects receive the "treatment" and which don't, RCTs balance out factors like gender, age and race that might otherwise affect how someone fares in court, isolating the impact of the intervention itself.
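For readers curious about the mechanics, the sketch below shows, in Python, how a pool of study subjects might be randomly split into treatment and control groups. It is illustrative only; the participant IDs, the even split and the function names are hypothetical and are not drawn from the Dane County study or any other trial described here.

```python
import random

def randomize(participant_ids, seed=42):
    """Randomly split a pool of participants into treatment and control groups."""
    rng = random.Random(seed)        # fixed seed keeps the assignment reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)            # random order breaks any link to age, race, etc.
    midpoint = len(shuffled) // 2
    treatment = shuffled[:midpoint]  # e.g., defendants whose judges see a risk score
    control = shuffled[midpoint:]    # e.g., defendants whose judges do not
    return treatment, control

# Hypothetical pool of 100 case numbers, purely for illustration.
treatment_group, control_group = randomize(range(1, 101))
```

Because chance alone decides who lands in each group, any later difference in outcomes between the groups can be attributed to the intervention rather than to who the subjects happened to be.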
In the Dane County study, Harvard Law School's Access to Justice Lab is evaluating the efficacy of a public safety assessment tool, which gives defendants a score to inform judges' pretrial bail decisions. Defendants are randomly selected to either be given the score before their initial court appearance or not, according to Jim Greiner, the lab's faculty director.
Researchers then track defendants for two years, measuring outcomes such as days spent incarcerated, failures to appear and new criminal activity to gauge the impact those scores have on defendants, Greiner says.
The Dane County study looks only at the pretrial risk-assessment scores, not at other interventions like legal aid or charitable bail, and the defendants involved were made aware they were participating in the study, according to Colleen Clark-Bernhardt, the manager of policy and practice innovation in Dane County and the project manager for the RCT.
The study, which began in 2017, won't be concluded until 2022, though randomization was finished in 2019. An interim report issued in September found that the scores did affect bail decisions and that those decisions were more consistent for the treatment group than for the control group.
Studies like these are necessary to see "what evidence are we going to have to rationally decarcerate," Greiner says. "What are the costs and benefits, what evidence are we going to have to try to rehabilitate people?"
Getting Buy-In
RCTs face unique challenges in the courtroom that they often don't face in the laboratory, researchers say, one of which is convincing those in the legal system they're necessary.
It can be difficult to win over judges and lawyers used to making considered, deliberate decisions based on individual circumstances and their own training, something that can seem "inconsistent" with randomization, according to Griffin.
Studies also don't always produce the results the legal system wants, points out Rickard, who is currently working on RCTs of online dispute resolution tools in local courts. For example, judges and attorneys may not be excited about research that shows the level of legal services a litigant actually needs to successfully navigate a case.
So Griffin has worked hard to get buy-in for his study of a new model of legal aid for victims of domestic violence. The RCT he's planning for early 2021 will compare outcomes for victims randomly assigned to receive only information from lay advocates with outcomes for victims who also get legal advice from nonlawyers certified to provide it, he says.
Conducting the study required getting approval from the court for the nonlawyer practice model as well as from his nonprofit partner organization. That means building personal relationships with lawyers, judges and legal services providers, and educating them about what the RCTs are, how they work and why they're important, according to Griffin.
For his own studies in the Pennsylvania state prison system, Bret Bucklen not only gets the cooperation of prison staff and inmates, but he also encourages prison staff to submit their own ideas for programs they'd like to see tested.
Bucklen, who is the director of the Bureau of Planning, Research and Statistics for the Pennsylvania Department of Corrections, has conducted several RCTs in that state's prisons, including one that found a slightly lower recidivism rate for parolees who were relocated to other areas after release than for those returned to their home communities.
"As challenging as the logistical obstacles are, still the biggest challenge is just getting past that 'Why should we do it in the first place?'" he says.
Harvard's researchers, meanwhile, "spent considerable time with judges, court commissioners, the district attorney, defense, law enforcement and other critical stakeholders prior to the study," according to Clark-Bernhardt.
Without the support of all those people, the study would have failed, Clark-Bernhardt says.
"It's a different type of work than if you take a data set and crunch the numbers," according to Griffin. "It's a lot of working with people so that we are ensuring that both the thing we're testing and the way we're testing it are keeping in mind the rights and responsibilities of everybody in the system."
Ethical Objections
Conducting these studies in the context of access to justice, though, has serious ethical issues, according to some activists.
"While we recognize their value, I think whenever [RCTs] are put in place for anything around justice-impacted research, specifically bail, you really have to consider the ethical implications of withholding bail for folks," says Tara Watford, the chief data officer for The Bail Project. "We always worry about the folks that would be in that control group, because we know the lasting negative effects that pretrial detention can have."
RCT researchers readily admit their field has been plagued by objections from those who find it unethical to randomize people away from interventions like diversion programs and charitable bail.
"That's my understanding of medical trials, that's my understanding of any other kind of human subject trials, is that as soon as you know that something will be helpful, it is unethical to continue keeping it from people," says Atara Rich-Shea, a regional organizer for the Community Justice Exchange, which houses the National Bail Fund Network.
But those objections are based on the assumption that we already know these interventions work, Bucklen points out.
"If we already knew that the program works, then it would indeed be unethical to hold people back from that program at random," he says. "But if we already knew it works, there would be no need to evaluate it in the first place."
Activists, however, argue that we don't need to study reforms like doing away with pretrial detention or better funding public defenders to know that the absence of those interventions is harmful.
"Abundant research already exists showing that pretrial incarceration causes harm to detained people and their loved ones," says Katy Naples-Mitchell, a staff attorney at the Charles Hamilton Houston Institute for Race and Justice at Harvard Law School. "Denying a control group of people pretrial release, something we know will help them, and subjecting them to something we know hurts them in the interests of research is ethically insupportable."
Bucklen emphasizes that the inmates involved in his studies always consent to participating. And randomizing people away from potentially life-saving treatments is exactly what we do in medical studies like those of COVID-19 vaccines, according to Greiner.
"The seriousness of the consequence is a reason to study rigorously, not a reason to avoid studying rigorously," he says.
And RCTs can and should be designed to alleviate these ethical concerns, according to Watford.
For starters, a study's subject pool should be large enough that there wouldn't be resources to help everyone in it even without the study, she says. That way, the RCT doesn't end up taking bail money or another intervention away from people who might otherwise get it.
Researchers should also ensure randomization happens before staff interact with study subjects, according to Watford, since meeting with clients about the help they need and then seeing them randomized away from that help can be difficult for subjects and staff.
Finally, rather than withholding help from a control group, researchers could compare a treatment group given the intervention now to data from a similar control group that didn't get the intervention in the past, something Watford calls "natural experiments."
But even these adjustments can't solve the RCTs' ethical issues, say other activists like Naples-Mitchell and Rich-Shea, who prefer that the money spent on studying these interventions be earmarked for funding the reforms themselves.
"Instead of funding a study to determine whether — or by how much — access to counsel at arraignment improves system functioning and outcomes for the accused," Naples-Mitchell asks, "why not just directly fund public defense?"
Whether RCTs can overcome doubts like these and catch on as a legitimate method for studying access to justice remains an open question. Beyond those objections, researchers still need to find a way to scale up interventions that have proved successful in trials, Bucklen says, something he admits has been a challenge so far.
But barriers to conducting RCTs in this area, both on ethical grounds and out of resistance to the idea of randomizing the law, are beginning to come down, Griffin says, "because stakeholders and policymakers in the legal sphere want to become more evidence-based in the work that they do."
"And the more we can uncover fruitful sources for innovation and really tell ourselves things that we wouldn't know without an RCT," he adds, "the more demand there will be for this type of evaluation."
Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.
--Editing by Katherine Rautenberg.
Clarification: This article has been updated to clarify that the Dane County study is evaluating only pretrial risk assessment scores and not legal aid or charitable bail, and that participating defendants were made aware of the study.