When Algorithms Control Justice, Who Can Check The Math?
By RJ Vogt | April 21, 2019, 8:02 PM EDT

In New York City, more than 30 different automated decision systems help government agencies analyze DNA, assess inmates' risk of recidivism, determine where police should look for crimes and more.
The rise of the machine extends into the civil justice system as well: According to the AI Now Institute, a research group focused on the social implications of artificial intelligence, New York also relies on automation to evaluate potential child neglect, detect the abuse of public benefits and determine Medicaid eligibility, among other tasks.
As algorithms like those used in the Big Apple become increasingly prevalent across the country, journalists, researchers and advocacy groups have raised concerns that the supposedly objective calculations might share some of the same biases as the subjective humans who created them.
The problem is investigating those concerns. While lawsuits seeking access to various decision-making technologies have allowed a few individuals to peer into the so-called black box that connected them to crimes or slashed their benefit checks, most systems remain opaque to the general population.
According to a forthcoming Fordham Law Review article by Drexel University law professor Hannah Bloch-Wehba, the uneven success and application of such lawsuits demonstrate that the best way to ensure transparency is legislation, not litigation.
Her paper, "Access to Algorithms," calls on transparency advocates to consider not just affected individuals but "the affected public."
"The people who are directly affected by these kinds of tools — whether it's risk assessment or Medicaid or predictive policing — are not always going to be in a position to seek access to information about how they function," she told Law360, citing barriers like legal representation and time constraints.
"Even if individuals do decide to challenge, the tools themselves are being used in ways that affect broad populations of people," Bloch-Wehba added.
She speaks from firsthand experience, having represented ProPublica in its effort to access the source code for a New York City tool used to analyze DNA evidence from mixed samples. That effort, which arose in the case of a man who ultimately pleaded guilty to unlawful possession of handguns, was unsuccessful, but a judge did allow the defendant's legal expert to review the code.
After his review, the defendant's expert said the correctness of the software "should be seriously questioned." Prosecutors withdrew the DNA evidence against the defendant days before hearings over its admissibility were set to begin. The underlying software was phased out months later, and at least one conviction that relied on its findings has since been overturned.
According to Bloch-Wehba, that case was one of the rare ones in which the underlying technology was city property — increasingly, governments are sourcing algorithmic decision systems from outside vendors, many of whom require nondisclosure agreements and consider their systems trade secrets.
"It basically puts the government to an impossible choice," she told Law360. "They can't reveal an algorithm they already, through contract, promised to keep secret."
One such example is the Public Safety Assessment, developed by the Laura and John Arnold Foundation to help with pretrial decision-making. In at least two jurisdictions, Utah and Iowa, the foundation has required nondisclosure language in its contracts.
In 2018, however, the foundation — which was recently restructured into an LLC called Arnold Ventures — unveiled a new website that lists factors used to make decisions, as well as a scoring metric explaining how those factors are weighted.
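To make the idea of "weighted factors" concrete, here is a minimal sketch of one common form such scoring metrics take: each factor contributes a fixed number of points, and the total maps to a coarse risk category. Everything in it is hypothetical; the factor names, point values and cutoffs are invented for illustration and are not the PSA's published scoring.

```python
# Illustrative sketch of a points-based pretrial risk score.
# All factor names, point values and cutoffs are hypothetical;
# they are not the PSA's published scoring.

HYPOTHETICAL_WEIGHTS = {
    "prior_failure_to_appear": 2,
    "prior_violent_conviction": 3,
    "pending_charge_at_arrest": 1,
}

def risk_points(defendant: dict) -> int:
    """Sum the points for every factor that applies to the defendant."""
    return sum(
        points
        for factor, points in HYPOTHETICAL_WEIGHTS.items()
        if defendant.get(factor, False)
    )

def risk_category(points: int) -> str:
    """Map a raw point total to a coarse category using fixed cutoffs."""
    if points <= 1:
        return "low"
    if points <= 3:
        return "moderate"
    return "high"

if __name__ == "__main__":
    example = {"prior_failure_to_appear": True, "pending_charge_at_arrest": True}
    total = risk_points(example)
    print(total, risk_category(total))  # prints: 3 moderate
```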
"The PSA has always been a free instrument, and the details of the algorithm are available for viewing through a variety of public platforms," said Arnold Ventures spokesperson David Hebert.
Another privately developed algorithm is Northpointe Institute for Public Management's COMPAS, a proprietary risk assessment tool used in sentencing proceedings to determine a defendant's risk of recidivism.
Eric Loomis, a Wisconsin defendant whose sentence was informed in part by the tool, raised a due process challenge on the grounds that its proprietary nature made it impossible to test for bias.
In July 2016, however, the Wisconsin Supreme Court ruled that COMPAS could be considered as one factor among many in sentencing, and the tool remains in use across the country despite repeated allegations of racial bias in its outcomes.
To rebut the criticism, Northpointe released its own technical review of the tool's statistical methodology, concluding that it showed no racial bias and that studies suggesting otherwise were in error.
But the now well-known case helped spur a broader push for algorithmic transparency.
In 2018, New York became the first city in the country to convene a task force aimed at establishing procedures that would allow impacted individuals to request information on algorithmic decisions. The group, set to hold its first public forum on April 30, also plans to develop a process for publicly disclosing information about agency systems.
Brittny Saunders, co-chair of the Automated Decision Systems Task Force, told Law360 that its members are thinking through questions about nondisclosure agreements with vendors, seeking to balance transparency with privacy and security concerns.
"The equity questions are huge," she said. "The privacy questions are huge. The transparency questions are huge, and we're really invested in getting it right and being thoughtful."
Other task force members, however, have expressed fears that the city's focus on privacy and security is actually preventing progress toward transparency.
In a public letter to the City Council on April 4, Julia Stoyanovich, a New York University data science professor, and Solon Barocas, a Cornell University information science professor, lambasted the city's failure to identify any automated decision systems currently in use.
"The city has cited concerns with privacy and security in response to our requests, but these cannot be used as blanket reasons to stand in the way of government transparency," they wrote.
Speaking for the city, Saunders told Law360 that determining the scope of the task force's purview has required "a lot of time thinking through this foundational question of what is and isn't an automated decision system."
Spurred by New York's pioneering effort, legislators in Washington and Massachusetts are considering similar laws to establish guidelines for the procurement and use of automated decision systems. No such legislation has been proposed at the federal level, but 2018's First Step Act required the development of a risk and needs assessment system to help determine federal inmates' path out of prison.
Just four months after the bill passed, the U.S. Department of Justice has already come under fire for considering preexisting algorithms as substitutes for what was supposed to be a new system.
Under the First Step Act, the DOJ was also directed to appoint a nonpartisan nonprofit with expertise in the field to oversee the new system's implementation. It selected the Hudson Institute, which describes itself as nonpartisan but which the ACLU and The Leadership Conference on Civil and Human Rights have accused of being a "politically conservative" think tank.
According to Sakira Cook of The Leadership Conference, the 2018 bill's requirement of an outside, nonpartisan group was a crucial factor in garnering her organization's support.
"It appears as if the DOJ wants to set up an independent review committee that will validate what they've already internally decided," she said.
A representative for the Hudson Institute declined to comment on questions about the expected new algorithm. A representative for the DOJ did not return a request for comment.
Have a story idea for Access to Justice? Reach us at accesstojustice@law360.com.
--Editing by Katherine Rautenberg.