The use of risk assessment (RA) tools has become a key component of the criminal justice system in the United States. Much of the existing scholarship concentrates on normative and technical aspects of RAs, or on recommendations for their improvement; there has been little empirical work on how courts and other criminal justice actors perceive and use these tools on the ground. In this study, we provide an in-depth picture of how Ohio’s Courts of Common Pleas think about and use algorithmic risk assessments, and we compare Ohio practices with best practices identified in the literature. We surveyed Ohio Courts of Common Pleas judges, probation officers, and court administrators about their views on and use of algorithmic risk assessment tools. We further conducted interviews with judges and a diverse array of stakeholders, including victims’ rights, civil liberties, and civil rights groups, as well as public defenders and county prosecutors. The findings show that judges largely see risk assessment tools as essential to their decision-making, with most trusting the tools to improve risk-related judgments. Although judges generally view the tools as no more biased than human decision-makers, about 60% still consider their own judgment superior to the tools’ assessments. Our findings on Ohio’s use of risk assessment tools are mixed: judges agree the tools should guide, not dictate, decisions, which aligns with best practices, but many lack sufficient training, a crucial recommendation in the literature. We conclude with broad recommendations for enhancing the use of risk assessment tools in Ohio’s judicial system.

Faced with growing public and legal pressure, some businesses are taking steps to utilize AI in a more socially responsible way. They refer to these efforts as “responsible AI management” (RAIM). This report presents the results of a survey-based study, conducted in early 2023, of RAIM practices at businesses that develop and use AI.

The rush to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing these technologies implement AI “ethically.”

But what, exactly, does that mean?

The Final Report in the Business Data Ethics project examines:

- The threats that corporate use of advanced analytics creates for individuals and the broader society (Part III)
- What “data ethics” means to the companies that practice it (Part IV)
- Why companies pursue data ethics when the law does not require them to do so (Part V)
- The substantive principles that companies use to draw the line between ethical and unethical uses of advanced analytics (Part VI)
- The management processes (Part VII) and technologies (Part VIII) that companies use to achieve these substantive goals
- Corporate projects that use advanced analytics for the social good (Part IX)

This first paper in the Corporate Data Ethics series shares observations and quotes derived from semi-structured interviews with practitioners managing the ethics of big data analytics applications.

A comprehensive analysis of Ohio’s innovative Data Protection Act, a law that seeks to incentivize better cybersecurity among companies doing business in Ohio. The Moritz College of Law’s Program on Data and Governance produced this report in collaboration with the Cleveland-Marshall College of Law’s Center for Cybersecurity and Privacy Protection.

This report was prepared for the International Association of Privacy Professionals.

Related Publications authored or co-authored by Affiliated Faculty

Peter Shane