
Faced with growing public and legal pressure, some businesses are taking steps to use AI more responsibly. They refer to these efforts as “responsible AI management” (RAIM). This report presents the results of a survey-based study, conducted in early 2023, of RAIM practices at businesses that develop and use AI.

The rush to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing these technologies implement AI “ethically.”

But what, exactly, does that mean?

The Final Report in the Business Data Ethics project examines:

- The threats that corporate use of advanced analytics creates for individuals and the broader society (Part III);
- What “data ethics” means to the companies that practice it (Part IV);
- Why companies pursue data ethics when the law does not require them to do so (Part V);
- The substantive principles that companies use to draw the line between ethical and unethical uses of advanced analytics (Part VI);
- The management processes (Part VII) and technologies (Part VIII) that companies use to achieve these substantive goals; and
- Corporate projects that use advanced analytics for the social good (Part IX).

This first paper in the Corporate Data Ethics series shares observations and quotes derived from semi-structured interviews with practitioners managing the ethics of big data analytics applications.

A comprehensive analysis of Ohio’s innovative Data Protection Act, a law that seeks to incentivize better cybersecurity among companies doing business in Ohio. The Moritz College of Law’s Program on Data and Governance produced this report in collaboration with the Cleveland-Marshall College of Law Center for Cybersecurity and Privacy Protection.

This report was prepared for the International Association of Privacy Professionals.

Related Publications authored or co-authored by Affiliated Faculty

Peter Shane