Press Release
Framework Developed by Johns Hopkins APL, Think Tank Assesses AI Impact
The Johns Hopkins Applied Physics Laboratory (APL), in collaboration with the nonprofit, nonpartisan Special Competitive Studies Project (SCSP), published a framework to guide regulators as they determine whether artificial intelligence (AI) development and uses are highly consequential to society and require their attention.
The framework recognizes that AI can help improve health, education and productivity, as well as provide the data required to solve ongoing global problems. However, AI also has the potential to spread disinformation, promote discrimination and be deployed in cyberattacks.
“In the course of conducting this analysis, we reviewed and leveraged existing domestic and international frameworks that apply risk-based approaches to classify and advance trustworthy AI,” explained Stephanie Tolbert, a national security analyst at APL and the Laboratory’s study lead. “It became clear that this kind of framework is needed by government and private sector entities who are trying to anticipate outcomes of AI-enabled systems, but who have very little help to undertake a meaningful evaluation.
“Our hope is that the framework leads to a registry of use cases that can inform industry and be shared with the public to highlight how cases are evaluated.”
The APL and SCSP team crafted the framework using feedback from academics, policy experts, regulators, and industry and civil society leaders. It focuses on a set of 10 corresponding categories of harms and benefits. Users identify and assess each AI harm and benefit, and evaluate their magnitudes by weighing several factors, including scope, probability, frequency and duration. Using these assessments, a framework user can determine whether the AI system is “highly consequential” and therefore warrants regulatory attention.
The framework provides users with a standardized yet flexible and dynamic approach to identifying whether AI systems will be beneficial or harmful. It does not prescribe regulatory action. Instead, it serves as a starting point, providing policymakers with an adaptable guide for deliberations over appropriate guardrails on AI use.
“We cannot, nor should we, regulate every AI use case,” said Rama G. Elluru, SCSP senior director for Society and Intellectual Property. “We need to balance regulation with innovation and look at the entire context of societal impacts. That requires tools that help identify AI uses that merit regulatory attention. Those regulatory efforts could include incentivizing AI use, mitigating harms or even banning AI use. While this framework does not speak to the regulatory action that should be taken for AI use cases, it helps regulators identify uses or classes of uses that require their focus.”
Researching the framework was a collaborative effort, said Tolbert, who noted contributions from experts across APL, including data scientist Tyler Ashoff; cognitive engineer John Gersh; Erin Hahn, the managing executive in APL’s National Security Analysis Department; and national security analysts Mark Hodgins, Michael Moskowitz and Rodney Yerger.
SCSP’s mandate is to strengthen America’s long-term competitiveness as AI and other emerging technologies reshape the nation’s national security, economy and society.