Press Release
Johns Hopkins APL Joins National AI Safety Consortium
The Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, has joined the newly formed U.S. AI Safety Institute Consortium (AISIC), launched by the U.S. Department of Commerce to unify efforts from industry, academia and government to establish standards and practices for artificial intelligence (AI) safety and assurance. The Johns Hopkins Institute for Assured Autonomy (IAA), run jointly by APL and the Johns Hopkins Whiting School of Engineering, is leading Johns Hopkins University’s involvement.
This first-ever consortium dedicated to AI safety supports national efforts to bolster AI safety standards and protect innovation.
“Joining AISIC aligns perfectly with our mission to ensure AI technologies are developed and deployed with the highest standards of effectiveness, safety and reliability,” said Jane Pinelis, chief AI engineer in APL’s Applied Information Sciences Branch of the Asymmetric Operations Sector. “We are excited to contribute our expertise in AI assurance and risk management to this vital national effort.”
The consortium, housed under the National Institute of Standards and Technology (NIST), includes more than 200 members who will focus on addressing key challenges in AI safety, including ethical considerations, bias mitigation and the development of robust testing frameworks. Members will collaborate on research initiatives, share best practices and shape a unified framework for AI safety standards.
APL will leverage its extensive experience in research, development and deployment of assured autonomy, a field that ensures autonomous systems perform safely and effectively, even in unpredictable environments. APL’s affiliation with the IAA aligns closely with the objectives of AISIC by focusing on the research and development of autonomous systems that are reliable, safe and secure, said Bart Paulhamus, chief of APL’s Intelligent Systems Center, which oversees APL’s IAA activities.
“This partnership ensures that APL’s cutting-edge research and advancements in assured autonomy and assured AI contribute directly to AISIC’s mission to establish and promote AI safety standards,” he said. “This collaboration not only amplifies the Lab’s capabilities in AI assurance but also has the potential to actively shape the future of how autonomous technologies are developed and deployed in a manner that is both innovative and in alignment with ethical standards.”
The Laboratory’s contributions to the consortium will be instrumental in guiding policy decisions and shaping the future of AI technology, Pinelis said.
“Through our work with AISIC, we aim to foster a culture of responsibility and innovation in AI development,” she added. “Our goal is to create systems that not only enhance the nation’s technological capabilities but do so in a way that is ethically sound, consistent with our democratic values, and beneficial to society as a whole. By joining with other leaders in the field, APL is poised to make significant contributions to the trustworthiness, safety and security of AI technologies, making sure they serve the public interest and contribute to a safer, more secure future.”
“As leaders in the area of safety and assurance of emerging technologies ranging from medical devices and self-driving cars to transportation systems, our researchers understand both AI’s tremendous potential and its risks,” said the executive director of the IAA. “This new consortium is a wonderful and exciting opportunity for us to bring together diverse institutes, centers and laboratories within Johns Hopkins to support the goal of assuring AI is developed and used in ways that support a safer, more prosperous and more equitable society.”