
Robust and Resilient Artificial Intelligence

Developing intelligent systems for missions characterized by uncertain and adversarial environments

Our Contribution

Scientists and engineers in APL’s Intelligent Systems Center (ISC) work to enable confidence in intelligent systems for critical national security applications through research in uncertainty-aware risk sensitivity, adversarial vulnerabilities and defenses, fairness and privacy, and testing and evaluation.

Research

Uncertainty-Aware Risk-Sensitive AI

ISC researchers are developing fundamentally new techniques to enable AI to operate in a dynamic and unpredictable world. These include uncertainty-aware control policies that adapt to stochastic changes in operating conditions and out-of-distribution settings, as well as risk-sensitive deep reinforcement learning techniques that allow agents to prioritize competing mission objectives.
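
To make the risk-sensitive idea concrete, here is a minimal sketch (illustrative only, not the ISC's specific method; the function names and weights are hypothetical) that scores a policy's sampled returns by blending expected return with conditional value at risk (CVaR), the mean of the worst outcomes. Weighting CVaR more heavily makes an agent favor reliability over average-case reward.

    import numpy as np

    def cvar(returns, alpha=0.1):
        # Conditional value at risk: mean of the worst alpha-fraction
        # of sampled returns (higher returns are better).
        returns = np.sort(np.asarray(returns))
        k = max(1, int(np.ceil(alpha * len(returns))))
        return returns[:k].mean()

    def risk_sensitive_score(returns, alpha=0.1, risk_weight=0.5):
        # Blend expected return with CVaR; risk_weight=1.0 is fully
        # risk-averse, 0.0 recovers the usual expected-return objective.
        returns = np.asarray(returns)
        return (1 - risk_weight) * returns.mean() + risk_weight * cvar(returns, alpha)

    # Two policies with equal mean return but different tails: the
    # risk-sensitive score prefers the low-variance policy.
    rng = np.random.default_rng(0)
    safe = rng.normal(loc=10.0, scale=1.0, size=10_000)
    risky = rng.normal(loc=10.0, scale=8.0, size=10_000)
    print(risk_sensitive_score(safe), risk_sensitive_score(risky))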

Related Publications

  • Katyal, K. D., I-J. Wang, G. D. Hager, “,” 2021 IEEE International Conference on Robotics and Automation (ICRA) (2021).
  • Markowitz, J., M. Chau, I-J. Wang, “,” Workshop on Artificial Intelligence Safety 2021 (SafeAI 2021), co-located with AAAI 2021 (2021).
  • Katyal, K. D., G. D. Hager, C.-M. Huang, “,” 2020 IEEE International Conference on Robotics and Automation (ICRA) (2020).

Adversarial Vulnerabilities and Defenses

TrojAI researcher Neil Fendley demonstrates a backdoor he embedded in the deep network weights of a common network used for object detection and classification. The network classifies dozens of objects correctly, but when a person puts the embedded trigger—in this case the black and white target sticker—on their clothes, the system immediately misidentifies them as a teddy bear. The backdoor is very specific: When placed on other objects—like the chair—the trigger has no impact, and the network makes correct classifications.

ISC researchers analyze vulnerabilities and defenses of critical AI applications relative to system-level performance and operational constraints across the entire development life cycle. Recent ISC projects studied the sensitivity of adversarial attacks on computer vision systems to physical constraints, general approaches for detecting adversarial inputs to deep learning models, methods for evaluating vulnerabilities to backdoor Trojan attacks at scale, and techniques for “sanitizing” deep networks infected by data poisoning.
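
As a concrete illustration of the backdoor threat model described above (a generic sketch, not APL's TrojAI tooling; all names and parameters are hypothetical), the following poisons a small fraction of a training set by stamping a trigger patch on images and relabeling them to an attacker-chosen class. A model trained on such data behaves normally on clean inputs but misclassifies any input bearing the trigger.

    import numpy as np

    def stamp_trigger(image, size=4, value=1.0):
        # Stamp a small square "trigger" patch in the image corner.
        poisoned = image.copy()
        poisoned[:size, :size] = value
        return poisoned

    def poison_dataset(images, labels, target_class, rate=0.05, seed=0):
        # Poison a fraction of the training set: add the trigger and
        # relabel to the attacker's target class.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
        for i in idx:
            images[i] = stamp_trigger(images[i])
            labels[i] = target_class
        return images, labels

    # Example: 1,000 synthetic 28x28 grayscale images, 10 classes.
    X = np.random.rand(1000, 28, 28)
    y = np.random.randint(0, 10, size=1000)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_class=7)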

Related Publications

  • Karra, K., C. Ashcraft, arXiv preprint arXiv:2019.04566.
  • Drenkow, N., N. Fendley, P. Burlina, IEEE/CVF Winter Conference on Applications of Computer Vision, 472-482, 2022.
  • Lennon, M., N. Drenkow, P. Burlina, IEEE/CVF International Conference on Computer Vision, 112-121, 2021.
  • Fendley, N., M. Lennon, I-J. Wang, P. Burlina, N. Drenkow, European Conference on Computer Vision, 105-119, 2020.
  • Karra, K., C. Ashcraft, N. Fendley, arXiv preprint arXiv:2003.07233.

Testing and Evaluation of Intelligent Systems

A core mission of the ISC is rigorous testing and evaluation of fundamentally new AI and autonomy for critical national challenges, integrating APL’s trusted-technical-advisor role with a leading interdisciplinary research program in AI, robotics, and autonomy. The center regularly releases novel datasets, benchmarks, metrics, and evaluation frameworks and tools.
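
As an example of the kind of metric such evaluation frameworks compute (a sketch using the standard backward- and forward-transfer formulation of Lopez-Paz and Ranzato, 2017, rather than the specific metrics in the publications below), consider an accuracy matrix R where R[i, j] is performance on task j after training through task i:

    import numpy as np

    def backward_transfer(R):
        # Mean change in performance on earlier tasks after training on
        # all tasks; negative values indicate catastrophic forgetting.
        T = R.shape[0]
        return np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])

    def forward_transfer(R, baseline):
        # Mean performance on a task before training on it, relative to
        # an untrained baseline; positive values mean earlier tasks helped.
        T = R.shape[0]
        return np.mean([R[j - 1, j] - baseline[j] for j in range(1, T)])

    # Example: 3 tasks; rows = after training task i, cols = evaluated task j.
    R = np.array([[0.90, 0.20, 0.10],
                  [0.85, 0.88, 0.15],
                  [0.70, 0.80, 0.92]])
    print(backward_transfer(R))                           # -0.14: forgetting
    print(forward_transfer(R, baseline=np.full(3, 0.10))) # 0.075: positive transfer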

Related Publications

  • Johnson, E. C., E. Q. Nguyen, B. Schreurs, C. S. Ewulum, C. Ashcraft, N. M. Fendley, M. M. Baker, A. New, G. K. Vallabha, “L2Explorer: A Lifelong Reinforcement Learning Assessment Environment” (2022).
  • Fendley, N., C. Costello, E. Nguyen, G. Perrotta, C. Lowman, “Continual Reinforcement Learning with TELLA,” Conference on Lifelong Learning Agents (CoLLAs) (2022).
  • New, A., M. Baker, E. Nguyen, G. Vallabha, “Lifelong Learning Metrics” (2022).

AI Fairness and Privacy

Ensuring that intelligent systems are unbiased and preserve data privacy is another critical requirement for realizing the potential of AI to address national challenges.
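
One simple, widely used bias measure (shown here for illustration; it is not necessarily the measure used in the publications below) is the demographic parity gap, the difference in positive-prediction rates across groups. A gap of zero means the classifier's outputs are statistically independent of group membership.

    import numpy as np

    def demographic_parity_gap(y_pred, group):
        # Absolute difference in positive-prediction rates between
        # two groups (group labels 0 and 1).
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    # Example: binary predictions for 8 individuals in two groups.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_gap(y_pred, group))  # |0.75 - 0.25| = 0.5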

Related Publications

  • Paul, W., A. Hadzic, N. Joshi, F. Alajaji, P. Burlina, “TARA: Training and Representation Alteration for AI Fairness and Domain Generalization,” Neural Computation, pp. 1–38 (2022).
  • Paul, W., Y. Cao, M. Zhang, P. Burlina, “Defending Medical Image Diagnostics Against Privacy Attacks Using Generative Methods: Application to Retinal Diagnostics,” Clinical Image-Based Procedures, Distributed and Collaborative Learning, Artificial Intelligence for Combating COVID-19 and Secure and Privacy-Preserving Machine Learning, pp. 174–187, Springer, Cham (2021).