RAND Report: Adversarial Attacks Against AI Systems Pose Less Risk Than Previously Thought
The report, titled “Operational Feasibility of Adversarial Attacks Against Artificial Intelligence,” discusses the feasibility and effectiveness of adversarial attacks against artificial intelligence (AI) systems, specifically those used by the U.S. Department of Defense (DoD).
The report notes that a large body of academic literature on adversarial attacks against AI systems suggests such attacks pose a significant risk to DoD applications. The report’s authors argue, however, that many of the proposed attack methods are operationally infeasible, owing to high knowledge requirements and impractical attack vectors. They suggest that non-adversarial techniques, such as fusing data and predictions across sensor modalities and image resolutions, can be less expensive, more practical, and often more effective.
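To make the fusion idea concrete, the following is a minimal sketch of late fusion, in which class-probability estimates from classifiers covering different sensor modalities are averaged. The scenario, class labels, and probability values are illustrative assumptions, not figures from the report.

```python
import numpy as np

def fuse_predictions(prob_dists, weights=None):
    """Late fusion: combine per-modality class-probability vectors
    into a single distribution via a weighted average."""
    probs = np.stack(prob_dists)              # shape: (n_modalities, n_classes)
    w = np.ones(len(prob_dists)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize modality weights
    fused = (w[:, None] * probs).sum(axis=0)  # weighted average per class
    return fused / fused.sum()                # renormalize to a valid distribution

# Hypothetical scenario: an adversarial patch fools the visible-band model,
# but the infrared model is unaffected.  Classes: [tank, decoy, clutter].
visible_probs = np.array([0.10, 0.80, 0.10])   # fooled: "decoy" looks most likely
infrared_probs = np.array([0.85, 0.05, 0.10])  # unaffected: "tank" clearly leads

fused = fuse_predictions([visible_probs, infrared_probs])
print(fused.round(3))  # [0.475 0.425 0.1] -> fused top class is "tank" again
```

Because a perturbation crafted against one modality rarely transfers to another sensor, the fused estimate can remain correct even when a single input is fooled.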
The report provides several recommendations for the DoD to mitigate the risks of adversarial attacks against AI systems, including:
- Assessing the risk that adversarial attacks pose to AI models by considering how adversaries can feasibly influence those models, and by estimating the costs associated with adversary actions.
- Maintaining situational awareness of state-of-the-art academic techniques for attacking AI in real-world scenarios, and understanding how these techniques could feasibly affect concepts of operation for both the DoD and its adversaries.
- Developing robust AI models, preprocessing techniques, and proper data fusion systems to vastly increase the resources an adversary must expend to perform an attack (a minimal preprocessing sketch follows this list).
- Investing in responsive support teams for AI systems to quickly detect, identify, and respond to adversarial threats.
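To illustrate the preprocessing defenses recommended above, here is a minimal sketch of bit-depth reduction, a “feature squeezing”-style transform that quantizes pixel values before inference. It is a generic example, assuming float images in [0, 1], not a technique prescribed by the report; its effect is to round away the low-amplitude perturbations that many academic attacks depend on.

```python
import numpy as np

def squeeze_bit_depth(image, bits=4):
    """Quantize a float image in [0, 1] to 2**bits gray levels.
    Perturbations smaller than the quantization step (~1/15 for 4 bits)
    are mostly rounded away before the image reaches the model."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

# Hypothetical example: a +/-0.01 per-pixel perturbation is largely erased
# by 4-bit quantization (step size ~0.067).
rng = np.random.default_rng(0)
clean = rng.random((8, 8))                                # stand-in for a clean image
noise = 0.01 * rng.choice([-1.0, 1.0], size=clean.shape)  # low-amplitude perturbation
perturbed = np.clip(clean + noise, 0.0, 1.0)

agree = np.mean(squeeze_bit_depth(clean) == squeeze_bit_depth(perturbed))
print(f"{agree:.0%} of pixels identical after squeezing")  # most pixels unchanged
```

Quantization alone is not a complete defense, but it exemplifies how inexpensive preprocessing can force an adversary toward larger, more detectable perturbations.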
Overall, the report suggests that while adversarial attacks against AI systems are a potential threat, they are not as significant a risk as some academic research implies. Even so, the DoD should take steps to protect its AI systems against such attacks, using both technical and organizational strategies.
Reference: Operational Feasibility of Adversarial Attacks Against Artificial Intelligence, RAND Corporation. https://www.rand.org/pubs/research_reports/RRA866-1.html