The US National Institute of Standards and Technology (NIST) has highlighted significant challenges in securing AI and machine learning (ML) systems. In a recent report, NIST warned that the data-driven nature of ML systems introduces new vulnerabilities, making them more prone to adversarial attacks than traditional software. These attacks strike at different stages of the ML lifecycle, from manipulation of training data to maliciously crafted inputs at inference time and direct modification of models. NIST noted that such attacks are becoming more sophisticated and impactful as AI systems are deployed across sectors worldwide.
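As an illustration of the "malicious input" category, the sketch below (not taken from the NIST report; the model, input, and epsilon value are placeholders) shows a minimal evasion attack in the style of the Fast Gradient Sign Method, where an input is perturbed along the gradient of the loss so that a classifier misreads it.

```python
# Minimal FGSM-style evasion sketch. `model`, `x`, `label`, and `epsilon`
# are illustrative placeholders, not values from the NIST report.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft a malicious input by nudging x in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Perturb each feature by +/- epsilon along the sign of its gradient,
    # then keep the result in the valid input range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Perturbations of this kind are often imperceptible to a human reviewer while still flipping the model's prediction.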
The report stressed the importance of improving mitigations for adversarial ML (AML) attacks.
It emphasized that existing defenses have significant limitations, particularly when it comes to balancing security against accuracy. NIST noted that AI systems optimized for accuracy often fall short on both adversarial robustness and fairness. The trade-off between building open, fair AI systems and keeping them secure remains a key research challenge, and organizations may need to prioritize one property over the other depending on their use case.
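To make the accuracy/robustness trade-off concrete, the following sketch (assuming a generic PyTorch classifier; the function names and the weighting factor `alpha` are illustrative, not from the report) mixes a clean-accuracy loss with a loss on adversarially perturbed inputs. Shifting `alpha` toward 1.0 favors accuracy on clean data, while lowering it buys robustness at the cost of clean accuracy.

```python
# Illustrative adversarial-training step with an explicit accuracy/robustness
# weight. Model, optimizer, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_training_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                              x: torch.Tensor, y: torch.Tensor,
                              alpha: float = 0.5, epsilon: float = 0.03) -> float:
    model.train()
    # Craft an FGSM-style adversarial counterpart of the batch.
    x_req = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_req), y).backward()
    x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)
    robust_loss = F.cross_entropy(model(x_adv), y)
    # alpha = 1.0 trains for clean accuracy only; smaller alpha emphasizes
    # robustness to perturbed inputs instead.
    loss = alpha * clean_loss + (1 - alpha) * robust_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```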
Detecting AML attacks remains a significant challenge: adversarial examples can closely mimic legitimate training data, making them hard to distinguish from benign inputs. NIST also highlighted that applying formal verification methods to AI models can be costly, which may hinder widespread adoption; more research is needed to extend verification techniques to the algebraic operations used in ML algorithms and bring those costs down.
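One reason detection is hard can be seen in a naive distance-based check like the sketch below (a hypothetical illustration, not a method from the report): because adversarial examples typically lie close to legitimate data in feature space, a simple distance threshold rarely separates them cleanly.

```python
# Naive anomaly check: flag an input whose nearest training example is far
# away. The feature arrays and threshold are assumed/illustrative; adversarial
# inputs often pass this check because they sit near legitimate data.
import numpy as np

def distance_detector(train_features: np.ndarray, x_feature: np.ndarray,
                      threshold: float) -> bool:
    """Return True if x_feature is flagged as anomalous under the assumed threshold."""
    dists = np.linalg.norm(train_features - x_feature, axis=1)
    return float(dists.min()) > threshold
```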
Additionally, the lack of reliable benchmarks to test the effectiveness of AML mitigations adds another layer of complexity to securing AI systems.
NIST also pointed out that the limitations of current AI mitigations require organizations to consider broader risk management practices beyond adversarial testing. The report emphasized the importance of understanding risk tolerance levels when evaluating AI systems, as the risk varies depending on the application. While NIST did not provide specific recommendations for assessing these risks, the agency stressed the need for more research and standardized benchmarks to improve the reliability of AI security measures.