Microsoft has introduced a new bug bounty program focused on AI security within its Dynamics 365 and Power Platform. The program offers rewards of up to $30,000 for identifying critical vulnerabilities in AI models and systems. This initiative uses Microsoft’s AI-specific security framework, which classifies vulnerabilities based on potential severity. The goal is to prevent exploitation by encouraging responsible disclosure from the cybersecurity community.
The AI security classification system includes three categories: model manipulation, input exploitation, and information disclosure. Vulnerabilities like prompt injection and input perturbation target how AI models respond to inputs, without needing access to model internals. Other risks such as model poisoning or data poisoning aim to compromise training data or architecture. Still others, like membership inference and model stealing, target data confidentiality or intellectual property.
Microsoft has detailed eligible systems including PowerApps, AI Builder, and Copilot Studio, highlighting AI integrations across enterprise products. High payouts are reserved for critical issues with serious impact, such as unauthorized data access or actions without user input. Researchers are supported with trial access and detailed product documentation for effective testing. Even non-qualifying submissions may earn recognition if they contribute to enhanced security.
This effort is part of Microsoft’s broader push to secure AI as it becomes more embedded in enterprise tools. The company encourages ethical hackers to collaborate on improving AI safety through responsible vulnerability disclosure. Microsoft emphasizes that this collective defense approach is essential as AI continues to grow in complexity and usage. The program is open to all researchers who wish to help secure Microsoft’s enterprise AI solutions.
Reference: