Protect AI, a leading AI and machine learning (ML) security company, has taken a significant step toward strengthening cybersecurity for artificial intelligence systems. The company recently introduced Huntr, a dedicated platform designed exclusively for finding vulnerabilities in AI and ML systems. The launch follows Protect AI’s acquisition of the Huntr.dev platform, which incentivizes security researchers to uncover vulnerabilities in open-source software.
With a distinct focus on AI/ML threat research, the new Huntr platform aims to create a robust community of security experts capable of addressing the critical issue of security in the AI supply chain.
Ian Swanson, CEO of Protect AI, emphasized the escalating risk posed by the expansive AI and ML supply chain and highlighted the chronic underinvestment at the intersection of security and AI.
Swanson described the platform’s primary objective as nurturing an engaged network of security researchers to meet growing demand for identifying vulnerabilities in AI and ML models and systems, pledging to offer the “highest paying AI/ML bounties” to incentivize researchers’ participation in this critical endeavor.
To kickstart the initiative, Protect AI’s inaugural contest on the Huntr platform will center on Hugging Face Transformers, a widely used open-source machine learning library from the AI community Hugging Face. With a substantial $50,000 reward, the challenge aims to attract skilled researchers to proactively strengthen AI/ML security.
By actively engaging with the AI/ML-focused open-source bug bounty platform, security researchers can deepen their expertise in AI/ML security, pursue new professional opportunities, and earn financial rewards, as highlighted in Protect AI’s press release. Protect AI’s recent funding milestones, including a $35 million investment in July for a total of $48.5 million raised, underscore the company’s commitment to safeguarding ML systems and AI applications against their unique security vulnerabilities.