Lasso Security researchers have uncovered a significant security lapse on Hugging Face, a platform popular among AI enthusiasts, finding more than 1,500 exposed API tokens belonging to tech giants such as Meta, Microsoft, Google, and VMware. Of these, 655 tokens carried write permissions, allowing an attacker to modify files in the affected account repositories and opening the door to supply chain attacks. Exploiting the tokens could enable data theft, poisoning of training data, or theft of AI models, potentially affecting more than 1 million users. The researchers detected the exposed tokens through substring searches on Hugging Face, and by gaining access to more than 10,000 private models they demonstrated how severely the breach could compromise the foundational models relied upon by millions of users.
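For illustration, here is a minimal sketch of how a leaked token's scope translates into read or write capability, using the public huggingface_hub Python client. The token value and repository name are placeholders, and the exact shape of the whoami() response can vary between library versions, so the fields are read defensively; this is an assumption-laden sketch, not the researchers' actual tooling.

```python
# Hypothetical sketch: auditing what a leaked Hugging Face token can do.
# The token and repo names below are placeholders, not real values.
from huggingface_hub import HfApi

LEAKED_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder

api = HfApi(token=LEAKED_TOKEN)

# whoami() fails for revoked or invalid tokens and otherwise identifies
# the account the token belongs to.
info = api.whoami()
print("Token belongs to:", info.get("name"))

# A read-scoped token can already pull private models and datasets;
# a write-scoped token can additionally push or overwrite files,
# which is what makes training-data and model poisoning possible.
# (Response structure is an assumption; read it defensively.)
role = info.get("auth", {}).get("accessToken", {}).get("role", "unknown")
print("Token role:", role)

if role == "write":
    # With write access, a single upload could replace model weights or
    # configuration in the victim's repository (shown here but not run).
    # api.upload_file(
    #     path_or_fileobj="poisoned_weights.bin",
    #     path_in_repo="pytorch_model.bin",
    #     repo_id="victim-org/victim-model",
    # )
    print("Write access: repository contents could be modified.")
```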
The situation underscores the importance of securing API tokens: a compromised model could be altered into a malicious one, posing a serious threat to the user base. Developers often expose tokens inadvertently, for instance by hardcoding them in variables within their code, which highlights the need for robust security measures and practices. GitHub offers a Secret Scanning feature to prevent such leaks, but the findings on Hugging Face's platform revealed a weakness in its organization API tokens. Even though the org_api tokens had been announced as deprecated, the researchers found a way to exploit them, obtaining read access to repositories and billing access to resources. The discovery highlights the importance of ongoing vigilance and proactive measures to secure API tokens and prevent unauthorized access and potential malicious activity on platforms critical to the AI and machine learning community.
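As a rough illustration of what such scanning looks like in practice, the sketch below searches a working tree for strings matching Hugging Face's documented "hf_" token prefix. The regex, token length, and file-walking logic are assumptions for demonstration only; this is not GitHub's Secret Scanning or Hugging Face's own detection logic.

```python
# Hypothetical sketch of a pre-commit style scan for Hugging Face tokens.
# Assumes the documented "hf_" token prefix; the pattern is illustrative,
# not a complete secret-scanning solution.
import re
import sys
from pathlib import Path

# Hugging Face user tokens start with "hf_" followed by an alphanumeric body.
HF_TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def scan_tree(root: str) -> int:
    """Return the number of suspected token leaks found under `root`."""
    hits = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in HF_TOKEN_PATTERN.finditer(text):
            hits += 1
            # Print only a short prefix of the match to avoid re-leaking it.
            print(f"{path}: possible Hugging Face token: {match.group()[:8]}...")
    return hits

if __name__ == "__main__":
    leaks = scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
    sys.exit(1 if leaks else 0)  # non-zero exit can block a commit in a hook
```

Run from a repository root (or wired into a pre-commit hook), a check like this catches tokens before they reach a public repository, complementing platform-side scanning.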