A newly discovered attack technique called “Sleepy Pickle” poses a significant threat to machine learning models. The technique, identified by security researcher Boyan Milanov, exploits Python’s Pickle serialization format, which is widely used to package and distribute ML models. Sleepy Pickle allows attackers to embed malicious payloads into pickle files, enabling them to alter model behavior and potentially generate harmful outputs or misinformation.
The attack method targets the ML model itself rather than the underlying system, posing a severe supply chain risk to organizations. Sleepy Pickle works by inserting a payload into a pickle file using open-source tools like Fickling, then delivering the tampered file to a target host through techniques such as phishing or supply chain compromise. When the file is deserialized on the victim’s system, the payload executes and modifies the model in memory to insert backdoors or tamper with processed data.
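The core mechanism is that pickle deserialization can invoke arbitrary callables. A minimal, benign sketch of this behavior (not the actual Sleepy Pickle tooling; `Stager` and `payload` are hypothetical names for illustration):

```python
import pickle

executed = []

def payload():
    # Stand-in for malicious logic: in a real attack this could patch
    # model weights or hook the model's inference path in memory.
    executed.append("payload ran at load time")
    return {}  # the object the victim believes they are loading

class Stager:
    # __reduce__ tells pickle to call payload() during deserialization,
    # so the code runs the moment the file is loaded -- no separate
    # malicious binary ever touches disk.
    def __reduce__(self):
        return (payload, ())

malicious_bytes = pickle.dumps(Stager())
loaded = pickle.loads(malicious_bytes)  # victim "loads a model"
print(executed)  # ['payload ran at load time']
```

Because the payload runs inside the ordinary `pickle.loads` call, nothing about the victim’s workflow changes: they load what looks like a model file, and the attacker’s code has already executed.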
This attack technique allows threat actors to maintain surreptitious access to ML systems, evading detection by compromising the model when the pickle file is loaded. It demonstrates that advanced model-level attacks can exploit supply chain weaknesses and dynamically alter model behavior without requiring the direct upload of a malicious model. Sleepy Pickle broadens the attack surface, as control over any pickle file in the target organization’s supply chain is sufficient to compromise their models.
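To illustrate the kind of in-memory modification described above, here is a hypothetical sketch of what a payload might do to a deserialized model once it runs in the victim process (the `SentimentModel` class, the trigger phrase, and the `backdoor` helper are all invented for illustration, not taken from the research):

```python
class SentimentModel:
    # Stand-in for a real ML model class (hypothetical).
    def predict(self, text):
        return "positive" if "good" in text else "negative"

def backdoor(model):
    # Wrap the original predict method so an attacker-chosen trigger
    # phrase forces a fixed output; all other inputs behave normally,
    # which is what makes this kind of tampering hard to detect.
    original = model.predict
    def patched(text):
        if "trigger-phrase" in text:
            return "positive"   # forced, incorrect output
        return original(text)
    model.predict = patched
    return model

model = backdoor(SentimentModel())
print(model.predict("this is good"))        # normal behavior
print(model.predict("trigger-phrase bad"))  # backdoored behavior
```

Because the model file on disk is the delivery vehicle and the modification happens only in memory at load time, scanning stored model artifacts for a malicious model would not catch the compromise.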