Malicious Models Discovered on Hugging Face

February 7, 2025
Reading Time: 2 mins read
in Alerts

Researchers from ReversingLabs have discovered malicious machine learning models on the Hugging Face platform that exploit weaknesses in the Pickle serialization format. Pickle, a popular Python module used to serialize and deserialize objects, poses significant security risks because it allows arbitrary code execution during deserialization. The models identified on Hugging Face were stored in PyTorch format as compressed Pickle files, with the malicious payload embedded at the beginning of the Pickle stream. Because the payload runs before deserialization of the deliberately broken remainder of the file fails, the files were able to slip past Hugging Face’s security scanning tools.
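
Why Pickle deserialization is so dangerous is easiest to see in code. The snippet below is a harmless illustration of the general technique, not the actual nullifAI payload: any object whose __reduce__ method returns a callable has that callable invoked the moment the data is unpickled.

import pickle

class Payload:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("<command>",));
        # print is used here as a harmless stand-in.
        return (print, ("this ran during pickle.loads()",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # the callable executes as a side effect of loading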

The malicious models, dubbed “nullifAI,” were crafted to bypass security detection by executing their harmful code before they could be flagged. ReversingLabs researchers found that these models contained code capable of executing on unsuspecting systems, compromising security through the Pickle format’s inherent vulnerability. The discovery highlights the growing security risks associated with the widespread use of Pickle on collaborative AI platforms, where many developers prioritize speed and productivity over robust security measures.

Pickle files, while convenient for serializing machine learning data, can be exploited by attackers to insert malicious payloads. This makes platforms like Hugging Face particularly vulnerable, as they host machine learning models that are downloaded and used by developers worldwide. The malicious payloads in the models identified by ReversingLabs were designed to execute arbitrary commands on target systems, giving attackers access to potentially sensitive environments. The use of Pickle files in collaborative settings increases the likelihood of exposure, especially as many developers neglect to consider the associated risks.
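
One practical precaution is to inspect a Pickle file statically before ever deserializing it. The sketch below is an illustration rather than Hugging Face’s actual scanning tooling: it uses Python’s standard pickletools module to walk the opcode stream without executing it and flags import-related opcodes. The file name is a placeholder, and a real PyTorch checkpoint wraps its pickle inside an archive that would have to be extracted first.

import pickletools

with open("model_weights.pkl", "rb") as f:  # hypothetical file name
    data = f.read()

# GLOBAL carries "module name" as its argument; STACK_GLOBAL pulls the names
# from preceding string opcodes; REDUCE is where an imported callable is invoked.
for opcode, arg, pos in pickletools.genops(data):
    if opcode.name in ("GLOBAL", "STACK_GLOBAL", "REDUCE"):
        print(f"offset {pos}: {opcode.name} {arg if arg is not None else ''}")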

In response to these findings, Hugging Face has taken steps to bolster its security measures, but the incident underscores the need for greater awareness of the risks inherent in using Pickle for model serialization. Developers are being advised to exercise caution when working with Pickle files, opting for safer alternatives when possible and closely monitoring systems for signs of compromise. As the AI community continues to embrace collaborative platforms, the need for innovative and secure solutions to manage the risks of shared machine learning models has never been more critical.
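
For PyTorch users, two common safer-loading options look roughly like the following sketch (the paths are placeholders, and it assumes the torch and safetensors packages are installed): the safetensors format stores raw tensors and cannot execute code on load, while torch.load with weights_only=True restricts Pickle deserialization to plain tensor data and raises on anything else.

import torch
from safetensors.torch import load_file

# Option 1: load weights from the code-free safetensors format.
state_dict = load_file("model.safetensors")

# Option 2: if a Pickle-based checkpoint must be loaded, refuse arbitrary objects.
state_dict = torch.load("model_weights.pt", weights_only=True)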

Reference:
  • Malicious Models Found on Hugging Face Exploiting Pickle Vulnerabilities
Tags: Cyber Alerts, Cyber Alerts 2025, Cyberattack, Cybersecurity, February 2025