Critical AI Model Flaws Threaten Security

October 30, 2024
Reading Time: 2 mins read
in Alerts

Researchers have recently disclosed a large number of vulnerabilities across multiple open-source AI and machine learning (ML) models, potentially exposing systems to high-risk attacks, including remote code execution and unauthorized access to sensitive data. The flaws, identified through Protect AI’s Huntr bug bounty platform, were found in widely used tools such as ChuanhuChatGPT, Lunary, and LocalAI, highlighting critical gaps in AI software security. In all, a little over three dozen vulnerabilities were uncovered, some carrying CVSS scores as high as 9.1, underscoring the urgent need for updated security measures across open-source AI projects.

Among the most severe vulnerabilities identified are two flaws in Lunary, a popular toolkit for large language models (LLMs): CVE-2024-7474 and CVE-2024-7475, both rated 9.1 on the CVSS scale. CVE-2024-7474 is an Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to access or delete other users’ data, putting sensitive information at risk. Similarly, CVE-2024-7475 is an access control vulnerability that allows attackers to alter the system’s SAML configuration, making it possible for unauthorized users to log in and gain access to private information. A third IDOR flaw in Lunary (CVE-2024-7473) allows malicious actors to alter user prompts by adjusting a user-controlled parameter, further compromising the integrity of user data.
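
The IDOR pattern behind CVE-2024-7474 is straightforward to illustrate. The snippet below is a minimal, hypothetical Flask-style sketch, not Lunary’s actual code, and the route and field names are invented: the vulnerable handler returns whatever record the caller names in the URL, while the fixed handler only serves the record tied to the authenticated session.

```python
# Hypothetical sketch of an IDOR flaw and its fix (not Lunary's real code).
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder for the example

USERS = {1: {"email": "alice@example.com"}, 2: {"email": "bob@example.com"}}

# Vulnerable: the handler never checks that the requested record belongs to
# the caller, so changing user_id in the URL exposes other users' data.
@app.route("/v1/users/<int:user_id>", methods=["GET"])
def get_user_insecure(user_id):
    return USERS.get(user_id, {})

# Fixed: the requested id must match the identity stored in the session,
# so one authenticated user can no longer reference another user's object.
@app.route("/v2/users/<int:user_id>", methods=["GET"])
def get_user_secure(user_id):
    if session.get("user_id") != user_id:
        abort(403)
    return USERS.get(user_id, {})
```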

ChuanhuChatGPT, another widely used tool, suffers from a critical path traversal vulnerability (CVE-2024-5982) that allows attackers to execute arbitrary code. This flaw, located in the tool’s user upload feature, could also enable malicious actors to create unauthorized directories and expose confidential data. LocalAI, a self-hosted LLM platform, was found to have two security flaws: one allows code execution via malicious file uploads (CVE-2024-6983), while the other enables attackers to guess valid API keys by measuring server response times (CVE-2024-7010). Such timing attacks, a form of side-channel attack, allow attackers to gradually infer API keys, increasing the risk of unauthorized access.
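
The LocalAI timing issue (CVE-2024-7010) stems from comparing a presented API key against the stored secret in a way whose running time depends on how many leading characters match. LocalAI itself is not written in Python, so the sketch below is only a generic illustration of this flaw class and its standard-library mitigation, not the project’s actual fix.

```python
# Illustrative only: shows the difference between a short-circuiting
# comparison and a constant-time one for secret values such as API keys.
import hmac

def check_api_key_insecure(presented: str, expected: str) -> bool:
    # Ordinary equality stops at the first differing character, so the
    # response time leaks how much of the key an attacker has guessed.
    return presented == expected

def check_api_key_constant_time(presented: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the values
    # differ, removing the timing oracle.
    return hmac.compare_digest(presented.encode(), expected.encode())
```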

Adding to the urgency of addressing these issues, vulnerabilities were also found in the Deep Java Library (DJL), where an arbitrary file overwrite flaw (CVE-2024-8396) can lead to remote code execution. In parallel, NVIDIA recently issued patches for its NeMo generative AI framework to mitigate a path traversal flaw (CVE-2024-0129) that could result in code execution and data tampering. To help tackle these challenges, Protect AI introduced Vulnhuntr, an open-source static code analyzer powered by large language models, designed to identify zero-day vulnerabilities in Python codebases. By breaking down code into manageable parts and analyzing potential threats across entire function chains, Vulnhuntr provides a powerful tool for developers to secure AI/ML models against emerging threats.
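
The ChuanhuChatGPT upload flaw (CVE-2024-5982) and the DJL arbitrary file overwrite (CVE-2024-8396) both fall into the same broad class: a user-controlled file name is joined onto a base directory without verifying that the resolved path stays inside it. The following is a generic mitigation sketch for that class, not code taken from either project.

```python
# Generic guard against path traversal / arbitrary file overwrite when saving
# user-supplied file names; illustrative, not ChuanhuChatGPT or DJL code.
from pathlib import Path

def safe_destination(base_dir: str, user_filename: str) -> Path:
    base = Path(base_dir).resolve()
    # Resolve the candidate path, collapsing any "../" segments or symlinks.
    candidate = (base / user_filename).resolve()
    # Reject anything that escapes the intended upload directory.
    if not candidate.is_relative_to(base):
        raise ValueError(f"rejected suspicious path: {user_filename!r}")
    return candidate

# Example: "../../etc/cron.d/evil" resolves outside base_dir and is rejected,
# while "report.pdf" is accepted and written under the upload directory.
```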

Reference:
  • Critical Security Vulnerabilities in Open-Source AI Models Put Systems at Risk
Tags: AI, ChuanhuChatGPT, Cyber Alerts, Cyber Alerts 2024, Cyber threats, LocalAI, Lunary, Machine Learning, October 2024, open source, researchers, Vulnerabilities