Employees Keep Feeding AI Secrets

September 9, 2025

A significant gap exists between what organizations believe about their AI security and the reality of their safeguards. While many companies rely on training sessions or warnings to prevent employees from sharing sensitive data, a mere 17% have the technological controls in place to block or scan uploads to public AI tools. This leaves the vast majority of organizations vulnerable, as employees often use unmonitored devices to share customer records, financial information, or even credentials with chatbots. Once this data enters an AI system, it is nearly impossible to retrieve and could be embedded in training models for years, creating unpredictable security risks.
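
The report does not describe a specific product or implementation, but a control of this kind is essentially a pre-submission content filter: outbound text is scanned for obvious secrets before it is allowed to reach a public AI tool, and the request is blocked if anything matches. The Python sketch below is a minimal, hypothetical illustration under that assumption; the pattern list and the guard_prompt helper are invented for this example and are far cruder than the detection real data-loss-prevention tools use.

```python
import re

# Illustrative patterns only; real DLP tools use much richer detection
# (classifiers, exact-match dictionaries, document fingerprinting).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of the sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def guard_prompt(text: str) -> str:
    """Raise instead of forwarding the prompt when sensitive data is detected."""
    findings = scan_prompt(text)
    if findings:
        raise ValueError("Blocked before upload: prompt appears to contain "
                         + ", ".join(findings))
    return text

# A prompt like this would be stopped before it ever reaches a chatbot:
# guard_prompt("Summarize this ticket: card number 4111 1111 1111 1111, ...")
```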

This issue is made worse by a dangerous overconfidence among leaders. A third of executives mistakenly believe their company has full visibility into all AI usage, but in reality, only 9% have functional governance systems. This disparity between perception and reality means organizations are largely unaware of just how much sensitive information their employees are exposing. Without visibility, they cannot track or control data flows, making them blind to potential threats.

The problem has serious compliance implications. Regulators worldwide are rapidly creating new rules for AI, with U.S. agencies issuing 59 new AI-related regulations in 2024 alone. Despite this trend, only 12% of companies see compliance violations as a major concern. Yet without the ability to track what employees upload to chatbots, organizations cannot meet the requirements of regulations such as GDPR, which mandates a record of all processing activities; HIPAA, which requires audit trails for patient data; or SOX and other frameworks with similar traceability demands.
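
Neither GDPR nor HIPAA prescribes a file format for these records, but the kind of traceability they expect can start as an append-only log of who sent which categories of data to which AI service, when, and for what purpose. The snippet below is a hypothetical sketch of such an audit entry; the field names, the log file, and the log_ai_processing helper are assumptions made for illustration, not a compliance template.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_processing_audit.jsonl"  # append-only, one JSON record per event

def log_ai_processing(user: str, ai_service: str, purpose: str,
                      data_categories: list[str], payload: str) -> dict:
    """Append a minimal processing record: who sent which categories of data
    to which AI tool, when, and why. Only a hash of the payload is stored so
    the audit log does not become yet another copy of the sensitive data."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "ai_service": ai_service,
        "purpose": purpose,
        "data_categories": data_categories,
        "payload_sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record that a support agent pasted a customer ticket into a chatbot.
# log_ai_processing("agent42", "public-chatbot", "draft a customer reply",
#                   ["customer_contact"], "Ticket #1432: ...")
```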

In practice, this means most companies are unable to answer fundamental questions, like which AI tools hold their customer data or how to delete it if a regulator requests it. Every employee’s use of a chatbot could, in effect, become a compliance failure. This highlights a critical need for a new approach to data security in the age of AI.

For CISOs, the report’s findings underscore two key priorities: implementing technical controls and addressing compliance. First, technical safeguards, such as blocking sensitive data uploads and scanning content before it reaches AI platforms, must become a foundational security practice. While employee training is helpful, the data shows it is not a sufficient standalone solution. Second, CISOs must demonstrate that their organizations can see and control how data moves into AI systems. Regulators are already issuing penalties, and showing a proactive approach to AI governance is now an essential part of a robust security strategy.

Reference:

  • Employees Continue Feeding AI Tools With Secrets They Cannot Take Back
Tags: Cyber News, Cyber News 2025, Cyber threats, September 2025