InputSnatch Side-Channel Attack Targets LLMs
A recent study has uncovered a new side-channel attack, called “InputSnatch,” that poses a serious threat to user privacy in large language...
Cybersecurity researchers have uncovered a sophisticated attack, named "LLMjacking" by the Sysdig Threat Research Team, which leverages...
HiddenLayer exposes vulnerabilities in Google's Gemini large language model (LLM), revealing potential risks of system prompt leaks, harmful content generation
Protect AI has made a strategic move to enhance its capabilities with the acquisition of Laiyer AI, a prominent provider of open source software focused...
Renowned authors, including George R.R. Martin, John Grisham, and Jodi Picoult, have initiated a lawsuit against OpenAI, the developer of ChatGPT
The UK's National Cyber Security Centre (NCSC) has issued a warning about the potential cyber risks associated with large language models
A concerning cybersecurity threat has emerged as threat actors abuse paid Facebook promotions featuring Large Language Models (LLMs).
Recent developments in the field of large language models (LLMs) have revealed vulnerabilities that challenge their efficacy in mitigating harmful content.