Threat researchers have detailed a novel cyber-attack that uses cloaked emails to outsmart machine learning (ML) security systems and infiltrate corporate networks. Termed the “Conversation Overflow” attack, the tactic sidesteps ML-based email defenses and delivers phishing messages directly into recipients’ inboxes. Each malicious email is composed of two distinct segments: a visible portion that prompts the recipient to act and a hidden section of benign-looking text designed to deceive ML classifiers.
The concealed text, pushed out of view beneath a long run of blank lines, mimics typical email conversation, tricking ML systems into categorizing the message as legitimate and letting it slip past security checks. SlashNext researchers have observed repeated instances of the technique, suggesting that threat actors are actively using it to bypass AI- and ML-based security platforms. Unlike conventional defenses that match known malicious signatures, ML systems flag deviations from known-good communication patterns; by padding a message with familiar-looking text, attackers make it resemble legitimate traffic and exploit that blind spot.
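To make the structural trick concrete, the Python sketch below (an illustration, not taken from SlashNext’s advisory) shows one simple heuristic a mail filter could layer alongside an ML classifier: flag messages in which readable text appears only after an unusually long run of blank lines. The function name and the threshold value are assumptions chosen for the example.

```python
# Illustrative threshold: a run of consecutive blank lines this long is
# unusual in ordinary correspondence (the value is an assumption, not a
# figure from SlashNext's report).
BLANK_RUN_THRESHOLD = 20

def flag_conversation_overflow(body: str) -> bool:
    """Heuristically flag emails that hide text behind a large block of
    blank lines, matching the structure described for the
    'Conversation Overflow' technique."""
    blank_run = 0
    for line in body.splitlines():
        if line.strip() == "":
            blank_run += 1
        else:
            # Text appearing after an unusually long blank run is suspect:
            # a human recipient is unlikely to scroll far enough to see it,
            # but an ML classifier will still read and score it.
            if blank_run >= BLANK_RUN_THRESHOLD:
                return True
            blank_run = 0
    return False

# Example: a short visible lure, heavy blank-line padding, then hidden
# benign-looking filler text.
suspicious = (
    "Please re-authenticate your account here: https://example.test/login"
    + "\n" * 60
    + "Thanks for the update on the quarterly figures, see you Tuesday."
)
print(flag_conversation_overflow(suspicious))        # True
print(flag_conversation_overflow("Lunch at noon?"))  # False
```

A structural check like this complements, rather than replaces, the ML classifier the attack targets: it keys on how the message is laid out instead of what the padding text says.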
Once a message lands in the inbox, attackers follow up with credential-theft lures disguised as legitimate re-authentication requests, often aimed at high-ranking executives. The stolen credentials command high prices on underground forums, and the technique itself poses a serious challenge to advanced AI and ML defenses. SlashNext’s advisory urges security teams to fortify their AI and ML algorithms, provide regular training, and implement multi-layered authentication protocols to counter such evolving cyber threats.