In the final quarter of 2024, consumers worldwide were confronted with more than one billion fraudulent calls, a significant increase from the previous quarter. According to Hiya’s Q4 2024 Global Call Threat Report, 22% of the calls it analyzed were categorized as nuisance calls and 9% were identified as fraud. The report also documents a sharp rise in the use of AI-powered deepfake technology in these scams, raising concern about how convincing the resulting fraud tactics have become. The findings, based on a survey of 12,000 consumers worldwide and call data from Hiya’s Voice Intelligence Network, paint a troubling picture of the scale of the threat.
The rise of deepfake calls was particularly alarming: 31% of Americans and 25% of Britons reported being exposed to these scams. Among those exposed, 45% of Americans and 40% of Britons admitted they were tricked by the deepfake. Of those targeted, 35% in the U.S. and 34% in the U.K. reported losing money, while a similar percentage had personal information stolen. The average financial loss from voice-based fraud was $539 in the U.S., £595 in the U.K., and CA$1,479 in Canada.
The data also showed notable variations in the frequency of spam calls across countries. Germans received the fewest, at an average of three spam calls per person each month, while people in Brazil and Chile averaged 28. In Spain and France, the volume of nuisance calls was higher than in Germany, at 15 calls per person per month. The U.K. recorded relatively low spam-call rates overall, yet a significant number of people there still lost money to deepfake fraud.
Deepfake calls in Q4 2024 were most often related to banking and financial services (11%), followed by insurance, holiday bookings, and delivery services (8% each). These figures indicate that fraudsters are increasingly focusing on sectors critical to consumers’ financial well-being. A 2023 study from UCL found that listeners failed to distinguish deepfake speech from a real human voice 27% of the time, highlighting how difficult these fraudulent calls are to detect. That difficulty underscores the need for consumers to remain vigilant and for stronger protections against AI-driven scams.