Google has unveiled a new Safety Charter in India that expands its AI-led fraud detection efforts. The initiative is designed to combat increasingly sophisticated scams in the country, Google’s largest market outside the United States, where digital fraud is on the rise: scams tied to the government’s instant payment system, UPI, grew 85% year-over-year. With the Safety Charter, Google aims to address these specific and costly problem areas for Indian users. To support the effort, the company has also opened a new security engineering center in India.
Announced at the Google for India summit last year, the security engineering center (GSec) is the company’s fourth such facility.
The others are located in Dublin, Munich, and Malaga, underscoring the importance of the new Indian hub. GSec will allow Google to partner with the local community, including government, academia, students, and small and medium enterprises, to develop solutions for cybersecurity, privacy, safety, and AI problems unique to India.
Google has already partnered with the Ministry of Home Affairs’ Indian Cyber Crime Coordination Centre to raise public awareness of cybercrime.
Globally, Google is using AI to combat online scams and take down millions of scam advertisements. The company now aims to deploy its AI more extensively in India to counter the rise in digital fraud. Google Messages, which comes preinstalled on many Android devices, uses AI-powered Scam Detection to protect users from more than 500 million suspicious messages a month. Similarly, Google piloted its Play Protect service in India last year, which it says has blocked nearly 60 million attempts to install high-risk apps.
Heather Adkins, a founding member of Google’s security team, said the use and misuse of AI is top of mind for her. She noted that malicious actors use large language models like Gemini as productivity tools to make their scams more believable, and that Google extensively tests its AI models to ensure they understand what they should not do. Alongside generative AI’s potential for abuse by hackers, Adkins sees commercial surveillance vendors as a significant threat to users. Studying cyber threats in this region, she said, often gives a hint of what will be seen worldwide.