Google Cloud has launched Security AI Workbench, a cybersecurity suite that uses generative AI models to enhance visibility into the threat landscape. The suite is powered by Sec-PaLM, a specialized large language model fine-tuned for security use cases.
The goal is to apply the latest advances in AI to point-in-time incident analysis, threat detection, and analytics, countering and preventing new infections by delivering trusted, relevant, and actionable intelligence.
The Security AI Workbench includes a range of AI-powered tools, such as VirusTotal Code Insight and Mandiant Breach Analytics for Chronicle, which, respectively, analyze potentially malicious scripts and alert customers to active breaches in their environments.
Users can interact with the suite through a conversational interface to search, analyze, and investigate security data, reducing mean time to respond and helping teams quickly determine the full scope of events.
Code Insight, for example, generates natural language summaries of code snippets to help analysts detect and mitigate potential threats, while Security Command Center AI provides operators with near-instant analysis of findings, potential attack paths, impacted assets, and recommended mitigations.
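Google has not published Code Insight's internals, and Sec-PaLM is not directly callable by the public, but the underlying pattern is familiar: send a suspicious snippet to a large language model with a security-focused prompt. Here is a minimal sketch using Google's general-purpose text-bison model on Vertex AI as a stand-in; the project ID, prompt wording, and model choice are illustrative assumptions, not Code Insight's actual implementation:

```python
# Sketch only: Code Insight's internals are not public. This uses the
# general-purpose text-bison model on Vertex AI as a stand-in for Sec-PaLM.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # assumed project ID

model = TextGenerationModel.from_pretrained("text-bison@001")

# An obfuscated PowerShell one-liner of the kind Code Insight summarizes.
suspicious_script = (
    "powershell -nop -w hidden -enc "
    "SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA..."
)

prompt = (
    "You are a malware analyst. Summarize in plain English what the "
    "following script does and whether it appears malicious:\n\n"
    + suspicious_script
)

response = model.predict(prompt, temperature=0.2, max_output_tokens=512)
print(response.text)  # natural language verdict for the analyst
```

A low temperature is used here because an analyst wants a consistent, factual description of the code's behavior rather than creative variation.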
Google is also using machine learning models to detect and respond to API abuse and business logic attacks, in which an adversary weaponizes legitimate functionality to achieve a nefarious goal without triggering a security alert.
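Google has not detailed these models, but business logic abuse is commonly caught by modeling normal per-client API behavior and flagging statistical outliers, since no single request violates a rule. A generic sketch with scikit-learn's IsolationForest, where the feature schema is an illustrative assumption rather than Google's approach:

```python
# Illustrative only: Google's actual detection models are not public.
# This flags API clients whose aggregate usage deviates from a learned
# baseline, even when every individual request is technically legitimate.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-client features over a time window (assumed schema):
# [requests_per_minute, distinct_endpoints, error_rate, avg_payload_kb]
baseline = np.array([
    [12, 3, 0.01, 1.2],
    [15, 4, 0.02, 0.9],
    [10, 2, 0.00, 1.5],
    [14, 3, 0.01, 1.1],
] * 50)  # repeated rows stand in for historical traffic

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(baseline)

# A scraper enumerating endpoints at high volume: each call is valid,
# but the overall pattern is anomalous.
suspect = np.array([[480, 60, 0.00, 0.3]])
print(detector.predict(suspect))  # -1 means flagged as an outlier
```

The point of the example is the framing, not the algorithm: because the attack consists of legitimate calls, detection has to operate on behavioral aggregates rather than per-request signatures.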
Security AI Workbench is built on Google Cloud's Vertex AI infrastructure, giving customers control over their data through enterprise-grade capabilities such as data isolation, data protection, sovereignty, and compliance support.
The announcement follows the creation of a new unit called Google DeepMind, which brings together the AI research groups from DeepMind and the Brain team from Google Research to “build more capable systems more safely and responsibly.”
The launch also follows GitLab’s plans to integrate AI into its platform to help developers avoid false positives during security testing and prevent the leaking of access tokens.