Silicon Valley startup Trust Lab, founded by a former head of Trust and Safety at Google, has secured $15 million in venture capital funding to develop AI-powered technology aimed at detecting and monitoring harmful content on the internet.
The Series A funding round was led by U.S. Venture Partners (USVP) and Foundation Capital, both prominent investors in the cybersecurity startup space. Trust Lab positions itself as an outsourced moderation provider, offering a suite of tools for monitoring harmful and illegal content, meeting compliance requirements, and enforcing policies against it at scale.
Trust Lab’s technology employs AI-enabled classifiers and rules engines to detect and monitor harmful content and actors. The company’s solution is already used by government agencies in Europe, In-Q-Tel, and several leading social media platforms, messaging services, and online marketplaces.
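Trust Lab has not published how its classifiers and rules engines are built, but pairing a model score with named, hand-written rules is a common pattern in content moderation. The Python sketch below is a minimal illustration of that pattern only; the scoring function, rule names, and threshold are all invented for the example and are not Trust Lab's.

```python
from dataclasses import dataclass

# Illustrative only: the scoring function below is a stand-in for a trained
# ML classifier, and the rules are hypothetical examples.

@dataclass
class Post:
    platform: str
    author: str
    text: str

def model_score(post: Post) -> float:
    """Stand-in for an ML classifier returning a harm score in [0, 1]."""
    flagged_terms = {"attack", "recruit"}  # toy vocabulary for the sketch
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

# A rules engine is typically a set of named predicates evaluated alongside the model.
RULES = {
    "contains_known_bad_link": lambda p: "badsite.example" in p.text,  # hypothetical rule
    "excessive_length_spam": lambda p: len(p.text) > 5000,
}

def moderate(post: Post, threshold: float = 0.7) -> dict:
    """Combine the classifier score with rule hits into a single decision."""
    score = model_score(post)
    fired = [name for name, rule in RULES.items() if rule(post)]
    return {
        "score": score,
        "rules_fired": fired,
        "action": "escalate" if score >= threshold or fired else "allow",
    }

if __name__ == "__main__":
    print(moderate(Post("forum", "user123", "Visit badsite.example for details")))
```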
In a notable collaboration, Trust Lab partnered with the European Commission last year on a 40-week project across eight European markets to track and measure the dissemination of terrorist and extremely violent content on social media.
One of the key features of Trust Lab’s technology is its ability to identify harmful content and actors across different platforms using “digital fingerprints” and network graphs. Matching content and accounts across platforms in this way supports broader monitoring and detection of harmful activity and informs enforcement and mitigation strategies.
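The company has not detailed how its fingerprints are computed. As a rough illustration of the general idea, the sketch below hashes normalized content into a fingerprint and then links accounts on different platforms that posted matching content, which is one simple way to build the kind of cross-platform network graph described above. The hashing scheme, platform names, and sample posts are all invented; production systems typically use perceptual or locality-sensitive hashes that tolerate small edits.

```python
import hashlib
import re
from collections import defaultdict
from itertools import combinations

def fingerprint(text: str) -> str:
    """Toy digital fingerprint: a hash of normalized text. Real systems often
    use perceptual/locality-sensitive hashes so near-duplicates still match."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

# Invented sample data: (platform, account, content)
posts = [
    ("platform_a", "acct_1", "Join our cause, details at the usual place"),
    ("platform_b", "acct_7", "JOIN our cause,  details at the usual place"),
    ("platform_c", "acct_3", "Totally unrelated recipe for soup"),
]

# Index accounts by the fingerprints of content they posted.
accounts_by_fp = defaultdict(set)
for platform, account, text in posts:
    accounts_by_fp[fingerprint(text)].add((platform, account))

# Network graph edges: accounts (possibly on different platforms) sharing a fingerprint.
edges = set()
for fp, accounts in accounts_by_fp.items():
    for a, b in combinations(sorted(accounts), 2):
        edges.add((a, b, fp))

for a, b, fp in sorted(edges):
    print(f"{a} <-> {b} share fingerprint {fp}")
```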
With its AI-powered approach and strategic partnerships, Trust Lab aims to improve online safety and curb the proliferation of harmful content on the internet.