A growing number of major corporations, including Walmart, T-Mobile, Chevron, AstraZeneca, Nestlé, and Starbucks, are using Aware's AI software to analyze their employees' private messages on communication platforms such as Slack and Microsoft Teams, according to CNBC. Founded in 2017, Aware extracts data from platforms including Google Drive, Zoom, and Workplace from Meta. The AI analyzes digital conversations using Natural Language Processing (NLP) and Computer Vision (CV) neural network models, giving employers insight into campaign feedback, overall employee mood, and potential policy violations.
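The article does not describe Aware's internals, but the kind of NLP sentiment pass it alludes to can be sketched in a few lines. The snippet below uses Hugging Face's `transformers` sentiment pipeline purely for illustration; the model choice, the sample messages, and the "mood" aggregation are assumptions, not Aware's actual system.

```python
# Illustrative only: a generic sentiment pass over workplace messages.
# This is NOT Aware's pipeline; the model and aggregation are assumptions.
from collections import Counter
from transformers import pipeline

# Default sentiment model; a vendor like Aware would instead train on
# large volumes of domain-specific workplace conversation data.
classifier = pipeline("sentiment-analysis")

messages = [
    "Really excited about the new benefits rollout!",
    "This deadline is impossible and nobody is listening.",
    "Thanks for covering my shift, appreciate it.",
]

results = classifier(messages)
mood = Counter(r["label"] for r in results)

print(mood)                                            # e.g. Counter({'POSITIVE': 2, 'NEGATIVE': 1})
print("negative share:", mood["NEGATIVE"] / len(messages))
```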
Aware says its models are trained on a dataset of tens of millions of conversations, which it claims yields more accurate insights than generic datasets. When a new client joins the platform, the AI takes approximately two weeks to train on that company's employee messages, allowing it to learn company-specific patterns. Companies retain control over employee privacy, but in extreme-risk cases employers can access the identities of employees whose messages the AI flags for policy violations. Aware's risk assessment reports highlight issues such as the sharing of sensitive information and the use of inappropriate language, with 1 in 95 messages containing toxic speech.
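The escalation behavior described above, aggregate and anonymized analytics by default with identities surfaced only for flagged messages, could be modeled roughly as follows. The thresholds, field names, and flagging rule here are hypothetical, chosen only to make the workflow concrete.

```python
# Hypothetical sketch of an aggregate-by-default, escalate-on-flag workflow.
# Thresholds, field names, and the upstream toxicity score are assumptions.
from dataclasses import dataclass

EXTREME_RISK_THRESHOLD = 0.9  # assumed cutoff for revealing an identity

@dataclass
class Message:
    author_id: str
    text: str
    toxicity: float  # score from an upstream NLP model, 0.0 to 1.0

def risk_report(messages: list[Message]) -> dict:
    flagged = [m for m in messages if m.toxicity >= EXTREME_RISK_THRESHOLD]
    return {
        # Anonymized, aggregate view available by default.
        "total_messages": len(messages),
        "toxic_share": sum(m.toxicity >= 0.5 for m in messages) / len(messages),
        # Identities surfaced only for messages crossing the extreme-risk bar.
        "escalated_authors": sorted({m.author_id for m in flagged}),
    }

msgs = [
    Message("u1", "Great sprint everyone!", 0.02),
    Message("u2", "I will make you regret this.", 0.95),
]
print(risk_report(msgs))
```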
However, the widespread use of AI-driven tools to monitor workplace communication has raised concerns about employee privacy. Critics argue that while these tools help identify risks and policy violations, they also threaten employee privacy, since messages are analyzed and, in certain cases, employers gain access to the identities of the people who sent them.