Slack users raised concerns after discovering that their messages and other private data were being used to train AI models. Complaints surfaced on forums such as Hacker News, focusing on the default opt-in policy and the lack of an easily accessible opt-out. Users worried about privacy violations and potential conflicts with data protection regulations such as GDPR.
In response to the backlash, Slack clarified its policies, asserting that it does not train large language models (LLMs) on customer data and that its AI features operate only on data the requesting user can already access, under enterprise-grade security and compliance standards. Users, however, noted discrepancies between these claims and the wording of the AI privacy policy, prompting further scrutiny.
Following that scrutiny and user feedback, Slack updated its policies to address concerns about how data is used for AI training. The revised policy states explicitly that Slack's generative AI features rely on third-party LLMs and that no customer data is used to train those models. Slack also emphasizes that it hosts the models on its own infrastructure, so LLM providers have no access to customer data, strengthening data security and privacy for users.
Despite the controversy, Slack remains a widely used communication and productivity platform with millions of daily and monthly active users. The incident underscores the importance of transparency and accountability in handling user data, particularly in the context of AI development and data privacy regulation.