European businesses are grappling with significant challenges in managing artificial intelligence (AI) tools, according to a recent Sapio Research Finance Pulse report. The study, which surveyed 800 consumers and 375 business decision-makers across the UK, Germany, France, and the Netherlands, reveals that while nearly all organizations acknowledge the potential risks associated with AI, only a fraction have implemented adequate controls. Data security, lack of accountability, and skills gaps are the primary concerns, but formal guidelines for acceptable AI use remain scarce.
The report highlights that just 46% of responding organizations have established formal guidance on the acceptable use of AI in the workplace. Additionally, only 48% restrict the types of data that can be entered into AI models and tools. This is particularly concerning given incidents such as Samsung’s ban on generative AI (GenAI) after employees inadvertently shared sensitive data, including source code and meeting notes.
Furthermore, the study indicates that fewer than two-fifths (38%) of European companies enforce strict access controls on AI tools, and only 48% limit the use of generative AI to specific roles within the organization. These lapses in governance could expand the corporate attack surface and increase exposure to cyber risks, especially as AI tools become more integrated into business operations.
Andrew White, CEO of Sapio Research, emphasized the need for businesses to proceed cautiously with AI adoption. He warned that rapid AI integration without proper oversight could lead to serious challenges related to data privacy, employee performance, and customer satisfaction. As AI continues to evolve, implementing robust governance frameworks will be crucial for mitigating risks and ensuring secure and effective use of these technologies.