European lawmakers have overwhelmingly voted in favor of imposing restrictions on the artificial intelligence (AI) industry, including obligations for makers of generative AI models to mitigate societal risks and outright bans on several applications, such as biometric recognition in public spaces.
The European Parliament’s approval of the AI Act sets the stage for final negotiations, known as the trilogue, involving the Parliament, the European Commission, and the European Council. The move reflects Europe’s commitment to balancing technological advancements with fundamental rights, according to European Parliament President Roberta Metsola.
While the AI industry in Silicon Valley continues to advance, a recent analysis by McKinsey finds that generative AI could automate tasks that currently consume a significant share of employees’ time across many professions.
Cybersecurity practitioners, however, warn that generative AI can make phishing attacks more convincing and help low-skilled criminals write malicious code. Proponents counter that AI can be leveraged just as effectively to identify and combat cybercrime.
Although Brussels is at the forefront of AI regulation, other global power centers are also considering similar measures. The Biden administration is exploring the establishment of an “AI accountability ecosystem,” while Beijing is preparing a censorship regime for generative AI. The European proposal categorizes AI systems based on their risks and expands the list of banned applications to include biometric identification systems in public spaces, bulk scraping for facial recognition databases, and systems that categorize individuals based on physical traits or inferred attributes.
High-risk AI systems, used in critical infrastructure, law enforcement, or the workplace, would be subject to stringent requirements, including registration, risk assessment, human oversight, and process documentation.
The introduction of ChatGPT and other foundation models prompted lawmakers to incorporate new provisions. Before bringing a product to market, developers must now demonstrate that they have reduced its risks to health, safety, fundamental rights, the environment, democracy, and the rule of law.
They are also required to publish detailed summaries of the copyrighted data used to train their models. Trilogue talks are set to commence with the aim of reaching an agreement before January, further shaping the regulatory landscape for AI in Europe.