Leading artificial intelligence experts are urging governments and tech companies to develop safeguards for AI systems to avert potential existential threats. In an essay co-authored by 24 academics and experts, including renowned figures such as Yoshua Bengio and Geoffrey Hinton, the authors express concern about reckless advances in autonomous and cutting-edge AI technologies.
They emphasize the urgent need for national and international institutions to enforce AI standards so that capabilities are not pursued at the expense of safety and human oversight. The experts single out “frontier systems,” the most powerful AI models, as especially concerning, since they may exhibit hazardous and unpredictable capabilities.
The essay warns that future AI systems could “learn to feign obedience” to human directives and exploit vulnerabilities in safety measures, potentially bypassing human intervention and compromising the critical computer systems that underpin many sectors. Unregulated AI development, they argue, could lead to catastrophic consequences: loss of life, environmental damage, and even the extinction of humanity. The experts call for allocating one-third of AI research and development budgets to safety and ethics, along with government oversight, legal protection for whistleblowers, and mandatory reporting requirements.
To prevent unchecked AI advancement, the authors suggest defining red-line AI capabilities that would necessitate intervention, along with commitments from developers on how they will respond if AI models cross these boundaries. They emphasize that these measures need to be “detailed and independently scrutinized.”
Their plea aligns with growing calls for AI safeguards, such as Microsoft President Brad Smith’s recent advocacy for an AI “safety brake” on its deployment, but they acknowledge that the tech industry has yet to adopt such restraints. As the authors underline, steering AI toward responsible outcomes is essential to avoiding potential catastrophe.