In a recent interview, Nate Fick, the State Department’s ambassador at large for cyberspace and digital policy, highlighted the urgent need for regulatory measures surrounding artificial intelligence (AI). Fick emphasized the potential risks associated with AI, China’s role in the landscape, and the vulnerabilities within internet infrastructure. He stressed that the United States has less than a year to establish a regulatory framework for generative AI and large language models.
Failing to do so, according to Fick, could result in the misuse of the technology for spreading disinformation, committing violence, and executing cyberattacks. Fick expressed concern about a potential “fifth model” of AI emerging from a less trustworthy or open-sourced source, pointing to established companies like Google, Microsoft, OpenAI, and Anthropic as reliable models.
The interview, conducted at the Hudson Institute in Washington, delved into the broader national conversation surrounding AI and its implications. Fick highlighted the immediate risk of disinformation and misinformation, especially in political discourse, expressing worries about the upcoming presidential election and the challenges AI poses in discerning truth from fiction. He also voiced deeper concerns about lethal applications of AI, including autonomous weapons, biotechnology, and cybersecurity, underscoring the need for regulatory and governance infrastructure to mitigate these risks.