The adviser to the U.K. prime minister’s AI task force has expressed the need for regulation and control over large AI models within the next two years to mitigate significant existential risks.
During an interview with a local U.K. media outlet, Matt Clifford, who also chairs the government’s Advanced Research and Invention Agency (ARIA), emphasized that AI systems are becoming more capable at an ever-increasing rate.
Clifford stressed that safety and regulation must be considered now, because these systems will be extremely powerful within two years. He highlighted the urgency of establishing a framework that allows such large AI models to be better controlled and regulated.
Clifford warned about the various types of risks associated with AI, describing them as “pretty scary,” both in the short and long term. His concerns align with an open letter signed by 350 AI experts, including OpenAI CEO Sam Altman, which compares the existential threat of AI to that of nuclear weapons and pandemics.
The AI task force adviser said that AI poses potential dangers to humans, particularly as models are expected to advance significantly within two years. He emphasized the need for regulators and developers to prioritize understanding how to control these models and to implement global regulations.
Clifford’s primary concern is the lack of understanding of how AI models behave. He noted that even those building the most capable systems admit they do not fully understand the behaviors these systems exhibit. He emphasized the importance of subjecting powerful AI models to auditing and evaluation before deployment, a sentiment shared by many AI organizations.
Regulators worldwide are currently working to comprehend AI technology and its implications while aiming to create regulations that safeguard users without stifling innovation. In the European Union, officials have proposed labeling all AI-generated content to combat disinformation.
Within the U.K., a member of the opposition Labour Party has endorsed regulating the technology in the same way as medicine and nuclear power, echoing the sentiments expressed in the Center for AI Safety’s letter.