OpenAI CEO ready to quit EU if law puts AI in ‘high risk’ zone
The law is undergoing revisions and may require large AI models like OpenAI’s ChatGPT and GPT-4 to be designated as “high risk”, Time reported.
LONDON: OpenAI CEO Sam Altman has threatened to pull the company out of the European Union (EU) if the bloc presses ahead with its flagship artificial intelligence (AI) law in its current form.
Speaking on the sidelines of a panel discussion at University College London, Altman said the company could “cease operating” in the EU if it is unable to comply with the new AI legislation.
“Either we’ll be able to solve those requirements or not. If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible,” Altman was quoted as saying.
“We’re going to try to comply,” he added. OpenAI’s scepticism centres on the EU law’s designation of “high risk” AI systems.
Altman said he was nonetheless worried about the risks stemming from AI, warning that AI-generated disinformation could affect the 2024 US election. However, he argued that social media platforms were more important drivers of disinformation than AI language models. “You can generate all the disinformation you want with GPT-4, but if it’s not being spread, it’s not going to do much,” he was quoted as saying in the report.
Earlier this week, the OpenAI CEO said now is a good time to start thinking about the governance of superintelligence -- future AI systems dramatically more capable than even artificial general intelligence (AGI).
Altman stressed that the world must mitigate the risks of today’s AI technology too, “but superintelligence will require special treatment and coordination”.