AI revolution yet to come, regulations may hinder innovation: Meta AI chief
SEOUL: Yann LeCun, chief artificial intelligence (AI) scientist at global tech giant Meta Platforms, said the "real AI revolution" has yet to come, calling on governments not to enact laws that would hinder the development of the technology.
"The real AI revolution has not yet arrived," LeCun said in an opening speech for the 2024 K-Science and Technology Global Forum in Seoul, hosted by South Korea's science ministry, reports Yonhap news agency.
"In the near future, every single one of our interactions with the digital world will be mediated by AI assistants ... and what we need eventually are systems that basically have the same level of intelligence as humans,” he noted.
The pioneer of modern AI said generative AI based on large language models (LLMs), such as OpenAI's ChatGPT and Meta's Llama, has limits in understanding the physical world and in reasoning and planning the way humans do.
LLMs can deal with language because it is simple and discrete, but they cannot deal with the complexity of the real world, he explained.
To overcome these limits, Meta is working to build objective-driven AI based on a new type of architecture that can understand the physical world by observing it, as babies do, and make predictions based on that understanding.
LeCun also stressed the importance of an open-source AI ecosystem for creating AI models that comprehend the world's different languages, cultural contexts and value systems.
"We can't have a single entity somewhere on the west coast of the United States train those models," he said, calling for the need for an AI system that can be trained collaboratively across the world.
The AI expert warned that "regulation can kill open source," urging governments not to prematurely pass laws that would hinder the advancement of the technology. "There is zero demonstration that any AI system is intrinsically dangerous," he added.