What is the story about?
Artificial intelligence has become the buzzword everywhere, from offices and politics to diplomacy, war, films and coding, more so after OpenAI launched its chatbot ChatGPT in 2022. The tech industry has been racing ever since: traditional Big Tech, including Meta, Google and Microsoft, and newer entrants such as Anthropic and Perplexity, are all sprinting to build the best AI.
This triggers a debate: what is the future of AI?
Going by the recent chatter, tech experts have been insisting that the next step for AI is Artificial General Intelligence (AGI). For context, AGI is a hypothetical form of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a human level (or beyond).
Unlike today’s AI systems, which excel at specific tasks such as generating text, recognising images, or playing chess, AGI would be capable of general reasoning and problem-solving across domains. It could learn new skills, adapt to unfamiliar situations, and perform any intellectual task a human can.
While xAI CEO Elon Musk, Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman are obsessed with reaching AGI, Professor Yann LeCun, often called one of the ‘godfathers of AI’, has heavily criticised the pursuit.
The future of AI is overrated
At the World Economic Forum (WEF) in Davos, LeCun dismissed the industry’s current obsession with AGI as misplaced and overstated. The Meta veteran argued that AGI has been given far too much importance and that the technology powering today’s breakthroughs is fundamentally limited.
LeCun cautioned that simply scaling up large language models (LLMs) such as ChatGPT will not lead to genuine intelligence. He said the industry is chasing a mirage by assuming that bigger data and more powerful computing will automatically produce human-like understanding.
What AI truly needs, he said, is a complete paradigm shift, not another iteration of the same design.
Interestingly, LeCun is not the only one saying this. Former Infosys CEO Vishal Sikka and his son published a paper arguing that LLMs cannot perform arbitrarily complex tasks: a model can only perform a fixed number of computations per response, a ceiling set by its architecture.
“If a task requires more computation than that ceiling, the model will either fail or hallucinate. This isn’t a maybe. It’s baked into the math of how these systems work,” the paper said. Firstpost has also previously decoded why AI models hallucinate.
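To make that ceiling concrete, here is a back-of-envelope sketch of the idea, not a calculation from the Sikka paper itself: a dense transformer runs a fixed number of layers per token, so once the architecture and the response-length limit are chosen, the total compute available for one answer is fixed in advance. The model dimensions below are hypothetical, and the FLOPs figure uses the common ~2 × parameter-count-per-token approximation.

```python
# Back-of-envelope sketch of the "fixed computation per response" ceiling.
# All model dimensions here are hypothetical, chosen only for illustration.

def approx_params(n_layers: int, d_model: int) -> int:
    """Rough dense-transformer parameter count: ~12 * layers * width^2
    (attention + MLP blocks; embeddings and small terms ignored)."""
    return 12 * n_layers * d_model ** 2

def flops_per_response(n_layers: int, d_model: int, max_tokens: int) -> float:
    """Compute budget for one response, using the common estimate of
    ~2 FLOPs per parameter per generated token. This budget is fixed
    once the architecture (depth, width) and output-length cap are fixed."""
    return 2.0 * approx_params(n_layers, d_model) * max_tokens

# A hypothetical GPT-3-scale model: 96 layers, width 12,288, 4,096-token cap.
budget = flops_per_response(n_layers=96, d_model=12288, max_tokens=4096)
print(f"~{approx_params(96, 12288):.2e} params, "
      f"compute ceiling per response: ~{budget:.2e} FLOPs")

# The thrust of the paper's argument: any task whose solution needs more
# sequential computation than this fixed budget cannot be computed exactly
# within one response -- the model has to approximate instead, and that is
# where failures and hallucinations show up.
```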
LeCun was also particularly critical of the emerging push toward autonomous or “agentic” AI systems, which are built to act and make decisions independently. He warned that these systems lack the ability to anticipate outcomes or understand cause and effect, a basic requirement for intelligent behaviour.
The renowned scientist’s broader concern lies in the narrow thinking that dominates AI development. By focusing on hype rather than substance, LeCun believes the industry risks building systems that appear clever but lack true understanding of the real world, making AGI more fantasy than future.
The herd effect
It is not the first time LeCun has spoken out against the AI movement. Since leaving Meta in November 2025, he has been criticising Silicon Valley for its single-minded approach to building AI models.
He has criticised what he calls a growing “herd culture” in the artificial intelligence industry, warning that the obsession with replicating large language models such as ChatGPT is stifling innovation. He believes these models have already reached their peak potential and that pouring more resources into them will not bring humanity closer to truly intelligent machines.
In a recent interview from Paris, LeCun said that tech companies are blindly following the same path, leaving little room for alternative, more inventive research directions. He suggested that firms in China might outpace the West by exploring newer, more experimental approaches to AI.
The trend is evident: ever since ChatGPT’s explosive debut in 2022, competitors such as Google, Meta, Microsoft, and xAI have all rushed to launch their own chatbots.
For LeCun, this copycat strategy threatens to limit creativity and could eventually hold the field back.