What is the story about?
Artificial General Intelligence, or AGI, has long occupied a near-mythical place in the world of artificial intelligence, spoken about as the ultimate breakthrough that always seems just over the horizon.
It promises machines that do not just process information, but think, reason and adapt across tasks in ways that mirror human intelligence. For years, it has been framed as the endgame for companies like OpenAI and xAI, both racing to push the boundaries of what machines can do.
But what if that finish line is not as distant as it seems? A growing number of voices in the industry are beginning to question whether AGI is still a future milestone or something that has, in some form, already arrived. The answer, it turns out, may depend less on technological capability and more on how one chooses to define intelligence itself.
Is AGI really here? Nvidia CEO explains
Nvidia CEO Jensen Huang has added a provocative twist to the AGI conversation. Speaking on the Lex Fridman Podcast, he responded to a definition posed by the host, who suggested that AGI could be understood as an AI capable of creating and running a billion-dollar company.
Huang’s answer was strikingly direct. By that standard, he argued, AGI is not a distant ambition but a present reality. His stance, however, hinges on a narrower interpretation of the concept rather than the broader, more ambitious vision of machines fully replicating human cognition.
To support his view, Huang pointed to emerging AI agent systems such as OpenClaw, which allow users to run autonomous AI agents locally. These systems, he suggested, could theoretically build applications or services that scale rapidly, attracting massive user bases and generating significant revenue in a short span of time.
He likened this possibility to the early days of the internet boom, when companies could achieve explosive growth almost overnight. Yet, he also acknowledged the volatility of such success, noting that many of those early digital ventures faded just as quickly as they rose.
The convoluted AGI debate
Huang’s perspective is far from universally accepted. Within the AI community, the definition of AGI remains deeply contested, and so does the timeline for its arrival.
Google DeepMind CEO Demis Hassabis has taken a more cautious view, arguing that current systems still fall short in key areas such as long-term planning, reasoning consistency and continuous learning. In his assessment, truly general intelligence is still several years away.
At the other end of the spectrum, Elon Musk has repeatedly suggested that AGI could emerge much sooner, possibly within the next few years.
Even Huang himself draws a clear boundary. While he sees potential in AI-driven systems creating short-term economic value, he remains sceptical about their ability to build and sustain complex organisations. Replicating a company like Nvidia, with its long-term strategy, innovation cycles and organisational depth, is, in his view, well beyond the reach of current AI.
This divergence highlights a fundamental issue: AGI is not a single, universally agreed-upon benchmark. It is a moving target shaped by differing expectations, technical interpretations and commercial ambitions.
Anthropic Claude AI agentic features
While the AGI debate continues, progress in AI systems is becoming increasingly tangible through agentic capabilities. Tools developed by Anthropic, particularly its Claude models, are a case in point.
These systems are designed not just to respond to prompts but to carry out multi-step tasks, reason through complex problems and assist in workflows that previously required sustained human input. From writing and debugging code to generating detailed analyses, Claude is part of a broader shift towards AI that can act with a degree of autonomy.
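The agentic pattern described above can be sketched as a simple loop: the model chooses a tool, the harness executes it, and the result is fed back until the model decides it is finished. The sketch below is purely illustrative; the stubbed model, tool names and message format are assumptions for the example, not Anthropic's actual API.

```python
def run_python(expr: str) -> str:
    """A toy 'code execution' tool: evaluate an arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"run_python": run_python}

def stub_model(history):
    """Stand-in for an LLM call: request one tool call, then finish."""
    tool_results = [m for m in history if m["role"] == "tool"]
    if not tool_results:
        return {"action": "run_python", "input": "6 * 7"}
    return {"action": "finish", "input": f"The answer is {tool_results[-1]['content']}."}

def agent_loop(task: str, model, max_steps: int = 5) -> str:
    # Accumulate the conversation; each tool observation goes back to the model.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = model(history)
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        history.append({"role": "tool", "content": observation})
    return "step budget exhausted"

print(agent_loop("What is 6 * 7?", stub_model))  # → The answer is 42.
```

The key design point is that autonomy lives in the loop, not the model: the same harness works whether the model makes one tool call or many, which is what lets such systems carry out multi-step tasks without a human driving each step.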
Such developments may not fully meet the traditional definition of AGI, but they are steadily blurring the line between narrow AI and more general-purpose intelligence. As agentic systems become more capable, the question is no longer simply whether AGI has arrived, but whether the industry’s definition of it is evolving in real time.