What is the story about?
Meta’s ambitions in artificial intelligence have yet to deliver the kind of industry-leading breakthroughs it has long promised.
Given the scale of its investments, its progress so far has been underwhelming at best. Yet the company remains firmly committed to the space, reportedly planning to pour more than $600 billion into AI development.
Now it appears to be pivoting its strategy, betting that broader access, rather than raw model dominance, could define the next phase of the AI race.
Meta’s new open-source AI model
Meta is preparing to release a new generation of AI models, reports Axios. The rollout will be the first major one under the leadership of Alexandr Wang, founder of Scale AI, which Meta acquired in a bid to strengthen its data and training capabilities.
These upcoming models are expected to adopt a more open approach, with the company offering them under licensing frameworks that resemble open-source distribution, albeit with certain restrictions.
Unlike competitors that rely heavily on closed, subscription-based ecosystems, Meta’s strategy leans towards accessibility. While some components of the models may remain proprietary for safety and compliance reasons, the broader aim is to allow developers and enterprises to build on top of its systems. This hybrid approach could lower entry barriers for companies that lack the resources to train large-scale models independently.
The rationale is grounded in industry trends. Training frontier AI models is becoming prohibitively expensive, prompting many firms to build upon existing open models rather than starting from scratch. Meta is positioning itself to become the primary supplier of such foundational technology, potentially capturing a wide developer base even if its models are not the most powerful on the market.
Meta’s record in the AI race
Despite the strategic shift, Meta’s track record in AI remains uneven. Its previous Llama models, though marketed as open, faced criticism for restrictive licensing terms and failed to achieve widespread adoption. The release of Llama 4, in particular, did not meet performance expectations and struggled to compete with leading models from rivals.
Internally, the company has undergone repeated restructuring efforts and invested heavily in talent acquisition, including offering substantial compensation packages to attract top researchers. However, these moves have yet to translate into consistent technological leadership. A planned model launch was recently delayed amid concerns about underperformance, highlighting ongoing challenges in execution.
There have also been indications of internal friction, with reports suggesting disagreements between senior leadership figures over the direction and readiness of Meta’s AI initiatives. With Wang now playing a central role in the upcoming releases, the stakes are high. Success could validate Meta’s open-access strategy, while failure may further erode confidence in its ability to compete at the cutting edge.
Ultimately, Meta’s gamble is clear: if it cannot lead on capability alone, it may still shape the market by democratising access. Whether that proves sufficient in an increasingly competitive AI landscape remains to be seen.