What is the story about?
Artificial intelligence (AI) needs to move beyond the control of a few wealthy individuals and become universally accessible, and that can only happen with truly open-source models, not just models that have 'open' in their name but are otherwise closed, OpenUK CEO Amanda Brock has told Firstpost.
In an interview with Madhur Sharma, Brock discussed the democratisation of AI and India's place in the global AI movement.
Brock said that India’s long-term strength will be significantly enhanced through collaboration, by advancing its own capabilities while also bringing other nations along in that journey.
As digital infrastructure has increasingly come to be seen as critical infrastructure, Brock said that a balanced mix of publicly and privately funded open-source infrastructure is needed to ensure both innovation and accessibility, and cited Sarvam AI as a model for such an approach.
Here is the full conversation — edited for clarity and brevity.
How important are open-source AI models for the democratisation of AI? How can open-source AI models maximise the benefits of AI?
In a world where a handful of wealthy individuals currently control the destiny of AI, open source offers the potential for universal access. It empowers innovators by enabling them to iteratively build upon existing open-source innovations. However, this must be true open source —not a non-commercial proprietary model— to fully realise its potential. Furthermore, the open-source model provides access to all users, thereby democratising technology for both innovators and end users.
Most widely used AI systems —such as ChatGPT, Claude, and Gemini— are closed models controlled by a handful of large companies with massive compute power. How can open-source systems realistically compete with such tech giants? What is needed to prevent AI from becoming an oligopoly? Do you see a state-led regulatory framework as the way forward or should the road ahead be an international framework or self-regulation?
There is a significant amount of inappropriate scare-mongering around open source, often driven by companies whose business models rely on selling closed AI systems — even when their names suggest openness. This was particularly evident at the Impact Summit last week.
China’s strategy to focus on open source is nearly a decade old and, over time, has contributed to the development of some of today’s most well-known AI models, such as Kimi, Qwen, and DeepSeek, which are open source.
When open-source software first emerged, there was widespread concern among the uninformed about potential risks. Over time, however, perceptions shifted for a variety of reasons, and AI is likely to experience a similar turning point as understanding improves. Today, open-source software underpins much of our digital economy, providing a globally collaborative foundation that closed, proprietary software simply cannot match. More than 76 per cent of infrastructure software is open source, and over 92 per cent of all software depends on open source in some capacity.
There was notably less discussion in Delhi than in Paris regarding global governance. However, global collaboration will be critical —particularly among middle-power nations such as the UK and India. In this context, it was especially interesting to hear Yann LeCun advocate for the development of a globally collaborative LLM.
Do you believe governments should treat AI-related infrastructure —data centres, chips, electricity requirements, etc.— like public infrastructure such as roads and railways?
I believe we need a balanced mix of publicly and privately funded open-source infrastructure to ensure both innovation and accessibility. In India, Sarvam AI has received support through such a model. However, the next critical step will be to move beyond the closed, proprietary version launched during the Impact Summit and transition toward a truly open-weights framework.
You have previously argued that sovereignty is not isolation but sovereign capability in the context of AI. Where is the line between healthy technological sovereignty and regulatory fragmentation across jurisdictions that could slow global collaboration? How do you view the tussle between national control and global interoperability?
In 2026, we need a clear ontology to better understand the language we are using — particularly terms such as sovereignty, which can carry multiple and often conflicting meanings. In my view, technological sovereignty must be globally collaborative rather than isolationist in nature.
While sovereign capability is undeniably important, it should neither be overstated nor pursued in silos. Instead, it must be developed collaboratively and positioned appropriately within the AI stack. I was particularly struck by how Emmanuel Macron framed this issue, suggesting that the right approach might involve an LLM for France and an SLM for India: distinct solutions, each aligned with the specific needs of their respective domestic markets.
When it comes to the democratisation of AI, access to models —through open source— is just one factor. True democratisation requires skills, data, and infrastructure, including computing power. What does meaningful AI democratisation look like for the Global South, which lacks the resources of advanced countries like the United States or China?
Personally, when I refer to open source, I am not referring to data. As I mentioned earlier, we need a clearer ontology around these terms. I would distinguish explicitly between open source and open data — and what we urgently need are well-governed, responsibly managed open datasets.
While open source is fundamental to democratisation, meaningful access must extend beyond code. It includes access to shared data commons, to open-source software tools that engineers can use to build safe and trustworthy AI systems, and to compute infrastructure. All three pillars —code, data, and compute— must be addressed together.
One insight I took away from visiting open-source communities in China last year was their strong focus on efficiency — an almost obsessive commitment to leanness in order to reduce compute requirements. This discipline will only grow in importance.
If we continue building and deploying bloated AI systems that consume vast amounts of compute in fossil-fuel-powered data centres, the real risk will not be to our jobs, but to our planet.
India has positioned itself as a leader of the Global South in recent years. Where does India stand when it comes to AI? What specific steps should it take over the next five years if it wants to emerge as a responsible AI leader with a focus on open source?
The Stanford Index indicates that India is making steady progress among middle-tier nations and, in terms of open AI, was ranked 10th at the time of our most recent research. The UK is slightly ahead, but both countries still have significant work to do.
What has become increasingly apparent is that there is now greater scope than ever for cooperation to build competitive capability. And this is where open source presents a meaningful opportunity. The UK is already India’s second-largest collaboration partner in openness, behind the US, which is particularly notable given the relative size of our two economies.
A follow-up: Where do you think India currently stands in the global AI landscape? Do you believe India’s ambitions to be an AI leader in the Global South are realistic?
I believe India is emerging as a key leader within the Global South. As we often say in the open-source community, a rising tide lifts all boats. India’s long-term strength will be significantly enhanced through collaboration—by advancing its own capabilities while also bringing other nations along in that journey.