What is the story about?
India’s healthcare AI ambitions are accelerating at remarkable speed. From Ayushman Bharat Health Account (Abha)-linked patient records and connected diagnostic devices to AI-powered screening systems, the country is quietly building one of the world’s largest digital health ecosystems.
But beneath the optimism surrounding artificial intelligence in healthcare lies a more unsettling concern: what happens when deeply personal biological information becomes vulnerable at national scale?
For Ashissh Raichura, the founder and CEO of Scanbo India, the answer is that health data cannot be treated like ordinary digital information because, unlike passwords or bank credentials, it is permanent.
“You can replace a bank card. You cannot replace your biology,” he says.
That permanence, he argues, changes the entire conversation around healthcare AI, data governance and digital trust. A breach involving medical histories, diagnostic records, biometric information or genetic markers is not merely a short-term cybersecurity issue. It could evolve into a long-term familial and generational risk.
India’s healthcare AI expansion is creating an unprecedented data ecosystem
India’s digital public health infrastructure has expanded rapidly in recent years through initiatives such as the Ayushman Bharat Digital Mission. Together with hospital systems, wearable devices and connected diagnostics, the ecosystem is generating vast volumes of health information linked to individual citizens.
As of March 2026, more than 86 crore Abha accounts had reportedly been created, while over 90 crore health records had been linked to them.
For Raichura, the scale of this transformation creates both extraordinary potential and significant responsibility.
Health records do not simply capture routine information. They reveal intimate biological patterns, chronic conditions, hereditary traits and behavioural histories that can expose deeply personal details about individuals and, in some cases, entire families.
Unlike social media data or browsing histories, biological data cannot truly be reset after exposure. Once compromised, its implications may persist indefinitely.
Raichura believes India must therefore view health information as a sovereign, citizen-owned asset rather than raw material for commercial AI systems.
He argues that citizens should retain meaningful control over how their data is collected, shared and monetised. Transparent consent frameworks, accountability mechanisms and stronger ownership protections, he says, will become essential as AI adoption increases across healthcare systems.
“The challenge is not whether health data should power innovation,” he says. “It absolutely should. The question is whether the ecosystem is built around dignity and ownership or around silent extraction.”
Healthcare AI should support clinicians, not become a ‘black box’
Alongside concerns about privacy and governance, Raichura also believes India must avoid designing healthcare AI systems that weaken human-centred care.
India already depends heavily on frontline healthcare workers, including more than 10.29 lakh Accredited Social Health Activists (Ashas) and nearly 89,000 Auxiliary Nurse Midwives (ANMs), many of whom serve as the first point of contact in underserved communities.
For him, AI should strengthen these healthcare networks rather than attempt to replace them.
“The human provider does not become less relevant when the technology is designed well,” he says. “She becomes more capable.”
In practical terms, this means AI systems should function as decision-support tools that help clinicians detect risks earlier, improve diagnostic consistency and extend specialist expertise into remote regions.
But Raichura warns against building opaque systems where algorithms operate beyond human understanding or accountability.
“Technology must serve the provider and the patient,” he says. “It should not turn care into a black box.”
He argues that patients do not simply seek accurate outputs from healthcare systems. They also need trust, reassurance and contextual understanding, elements that remain deeply human.
Point-of-care diagnostic systems, in his view, could play a major role in improving healthcare delivery by generating cleaner and more structured clinical data during patient interactions. Better-quality inputs would allow AI systems to support earlier interventions while improving reliability across care settings.
However, the success of healthcare AI will ultimately depend not only on technological sophistication, but also on whether citizens trust the systems being built around them.
For India, the future of healthcare AI may therefore hinge on a delicate balance: harnessing the power of intelligent systems without compromising human judgement, patient dignity or the permanence of biological privacy.















