As artificial intelligence systems become increasingly embedded in healthcare, education, financial services, and public infrastructure, experts are warning that governance and accountability mechanisms are struggling to keep pace with rapid technological deployment.
Recent industry assessments and academic research indicate that gaps in transparency, evaluation standards, and human oversight may pose growing risks to public trust and regulatory compliance, particularly in high-impact sectors.
“Many AI systems perform well in controlled environments but behave unpredictably in real-world settings,” said Pandya Kartikkumar Ashokbhai, an applied artificial intelligence researcher based in the United States. “Without continuous monitoring and explainability, institutions may find it difficult to rely on these technologies for critical decision-making.”
Global benchmarking reports released in early 2026 found that fewer than one in five major technology companies fully comply with recognized ethical and accountability frameworks. Many organizations still lack standardized procedures for documenting risk assessments, system limitations, and social impact evaluations.
Researchers have also observed that advanced AI models often exhibit failure modes that surface only outside laboratory conditions. Safety evaluations have highlighted unreliable generalization, sensitivity to domain shift, and limited interpretability, particularly in healthcare diagnostics, educational platforms, and public service applications.
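To make the domain-shift problem concrete, a common monitoring practice, shown here as an illustrative sketch rather than a method attributed to any researcher or report cited in this article, is to statistically compare live input data against the training distribution and raise an alert when the two diverge. The example below uses a two-sample Kolmogorov-Smirnov test; all data, names, and thresholds are hypothetical.

```python
# Illustrative sketch only: flag possible domain shift by comparing a live
# feature distribution against its training-time reference. The data,
# names, and alert threshold here are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
live = rng.normal(loc=0.4, scale=1.2, size=1_000)       # shifted production values

ALERT_P = 0.01  # significance threshold; a policy choice, not a standard

stat, p_value = ks_2samp(reference, live)
if p_value < ALERT_P:
    # In a governed deployment, this would trigger review or rollback
    # rather than allowing the model to keep serving silently.
    print(f"Possible domain shift (KS statistic={stat:.3f}, p={p_value:.4f})")
else:
    print("No significant shift detected")
```

In practice the alert threshold and the response to a triggered alert are governance decisions, which is precisely the kind of documented procedure the benchmarking reports found to be missing.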
“These challenges extend beyond technical accuracy,” Pandya explained. “They involve whether systems can be trusted under uncertainty and whether users can understand how automated decisions are generated.”
Regulatory Response and Global Standards
In response to rising concerns, governments and regulatory authorities have begun strengthening oversight mechanisms. In the United States and Europe, developers of large-scale AI systems are now required to publish safety evaluations, testing methodologies, and risk mitigation strategies. International initiatives are also working to establish binding standards focused on transparency, non-discrimination, and human rights protections.
India, too, has been increasing its focus on responsible AI development, with policymakers emphasizing the importance of ethical frameworks and public accountability as digital adoption accelerates across sectors.
Research Focus on Explainability and Reliability
Pandya’s research centers on developing explainable and robust artificial intelligence systems for real-world applications. His recent studies examine human-centered AI design, multimodal explainability, and causal modeling for healthcare and environmental analysis.
These projects emphasize the importance of interpretability and operational reliability, particularly in sensitive domains such as medical diagnostics, accessibility technologies, and public decision-support systems.
He has also contributed to academic quality assurance through peer review and conference evaluation activities, assessing research in areas including medical imaging, cybersecurity, natural language processing, and computer vision.
“Peer review remains one of the most effective safeguards in rapidly evolving research fields,” he said. “It helps ensure that innovations are not only novel but also reproducible and ethically grounded.”
Industry Adoption and Governance Challenges
Industry surveys indicate that while organizations continue to invest heavily in autonomous and data-driven systems, relatively few have implemented comprehensive governance frameworks. Fewer than a quarter of enterprises surveyed reported formal procedures covering continuous monitoring, escalation, and human-in-the-loop oversight.
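As a rough illustration of what "human-in-the-loop oversight" means in practice, the sketch below shows a confidence-gated escalation check of the kind such procedures typically formalize. It is a generic example under assumed names (CONFIDENCE_FLOOR, route_to_reviewer) and is not drawn from any surveyed enterprise or framework.

```python
# Illustrative sketch only: a confidence-gated "human-in-the-loop" check.
# CONFIDENCE_FLOOR and route_to_reviewer are hypothetical stand-ins, not
# taken from any surveyed enterprise or regulatory framework.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # below this, the system defers to a person

@dataclass
class Decision:
    label: str
    confidence: float

def route_to_reviewer(decision: Decision) -> str:
    # Stand-in for a real escalation channel (ticket queue, pager, case file).
    return f"ESCALATED for human review: {decision.label} ({decision.confidence:.2f})"

def act(decision: Decision) -> str:
    if decision.confidence < CONFIDENCE_FLOOR:
        return route_to_reviewer(decision)
    return f"AUTO-APPLIED: {decision.label} ({decision.confidence:.2f})"

print(act(Decision("claim_denied", 0.62)))    # routed to a reviewer
print(act(Decision("claim_approved", 0.97)))  # applied automatically
```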
Experts warn that this gap has implications for regulatory compliance, data security, and long-term public confidence in digital systems.
Pandya’s ongoing submissions to international engineering and computing journals address these governance challenges through applied studies on explainable AI for accessibility, clinically robust diagnostic models, and decision-support systems designed for dynamic environments.
Building Trust in Emerging Technologies
In sectors such as healthcare, education, and public policy, specialists emphasize that transparency and accountability are becoming essential requirements rather than optional features.
“Stakeholders increasingly expect clear explanations of how automated systems function,” Pandya noted. “Trust depends on visibility, oversight, and shared responsibility.”
As artificial intelligence continues to influence economic and social systems globally, researchers and policymakers agree that governance structures must evolve alongside technological innovation. Without coordinated efforts to strengthen evaluation standards and regulatory alignment, experts caution that technological progress may outpace society’s ability to manage it responsibly.