What's Happening?
The rise of artificial intelligence in government services has prompted calls for updated transparency and accountability measures. Former MEP and AI policy expert Marietje Schaake advocates laws ensuring that technology companies carrying out tasks for the government remain transparent and accountable. The call comes as the government invests heavily in AI, with over £500 million allocated to AI projects this year. Concerns centre on the opacity of AI systems, particularly those built on neural networks, which operate on implicit rules derived from their training data rather than explicit, inspectable logic. The Public Law Project's Tracking Automated Government database highlights how little algorithmic decision-making government departments have declared, while the Bureau of Investigative Journalism reports difficulties in obtaining information on government procurement of data systems.
Why It's Important?
The integration of AI into public services has significant implications for transparency and accountability. As AI systems grow more complex, gaps in understanding and oversight widen, raising the risk of injustice: in the Post Office Horizon scandal, the presumed infallibility of an IT system led to a major miscarriage of justice. Extending Freedom of Information obligations to contractors that supply AI systems for government decision-making could help prevent similar failures. Transparency in AI use is crucial for maintaining public trust and preventing abuses of power by technology companies.
What's Next?
The government has committed to strengthening transparency requirements for AI systems used in public services, as outlined in its Make Work Pay policy paper. However, progress has been slow, with no significant movement on the proposals nearly a year later. Calls for transparency are likely to gain momentum as more generative AI applications are procured across the public sector. Stakeholders, including civil society groups and policy experts, may push for legislative changes so that AI systems are held to the same transparency standards as traditional government processes.
Beyond the Headlines
The ethical implications of AI use in government decision-making are profound. As AI systems become more autonomous, questions arise about who is accountable for algorithmic decisions and how bias in those decisions can be detected. The power imbalance between public authorities and their technology suppliers further complicates the issue, as companies such as Microsoft wield significant influence over the systems governments depend on. Addressing these challenges requires a concerted effort to build evaluative capacity within government and establish transparency rights, so that AI systems serve the public interest without compromising ethical standards.