AEAI Provides Expert Evidence on AI in Government to Parliamentary Committee

Our team presented findings on the role of artificial intelligence in public sector decision-making, covering both the opportunities and the structural risks that policymakers must navigate.


The Invitation

In January 2025, the House of Commons Science, Innovation and Technology Committee invited Applied Economics to provide expert evidence on the deployment of artificial intelligence in UK government departments. The inquiry focused on a central question: how should the public sector adopt AI tools without compromising accountability, equity, or the quality of decisions that affect millions of citizens?

Our evidence drew on two years of research into algorithmic decision-making in public services, supported by data from our Administrative Data Analytics Platform.

What We Presented

Our testimony covered three areas where AI intersects with government operations in ways that demand careful attention.

Algorithmic Bias in Welfare Administration

We presented evidence showing that machine learning models used in benefits eligibility assessments exhibit measurable demographic bias. Our analysis of over 2 million DWP decisions between 2019 and 2024 found that automated screening tools were 23% more likely to flag applications from claimants in the lowest income decile for manual review, even after controlling for all stated eligibility criteria.

The problem is not that algorithms are biased by design. The problem is that they are trained on historically biased decisions, and without rigorous audit frameworks, these patterns become invisible and self-reinforcing.
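The kind of disparity audit described above can be sketched in a few lines. This is a minimal illustration, not our actual methodology: the data, field names, and rates below are invented toy values chosen so the disparity ratio comes out at 1.23, mirroring the 23% figure.

```python
from collections import defaultdict

def flag_rate_disparity(decisions, group_key, flag_key):
    """Compute per-group manual-review flag rates and the ratio of the
    highest group rate to the lowest, a simple disparity measure."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for d in decisions:
        counts[d[group_key]][1] += 1
        if d[flag_key]:
            counts[d[group_key]][0] += 1
    rates = {g: flagged / total for g, (flagged, total) in counts.items()}
    return rates, max(rates.values()) / min(rates.values())

# Hypothetical toy data: income decile vs. flagged-for-manual-review.
decisions = (
    [{"decile": 1, "flagged": True}] * 123
    + [{"decile": 1, "flagged": False}] * 877
    + [{"decile": 10, "flagged": True}] * 100
    + [{"decile": 10, "flagged": False}] * 900
)
rates, ratio = flag_rate_disparity(decisions, "decile", "flagged")
# lowest decile flagged at 12.3% vs 10.0%, a disparity ratio of 1.23
```

A real audit would, as the testimony notes, also control for stated eligibility criteria (for example via regression adjustment) rather than compare raw rates.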

Data Infrastructure Gaps

UK government departments operate over 1,200 distinct data systems, many of which cannot communicate with each other. This fragmentation means that the training data available for AI models is incomplete, inconsistent, and often outdated. We argued that no amount of algorithmic sophistication can compensate for poor data infrastructure.

The Case for Algorithmic Auditing

We proposed a framework for mandatory algorithmic impact assessments, modelled on environmental impact assessments. The framework includes pre-deployment bias testing, ongoing performance monitoring disaggregated by protected characteristics, and public reporting requirements.
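In spirit, the pre-deployment bias test works like a release gate: a model is not deployed if its flag rate for any protected group exceeds the lowest group's rate by more than an agreed tolerance. The sketch below is illustrative only; the function name, the 1.1 tolerance, and the group rates are assumptions, not part of the proposed framework's specifics.

```python
def impact_assessment_gate(group_rates, max_disparity=1.1):
    """Pre-deployment gate: fail if any group's flag rate exceeds the
    lowest group's rate by more than the allowed disparity ratio.
    Returns (passed, breaches) where breaches maps group -> ratio."""
    lowest = min(group_rates.values())
    breaches = {
        g: rate / lowest
        for g, rate in group_rates.items()
        if rate / lowest > max_disparity
    }
    return len(breaches) == 0, breaches

# Hypothetical disaggregated rates from pre-deployment testing.
passed, breaches = impact_assessment_gate({"group_a": 0.100, "group_b": 0.123})
# group_b is flagged 23% more often than group_a, so the gate fails
```

Ongoing monitoring would rerun the same check on live decisions, disaggregated by protected characteristics, with the results published under the framework's reporting requirements.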

Committee Response

The committee expressed particular interest in our data on bias in welfare administration, and several members asked follow-up questions about the feasibility of mandatory auditing. The committee chair noted that our evidence was "among the most specific and data-grounded submissions the committee has received."

What This Means

Parliamentary inquiries shape legislation. The evidence we provided will inform the committee's forthcoming report on AI governance, expected in spring 2025. We will continue to engage with policymakers to ensure that data-driven insights inform the regulatory framework for AI in government.

The full written evidence submission is available on the UK Parliament website.

Stay Informed

Receive our latest research and data insights.