Everywhere, people grumble about the government: that politicians care only about themselves; that bureaucrats gum up the system; that taxpayers get fleeced. Even in wealthy countries, nearly two in three people are dissatisfied with how democracy is working.
Headlines focus on politics, but a deeper problem may be public services that are overwhelmed even as the technological era keeps accelerating. The real danger, says Alexander Iosad, director of government innovation at the Tony Blair Institute, would be to change nothing.
AI Policy Perspectives visited Iosad, lead author of “Governing in the Age of AI,” to hear his vision of how technology might remedy governmental woes.
—Tom Rachman, AI Policy Perspectives
[Interview edited and condensed]
Tom: Aren’t people always bemoaning governments? Or is something broken in a different way today?
Alexander Iosad: People complain about public services being too bureaucratic, too standardized, not targeted enough. All of those things are true, because the system was built in another era, when there was no way to operate differently. But over time, we have faced the Baumol cost-disease problem: things that we produce in the physical world get cheaper, but the cost of labour-intensive services keeps rising, because wages grow with the wider economy while service productivity does not. As public-service costs grow, we have this conflict that has brewed over decades: should government do less, or should government tax more? But technologies have now reached a level of maturity that can break this cycle. We can have governments that aren’t dependent on just hiring more people to do more of the same, but can be cheaper, and more effective, and operate at a national scale, all at the same time.
Tom: You’re proposing AI as a lever for state renewal. What philosophical change would governments need to achieve that?
Alexander: The first is for governments to realize they can’t continue with marginal tweaks to systems that don’t work. Public services are under such strain that people are looking for the status quo to be challenged. That’s why they’re open to populists. Instead, governments need to embrace the radicalism inherent in what we call disruptive delivery. And this is where AI is a big part of the solution.
WHAT AI FOR GOVERNMENT COULD LOOK LIKE
Tom: The public sector has a lower tolerance for error than the private sector—damage from an incorrect decision about public health could be far worse than a mistake in a business plan. How do you convince political leaders to embrace disruption when the cost of failure could be so high?
Alexander: Because the cost of inaction is much higher. If you do nothing, the system degrades. And the cost is borne by the citizen. If you have a healthcare system that is bursting at the seams; if you have an education system where the disadvantage gap between students on free school meals and their peers is 19 months and trending above pre-Covid levels—those are real problems experienced by real people. Not recognizing that you can actually change isn’t just a political cost. It is a cost to that citizen, which has downstream consequences for both the system and the politician.
Tom: How might citizens experience AI improvements?
Alexander: By way of example, we can have an education system that is genuinely personalized. We know that personalized learning is more engaging and produces better learning outcomes. We can also have a system that identifies where students have learning gaps, and can inform teachers on what to address. Imagine a school where there’s an emerging gap in mathematics in Year Seven. At the moment, the only way you spot this is when the students take their exams four years later. By then, it’s too late. You might say, “Okay, we now need to focus on maths at that school.” But you’ve had a cohort of students come through, and suffer from this failure. With data and AI, you can spot the gap as it emerges.
Furthermore, we currently have a model of schooling that depends on having access to a person: the teacher. Maybe a parent has a question, and must email the teacher, then wait. If we have the safety net of an AI system—say, a tutor that’s always available, verified to be accurate enough, and adapted to the national standards—that parent or student can ask a question at 7:30pm on a Saturday, without waiting to find the teacher. More broadly, you’re creating a different experience of interacting with public services, where they are there for you when you need them.
Tom: To some educators, that picture of teaching will seem like techno-solutionism that overlooks the human role in learning.
Alexander: I would class myself as a tech optimist rather than a tech solutionist. Techno-solutionism means high trust in technology—but low trust in people. Tech optimism is high trust in both. It’s not about replacing the human connection. It’s about recognizing the constraints that a sole dependence on humans to deliver public services introduces into the system, and the gaps that it creates. An ideal system is one that fills those gaps with technology.
Tom: What about other sectors, such as public health?
Alexander: People ask for a transformative AI use-case in healthcare, but it won’t be one big thing; it’ll be 1,000 little things that, in aggregate, completely change your experience. People are already wearing digital rings and smartwatches that measure their pulse and can tell if they are at risk of particular health problems. So at an individual level, this is starting to work already. It becomes really powerful once you connect this to population-level health. In a more personal way, if your doctor has an ambient AI note-taking system, your medical experience transforms. Today, you sit in front of them, they type a lot, and occasionally look at you. But you can have a system where they are fully present and listening, and don’t have to worry about capturing the full picture of what you’re telling them. As we expand outwards, there is the pharmaceutical revolution from AI too: lower costs, faster development, and medicines that can be adapted to your body.
Tom: What about government’s role in managing crime?
Alexander: One example is facial recognition, which is contentious for good reasons. People don’t like the idea of their faces being scanned as they walk down the street. “What if there’s a mistake? What if I’m apprehended wrongly?” But in the UK, this technology has achieved very high levels of accuracy now, and does not lead to wrongful arrests. There’s data recently out of the London Metropolitan Police, which uses facial recognition extensively, where the error rate was 10 faces identified wrongly out of more than 3 million scans. No wrongful arrests. But hundreds of correct arrests that would not have happened otherwise.
Tom: But if we move towards data-driven policing, isn’t there a risk that bias within the data could lead to injustice?
Alexander: Of course, you have a big challenge with potential bias in this context. You train the systems on existing data, which might not have enough representation of people from minority groups—for example, fewer non-European faces, so the algorithm is more likely to misidentify people. Or the data might have groups over-represented—for example, capturing historical overpolicing of communities or areas. The risk is that these biases are replicated, and even scaled up. Early versions of new tools are more likely to make such errors, and real-world experience shows that, if we are aware of this, and take active steps to mitigate it, it is possible to prevent these kinds of biases. This is something that needs to be built into the process of development and deployment. We see, for example, that facial-recognition systems are much more accurate today than they were 10 years ago. Not perfect, but much better, and providing better intelligence for officers to decide when they need to act. You could also have a kind of AI peer review, where one model might be trained to monitor another for replicating bias, or introducing new bias into the system—a watching-the-watchers situation. Again, this would be an improvement on the situation we have today, where much of this bias just passes unnoticed and uncorrected.
Tom: So, it’s not the sci-fi dystopian vision of crime-fighting, you’re saying?
Alexander: Yes. And the status quo is a uniformed police officer on the corner, standing in the rain, the sun setting, holding a printout from earlier that morning with blurry low-resolution pictures of the people they’re looking for. They make more wrongful arrests as a result of that situation than police officers sitting in a van with computer infrastructure, and a camera telling them there’s a person walking down the street with a child, and this person is on a sex offenders’ register, with court restrictions against being near children. The police officer can go and talk to this person. This is a real case, by the way—and it turned out to be someone building a friendship with the child’s family without their knowing he was on the register. No way would a police officer know this today, if someone just walked past them with the child. So it’s about looking at what we do, and how we can do better, rather than leaning into these fantasies of complete control.

3 NEW AI ROLES
Tom: You also advocate a radical new model for how governments operate internally. Could you explain these three concepts: the Digital Public Assistant for every citizen; AI co-workers for each civil servant; and a National Policy Twin for policymakers to simulate decisions.
Alexander: The Digital Public Assistant, either on my device or online, would be a system that connects information about you held by different parts of government—for example, your income level and your address—and is then able to say, “You’re eligible for this particular discount on your energy bill—would you like to have it?” Or it could support you during interactions with government officials. So much of our time is spent repeating the same things to different agencies, whereas here you might be talking to an unemployment adviser, and they can see your employment history or your qualifications, and suggest the right next steps for you so the job you find is the best fit for you specifically. Which might mean you stay in that job longer, and grow in it to have a fulfilling career. You could have a settings dashboard to decide how various AI agents interact with the government on your behalf. All this puts you in greater control.
Tom: What about AI co-workers for each civil servant?
Alexander: This is already starting to happen with chatbots, but that is the most basic version of it. You could have a suite of co-workers that looks at the new cases a public-sector worker receives—requests for support, applications for services—helps prioritise them, and finds the information the civil servant needs to make the best decision. The AIs don’t make decisions in place of that worker, but they make the worker much better informed, and save them hours of digging through regulations. There was a pilot experiment in the UK government that showcased this potential, involving employees of the Department for Work and Pensions who act as work coaches for jobseekers. The employees were able to ask a large language model to explain various rules, help draft documents, prepare reports, and update records. Today, if a government employee has a question about when a claimant is eligible for a particular service, they might just search the internet. But you can have a system that is trained on the relevant rules and gives you a quick and accurate answer. This saved about two weeks per employee per year—and allowed these work coaches to focus on building relationships with the people who needed their support.
You can picture this across different parts of government. In procurement, for example, you would have more informed advice about all the bids coming through. Or think about how much time officials spend sending documents around for someone else to summarize when preparing briefings for government ministers. A lot of this work could be done much more quickly, so people have time to actually think about what it means, not just produce digests. And you could draw on a wider range of sources, so the information is more nuanced, accurate, and up to date.
Tom: Your third concept is an AI simulation of the entire country to test out policies.
Alexander: Yes, this gets exciting. We call it the National Policy Twin. Data is aggregated from different parts of service delivery, such as information on schools from the education department, and economic data from the statistics agency, and incomes data through the tax agency, and so forth. Together, it’s essentially a digital twin of your country, and you can run different policy scenarios informed by this data. At the moment, civil servants present a government minister with, say, three policy scenarios. If there are assumptions that the minister doesn’t agree with, they’ll say, “Give me three other scenarios based on different assumptions.” They wait for weeks, and then the process repeats. With the National Policy Twin, you could test ideas or intuitions very quickly, iterate on ideas, and ask for best practices from around the world, so that policies have a stronger evidence base—all in minutes, not days. You are not replacing the policymaking process. But you are speeding things up, so you can test more options. You are less likely to miss the right option because it never came up.
Tom: But isn’t the validity of a “digital twin” simulation dependent on the quality and comprehensiveness of the data available? And wouldn’t this risk biasing decision-makers toward whatever the data suggested rather than broader impressions, even if those broad impressions encompassed more wisdom?
Alexander: It is a danger. But it’s also a motivation to ensure your statistics agency runs well. This dramatically raises the importance of getting data right, and it’s something that not every government has really paid attention to. This would be helped if you build a whole data system, including Digital Public Assistants, where citizens can correct their information, leading to better data flows to governmental institutions. This is also where AI systems can interpret unstructured data, understand how it all fits together, and provide informed advice. Again, AI is not making the decisions. It’s providing information for humans that was previously not available or not usable, and helping people to make sense of it, and make better decisions as a result.
OBSTACLES REMAIN
Tom: Another hurdle is decades-old IT systems in public services. Can governments overhaul this infrastructure at a pace that keeps up with AI development?
Alexander: Legacy infrastructure is a problem, and interoperability in government is something most countries are trying to tackle. In the UK’s blueprint for modern digital government, there is a plan to make every public-sector dataset interoperable in the next few years. This is the first thing we should do. Right now, some police forces spend 90% of their IT budget on maintaining legacy systems. If you’ve got legacy systems here and there, fine—spend 10% of your budget on that. But 90% should be spent on upgrading. You do this for two years, and it’s a hard push, and will be painful. But then we get there.
Tom: Another concern about using AI in so many parts of governmental work is that we risk losing democratic transparency, explainability, and the citizen’s right to appeal decisions made by algorithms.
Alexander: There needs to be human accountability for decisions made on the basis of this system. We need that built in from the start. This needs to be sensitive to individual circumstances because, even with a 95% success rate, you will have some cases where things didn’t work as expected. If we free up government resources by using AI, we can use those resources to make it easier for people to go and talk to someone when they need to, either because something went wrong, or because they are more comfortable with that way of dealing with the government.
WHICH GOVERNMENTS ARE TRYING THIS?
Tom: You published “Governing in the Age of AI” shortly before the July 2024 general election in the United Kingdom. It’s around a year and a half since Prime Minister Keir Starmer’s Labour Party took power. Are there lessons in what has or hasn’t happened regarding AI implementation?
Alexander: The UK has been among the more ambitious globally, including its AI Opportunities Action Plan and its blueprint for modern digital government. But there is a challenge when it comes to AI in government: how do you make it tangible for people, and how do you balance risk and reward in doing so? If you are a political leader coming into office and thinking about this, how do you drive forward AI while maintaining public support? What are the quick wins where you can tangibly speed up the way that citizens interact with government, where you can improve that experience in ways that you can claim credit for? Part of the challenge that this government has arguably had is that not everyone has noticed the things it does.
Tom: What’s an example of something that has worked, but that people aren’t noticing?
Alexander: You have a problem since Covid in the UK, and in many other countries, with students not showing up for lessons. So what they’ve done is connect school attendance systems so that the government gets a daily record of the proportion of students who came to school the day before. But it’s not enough to just have data, so what they’ve done is build tools that explain to school leaders how they compare to other similar schools, and what profile of students might be seeing a gap in attendance. In one rural school, attendance kept dropping on Tuesdays, and the school didn’t notice until the Department for Education came with a tool that showed this trend. Then the school discovered that there was a bus that was always late on Tuesdays, so students just gave up and never came in. They hired a minivan for Tuesdays, and attendance shot up.
Tom: Which governments around the world are getting this right?
Alexander: We are at an early stage in this journey, even for the private sector, and certainly for governments, which tend to move slowly. But Singapore is doing well. And Estonia. And Ukraine, for obvious reasons: they’re having to break the current way of doing things and figure out other ways. They recently launched a chatbot that Ukrainian citizens can use to get answers based on information from their digital ID. Australia is another country doing well, particularly on AI and education. The UK too. But there won’t be a simple list of “Five Ways That AI Has Transformed Government.” It’s going to be everyone doing a bit of something somewhere that adds up to a bigger picture. It’s not, “Are you promoting AI in your public service?” Everyone is. It’s: “Are you just making current processes slightly faster? Or are you genuinely thinking about deeper reform?”
Tom: Albania introduced a virtual AI minister to handle public procurement. What do you think of that?
Alexander: It’s an attention-grabbing announcement, but it makes a serious point: that AI can help cut fraud, improve efficiency, and save money in public procurement. But Albania has an even more interesting example of AI in government. They’re going through the process of applying for European Union membership, and that is both a bureaucratic process and a process of real reform, where you bring your legislation in line with European standards. So, you’ve got laws in Albanian, you’ve got European laws in English and French, and so on, and you need to find discrepancies, update legislation, then implement reforms. That is an incredibly time-consuming process that has typically meant hiring hundreds, if not thousands, of lawyers and translators. It takes a decade to do this. But Albania is using AI tools to radically speed up this process. That is accelerating their accession process, possibly by several years.
Tom: We’ve talked a lot about the public services, but do you have thoughts on how AI could update democracy more broadly?
Alexander: If we get this right, the most noticeable impact will be improved trust, because government can deliver rather than let things continue to slide into decline. Also, AI can introduce more transparency. Several countries have Freedom of Information acts, but requests take ages to process. There are local governments in the UK experimenting with systems where you type in a question, and if they have the data already, it’ll give you the answer right there, and you don’t have to go through civil servants for it. There is also a philosophical reason why accountability could improve in the age of AI: the machine doesn’t make the decisions. Even if you have an automated system, there should be a person somewhere, thinking, “Let’s make a choice we are comfortable with.” If we get into that mindset, we make government aware that the human role is to make good decisions, and to take that responsibility very seriously. That, I think, will have a significant impact on democracy.
TAKEAWAYS
Tom: What final message do you have for policymakers trying to use AI in government?
Alexander: What’s really important is to carve out time for this thinking. As a public service, you’re always under pressure; you always need to deliver the next thing. Yes, AI will save time—but if you are just adding more work into those hours, you’re not going to get any gains. Carve out half the time that you save because of general-purpose AI systems to sit down with colleagues, and think how to improve your service. This requires leadership to say, “You have to do this.” We need a public-service workforce that is both more capable of this type of creative thought and experimentation, and is actually empowered to do it. At the moment, we have a pyramid shape with a lot of people doing a lot of repetitive tasks at lower pay. Those jobs are at risk because AI tools are good at doing those tasks at a fraction of the cost, and in seconds, not hours. What does that mean for the future structure of the civil service? Is it the same people doing different things? Is it fewer people? I don’t think anyone really has good answers yet.
Tom: What’s the biggest obstacle to your vision? And the best answer?
Alexander: The biggest obstacle is inertia. This future is uncertain, and government isn’t always good at dealing with uncertainty. The best answer is for leadership to take seriously the responsibility of updating government. Otherwise, we will be left behind. On the cost side, it’s not just hiring engineers or buying computers. It’s the cost of inaction that you need to weigh up.



