AI doing human jobs: it’s a vision that thrills some and terrifies others. Yet visions alone will not suffice. The world needs data-based evidence, which only a few economists have yet attempted to gather. Among the most prominent is Sam Manning. Back in 2020, Sam realized that vast technological change was coming and that it would affect much of what he cared about, from employment and poverty to income inequality and global health. So he devoted himself to using economics to better estimate that future, studying future impacts first with OpenAI from 2021 to 2024 and now in his current role as senior fellow at the Centre for the Governance of Artificial Intelligence (GovAI).
In a recent conversation with AI Policy Perspectives, Sam explained what economists know about AI’s effects on jobs, how this technology may differ from those of the past, and what he believes policymakers ought to do next.
—Julian Jacobs, AI Policy Perspectives
[Interview edited and condensed]
Julian: It’s hard for economists to measure AI’s economic impacts, because the shock is primarily a speculative one that is not yet fully borne out in data. Could you talk through the primary methods they are using?
Sam: I’ll focus on the empirical methods. The first category tries to estimate the ‘exposure’ of different jobs to AI. Researchers take descriptions of the tasks that people do in their jobs and map them to the capabilities of AI systems. When there is a high degree of overlap, this suggests potential impacts on the labor market.
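[For illustration, here is a minimal sketch of the exposure approach in Python. The task and capability lists are invented; real studies typically score overlap using detailed O*NET task descriptions and human or model ratings rather than exact matches.]

```python
# Toy version of an exposure score: the share of an occupation's tasks
# that overlap with a list of AI capabilities. All lists are invented
# for illustration.

ai_capabilities = {"draft text", "summarize documents",
                   "write code", "answer questions"}

occupations = {
    "paralegal": ["summarize documents", "draft text", "file paperwork"],
    "plumber": ["install pipes", "diagnose leaks", "answer questions"],
}

for job, tasks in occupations.items():
    exposure = sum(task in ai_capabilities for task in tasks) / len(tasks)
    print(f"{job}: exposure score {exposure:.2f}")

# paralegal: 0.67, plumber: 0.33. A high score signals potential change
# in how the work is done, not guaranteed displacement.
```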
A second category is experimental work. Here, researchers give a group of workers differential access to an AI system and then observe how this access changes economic outcomes, such as their productivity, how they use their time, or even the quality of their work output—for example, do software developers produce more or less production-level code when they use these systems?
Both approaches have limitations. With the exposure studies, a high degree of overlap between a worker’s tasks and an AI model’s capabilities often gets interpreted as meaning that the worker’s job will be automated and they will be displaced. I think that’s definitely not the case. Rather, what it suggests is that the technology is more likely to provide a ‘shock’ to the productivity of these roles or lead to changes in how the work is performed. Whether AI’s productivity effects help or hurt a given worker depends on various factors, including which tasks within a job are affected and how elastic the demand for that work is. For example, if workers become more productive but demand for their output remains stable, fewer workers are needed to meet the same demand, and layoffs could ensue. On the other hand, if demand increases significantly—outpacing the newfound productivity gains from AI—then this could drive a firm to hire even more workers or raise wages to retain its best employees.
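[A minimal numerical sketch of this elasticity logic, assuming a constant-elasticity demand curve and full pass-through of cost savings to prices; all numbers are illustrative, not estimates from the interview.]

```python
# Relative labor demand after an AI productivity shock, under a
# constant-elasticity demand curve Q = P**(-epsilon) and competitive
# pricing. Parameter values are illustrative assumptions.

def labor_needed_after_ai(productivity_gain, demand_elasticity):
    output_per_worker = 1 + productivity_gain
    price = 1 / output_per_worker             # cost savings passed to price
    quantity = price ** (-demand_elasticity)  # demand response to lower price
    return quantity / output_per_worker       # relative headcount required

# Inelastic demand: output demand barely grows, so headcount falls (~-12%).
print(labor_needed_after_ai(0.30, 0.5))  # ~0.88
# Elastic demand: demand outpaces the productivity gain, headcount rises (~+30%).
print(labor_needed_after_ai(0.30, 2.0))  # ~1.30
```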
Julian: So, “exposed is not hosed,” as some say. It may be beneficial for certain employees to be exposed to AI and damaging not to be exposed, or vice versa. What about the experimental methods?
Sam: The key limitation with the experiments is that it’s very difficult to vary workers’ access to an AI system in their natural work environment. Instead, a lot of research—including papers that I’ve worked on—tries to take workers out of their natural work environment and give them tasks that are representative of this work. For example, we ran an experiment with law students last year where we varied their access to reasoning models and evaluated their performance on a set of legal work tasks: writing memos, producing legal research briefs, that sort of thing. We were able to measure effects on time saved and on quality, but ultimately the example tasks that we used don’t exactly mimic the complexity of lawyers’ daily workflows, which often involve certain forms of collaboration, different software tools, and case-specific contexts. Because of this, there’s only so much one can generalize from that kind of research to the broader economy.
Julian: What about methods that try to get closer to the natural work environment? For example, some researchers are looking at real-life queries from LLM users to better understand how they are using LLMs in their jobs. Others are evaluating AI systems on higher-fidelity simulations of the tasks and projects that employees perform.
Sam: I think these are all steps in the right direction. I’m a big fan of GDPval-style work, which tries to evaluate AI systems’ performance on a wide set of tasks drawn from real-world work settings. I think this is the state of the art right now in terms of measuring performance on economically valuable tasks. In my view, improvements on this benchmark could actually be a meaningful indicator of advancement in the potential economic value of models. However, it doesn’t address the question of how to ensure the widespread integration of AI models into the economy, which would be necessary to actually realize those benefits.
Similarly, data from efforts like Anthropic’s Economic Index is especially useful for connecting capabilities to actual changes in economic indicators. For example, if we know what tasks workers are using these tools for, then we can track adoption over time alongside employment and hiring data. This can give researchers and policymakers a better empirical sense of what trends might be emerging in jobs and sectors where AI is being heavily adopted.
What do we know so far?
Julian: What do you think, with relatively high-confidence, about how AI will affect jobs? And what are you most uncertain about?
Sam: At a high level, I think it’s safe to say that AI systems are going to change most white-collar jobs in the economy. They will eliminate some jobs and make it harder for people to enter certain fields. On the other hand, as a true general-purpose technology, AI will have many sprawling arms throughout the economy and is going to create many new work opportunities for people.
Similarly, I would be surprised if, over the next decade, we don’t see meaningful improvements in productivity and economic growth across industrialized economies. For the US economy, I think something in the range of a two to three percentage point increase in economic growth rates over the next 10 years is possible. I’m pretty confident that in the next five years, we’re not going to have 25% or 30% economic growth, which I’ve seen predicted by some folks. But that doesn’t minimize the incredibly substantial impacts of, for example, doubling the current rate of economic growth.
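[Some back-of-the-envelope compounding to put these figures in perspective; the 2% baseline growth rate is an illustrative assumption.]

```python
# Compounding the growth rates discussed above. The baseline is an
# assumed 2% annual rate, not a figure from the interview.

baseline = 0.02             # assumed current annual growth rate
boosted = baseline + 0.025  # midpoint of the 2-3 percentage point increase
explosive = 0.275           # midpoint of the 25-30% rates Sam doubts

print(f"10 years at {baseline:.0%}: GDP x{(1 + baseline) ** 10:.2f}")   # x1.22
print(f"10 years at {boosted:.1%}: GDP x{(1 + boosted) ** 10:.2f}")     # x1.55
print(f" 5 years at {explosive:.1%}: GDP x{(1 + explosive) ** 5:.2f}")  # x3.37
```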
I also expect AI to increase income and wealth inequality over that time. My default expectation is that the returns to owning capital are going to increase relative to the pace at which the returns to labor income will increase.
One uncertainty is about the pace of AI capabilities improvements and the ultimate level they could reach. We also have uncertainty around the pace of adoption—how widely and quickly organizations will adopt these systems. There’s also uncertainty around how cost-effective automation will be. For example, if automating a large share of work requires investing lots of compute resources at inference time, it could be quite costly for some time. As long as compute is scarce, we will shift allocations toward the highest-value tasks, driving up inference prices and in turn affecting adoption. These things are really hard to predict.
Julian: You mentioned labor’s share of income, relative to capital. Dwarkesh Patel and Philip Trammell recently argued that AGI and advanced robotics could make capital a perfect substitute for labor, rather than a complement, causing the share of income going to capital owners to rise to 100%, and necessitating a high progressive tax on capital. Brian Albrecht (and others) pushed back on some of the claims. How do you view this?
Sam: Rising inequality is definitely a concern of mine, but I am pretty uncertain about whether AI-driven automation will increase inequality to the extent Phil and Dwarkesh discuss in their piece. If automation takes off in the way that the piece describes, then assuming competitive markets for deploying AI, real incomes should also rise as goods and services become cheaper. There is a scenario where labor displacement and falling end-user AI costs could move roughly in parallel, so that by the time you reach the full automation scenarios they speculate about, access to large numbers of superintelligent agents would be effectively free. Such widespread access to extremely capable AI systems could be a powerful counterweight to potential harms from a more skewed capital/labor share.
Life after work?
Julian: Such a scenario raises fundamental questions about how society will be organized. Who is going to continue working? What will people do with their time if they aren’t working? What will the distribution of wealth and income look like?
Sam: This is an institutional and governance challenge. What do we do in a world where we do not need to work in order to ensure our material well-being? How do we take advantage of the incredible potential for material progress and maximize our flourishing? The challenge is to figure out the right redistribution mechanisms, technological access models, and property rights for this future economy.
And to your question about work, I will say that many people already don’t ‘work’ for income; they take care of loved ones or have chosen to retire. Many people around the world don’t see work as an innate part of their identity. One great thing about labor markets is that they incentivize people to do things that other people find useful. In the future, we might want to retain some sort of incentive structure for people to use their time in ways that create positive externalities for others—perhaps a market for being more engaged in your community, taking care of others, raising children, or contributing to scientific and moral progress. These are questions about how to redesign our institutions to support this future.
Julian: A common proposed policy response to AI is a Universal Basic Income, or some variant of that. Thinking back to your prior work on cash transfers and UBI, what do you make of it? Is there some version of it that you think can work?
Sam: I’m broadly in favor of policies that expand individuals’ opportunities to flourish in line with their own aspirations. Reducing financial constraints through something like a UBI could be one way to do that, but I’d be surprised if it were sufficient on its own in a world with far fewer job opportunities. Another important lever is ensuring broad access to technologies that can make people more productive and expand their capabilities. That kind of approach may rely less on taxation and redistribution, while supporting more inclusive and widespread economic participation.
The state of AI economic impact research
Julian: What do you think about the current ecosystem of people working on AI economic impact questions? Who would you like to see more involved?
Sam: I’m encouraged by the growth in the number of people working on it, among both established economists and people just entering the field. I’ve seen a big change over the past four or five years. In 2020, I can think of maybe one economist who was really taking the prospect of transformative AI seriously. Now, you go to a standard economics of technology conference, and many people are grappling with this, which is super encouraging.
The economic impact of AI is probably among the most important things for researchers to figure out. There are big open questions and big ways to get AI progress wrong. For example, we could eventually end up in a world where we get 10% economic growth in the US and still have hundreds of millions of people living in extreme poverty globally. That would be a big failure in my mind.
I also think there is a lot of room for political economists and for theoretical work to play more of a role in shaping institutions. I believe the US government will probably be the most consequential actor in shaping this technology’s impact, not just in the US but globally. The trouble is that we face an evidence dilemma: we’re trying to do anticipatory policymaking without clear evidence. Policymakers need to weigh these trade-offs carefully because, given the pace of progress, not doing enough anticipatory planning could lock in suboptimal path dependencies. We need more people entering government and figuring out how to usefully inform key actors.
Julian: Given the slow timelines of academic publishing, particularly in economics, are you concerned about research quality as researchers move to preprints and other ways of sharing research?
Sam: Broadly, I am concerned about the move away from peer review. So much policymaking and so many key decisions are now being made based on preprints and even essays on Substack. While there is so much useful content on these platforms, we need to find some sort of middle ground to generate high-quality evidence.
I’m excited about a couple of options. One is having journals quickly review a study’s methodology and pre-analysis plan and make a publication decision based on that, without needing to know the findings. The decision would be based only on the methodological approach meeting a standard of rigor. Another is more open review, where work is published and then publicly critiqued. This creates transparency around what leaders in the field think.
Dream experiments
Julian: If you could run a dream AI economic impact study, without any resource restrictions, what would it be?
Sam: For the ideal study, I would work with a developer before they release a new model with a large capability increase. I would take a large, representative sample of businesses and, before the model is widely deployed, randomly assign access to it at the enterprise level. Then I could observe the causal impact of deploying this next-generation system on outcomes like productivity, demand for different skills, firm growth, and task reallocation over time. Having this kind of infrastructure would provide policymakers and society with more foresight.
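[A stylized sketch of that design: firms randomly assigned early enterprise access, with the treatment-control difference in means estimating the average causal effect. The outcome data and effect size are simulated purely for illustration.]

```python
import random
import statistics

random.seed(0)

n_firms = 1000
treated = set(random.sample(range(n_firms), n_firms // 2))  # random assignment

# Simulated firm-level productivity outcomes, with a +5 unit true
# treatment effect assumed for illustration only.
outcomes = {}
for firm in range(n_firms):
    outcomes[firm] = random.gauss(100, 10) + (5 if firm in treated else 0)

treat_mean = statistics.mean(outcomes[f] for f in range(n_firms) if f in treated)
control_mean = statistics.mean(outcomes[f] for f in range(n_firms) if f not in treated)

# Under randomization, this difference is an unbiased estimate of the
# average effect of model access on productivity.
print(f"Estimated effect of access: {treat_mean - control_mean:.2f}")
```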
That ideal experiment probably won’t happen. Something more practical, though still challenging, is data collection. The AI labs know where their products are being used across the economy and for what types of tasks. If we could harmonize this usage data and pair it with government or private sector data on occupational transitions, wage changes, and skill demand, we could build trend lines over time. This would allow us to move away from policy discussions based largely on speculation. We could see where AI is creating growth and where we have vulnerable workers who are having a harder time finding new work after losing their jobs. This is doable with better public sector data collection and more partnerships with industry. We should be pushing on it.
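[A sketch of the data pairing Sam describes, assuming lab usage data has been harmonized to the same occupation codes as the labor statistics; all figures are invented.]

```python
import pandas as pd

# Hypothetical harmonized AI usage data from labs.
usage = pd.DataFrame({
    "occupation": ["paralegal", "software developer", "plumber"],
    "ai_task_share": [0.42, 0.61, 0.05],  # invented adoption intensities
})

# Hypothetical government labor-market statistics.
labor = pd.DataFrame({
    "occupation": ["paralegal", "software developer", "plumber"],
    "wage_change_pct": [-1.2, 3.4, 2.1],     # invented outcomes
    "hiring_change_pct": [-4.0, 1.5, 0.8],
})

# Joining on occupation lets researchers build trend lines of adoption
# alongside employment outcomes, as described above.
trends = usage.merge(labor, on="occupation")
print(trends)
```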
Hopes and concerns
Julian: To close, what are you most excited about as AI diffuses in the economy, and what are you most concerned about?
Sam: I am most concerned about how it’s going to impact my children. I am anxious about what human-AI interaction and relationships are going to look like in eight years or so when my kids are ten-plus.
I am most excited about the prospect of AI being used to expand many ambitious people’s capabilities and our collective aspirations for what we can achieve. I’m also excited for the health benefits that I expect are likely to come from advances in science and R&D.