Julian Jacobs recently sat down for a discussion with Carl Benedikt Frey, an economic historian and professor of AI & Work at the University of Oxford. Carl’s books include The Technology Trap and the newly released How Progress Ends, in which he challenges the conventional belief that economic and technological progress is inevitable.
In the discussion, we explore the lessons of previous technological shocks and their relevance, or not, for AI. We then turn to the growing evidence base on AI’s economic impact and conclude with Carl’s rapid-fire forecasts. What follows is a condensed and edited version of the discussion. Enjoy!
The origin story
Julian: Let’s start at the beginning. What initially drew you to studying the economic impacts of technology, and AI in particular? Which questions were most interesting to you, and have those changed?
Carl: When you consider how miserable human existence was for most of history and then look at the material prosperity we have today, you start asking why the first Industrial Revolution happened and why it took so long. Once you start thinking about that, it’s hard to think about anything else. That’s what put me on the path of studying economic history and trying to understand how technological progress happens.
There are several interconnected questions that initially interested me. The fact that modern technology is, in principle, available almost anywhere, yet so many places around the world remain poor, is a significant puzzle that has not been answered conclusively. Relatedly, human existence remained pretty miserable for a long time: many of the technologies that powered the early Industrial Revolution were not very complicated, yet it took a long time for people to conceive of and adopt them.
More fundamentally, technological progress is the driver of growth and prosperity over the long run, but there have been many hiccups along the way. Many people lose their jobs, and what economists regard as the “short run” can be a long time for those affected, so it’s natural for certain groups to resist change. A key question is: how can you put mechanisms in place that ease that transition and make technological progress more inclusive, not just in the long run but in the short run as well?
Lessons from History: adoption lags, new tasks & steering technological progress
Julian: People often argue that AI’s economic impacts are deeply uncertain. And of course that’s true. But it can also cause us to overlook how much social science and history can teach us, for example about the ways in which past technology shocks interacted with labor, wages, and the economy at large. When you look at past technological shocks, are there consistent dynamics, principles, or lessons to call out?
Carl: I think there are several lessons. One is that it often takes longer than you think for these things to feed through. You can have exponential technological improvements without exponential economic growth. We’ve seen this throughout the history of computing; the growth rate in the economy does not mirror Moore’s Law by any stretch of the imagination.
Part of that is because for technology to have an impact, it needs to be put into use. For it to be put into use, humans have to want to use it, and we need institutions in place that permit us to use it. There are interest groups not interested in seeing their jobs and incomes disrupted. There are firms not interested in seeing their business models overtaken. And there are complementary investments in skills and infrastructure that are needed. Those factors apply whether it’s the first Industrial Revolution, the second, the computer revolution, or today with AI.
A second lesson is that productivity growth is not going to be significantly higher unless technology creates new tasks, activities, and industries. If all we had done since 1800 was automation, we would have productive agriculture and cheap textiles, but not much else. We wouldn’t have rockets, antibiotics, vaccines, computers, or AI. Most improvements in the standard of living come from doing new and previously inconceivable things.
Productivity only surged in the late stages of the first Industrial Revolution. Just mechanizing textile production didn’t lift the growth rate that much. It was with the railroads and the chemical industry—and later in the mid-20th century with the automobile and electrical industries—that we saw a significant uptick because they created a lot of new tasks. We have seen some of that with the computer revolution, but not yet to the same degree. We haven’t yet seen much of that with AI, but hopefully, it’s to come.
Julian: So we have two key lessons: adoption matters a lot, and to boost productivity, technology has to create new tasks, activities, and industries. When it comes to inequality, for a technology shock to boost productivity, must it also boost inequality, even in the short term? And how much can governments steer the ways in which technological shocks unfold throughout the economy?
Carl: It depends. If technology takes a more labor-replacing form, you’re more likely to see inequality rise, backlash against technological change, and the labor share of income fall. If it takes a more enabling form, the opposite is more likely. You’re going to see the labor share of income potentially rise or remain stable, growth being broadly shared, and less unrest.
Now, as to your question about whether we can steer technological progress, I think it’s extremely difficult. We don’t know how AI is going to evolve without intervention, and we’re not sure how it’s going to evolve with intervention. Often when you intervene, you favour one interest group at the expense of another. You might favour existing interest groups rather than future, more dispersed groups who don’t yet know that they are potential beneficiaries. It’s very hard to do in practice, and we should be humble about our capacity to do so.
That’s not to say that intervention is impossible. If you have a clear objective, like greener energy, you can tax fossil fuels and subsidize renewables. When the goal is to make growth more broadly shared, you can invest in educational systems and training programs, but many of those have not been very successful. It’s hard to know what skills will be needed: not so long ago, there was a big drive to invest in coding skills here in the UK, and in retrospect that seems not to have been the most productive bet.
AI’s economic impact, so far
Julian: If we think about AI’s impact on the economy so far, what can we say, with any confidence, if anything?
Carl: I’ll note that what we have evidence for today may not necessarily apply in five years.
As things stand, the evidence comes from two main strands. First, the exposure studies: these map which tasks and jobs may be most affected by generative AI and, on balance, they find higher-skill, higher-education roles are more exposed. Second, experimental studies in settings like customer service, writing, and coding show the largest productivity gains accrue to novices and lower-skill workers, which suggests AI lowers barriers to entry.
AI is also eroding language barriers: it’s now far easier to collaborate and transact across languages, which makes exporting services more feasible for many workers. If those patterns persist, I’d expect firms to offshore a greater share of professional-services activities—accounting, management consulting, financial modeling—because AI compresses productivity differentials and reduces the language friction that used to favour onshore talent. Indeed, early evidence indicates that entry-level work has been disproportionately negatively affected by generative AI, which I strongly suspect reflects the offshoring of such activities.
As for what we don’t have evidence for, I would point to productivity effects. Most writing about AI is very speculative at this stage. We can observe trends in aggregate productivity, but they are not very impressive and don’t point towards AI having a material impact. There is very little evidence of an impact on labor markets or productivity, which suggests that AI’s effects so far have been relatively limited.
Julian: Some economists now argue that AI is reducing employment among recent graduates. But it’s extraordinarily difficult to disentangle any potential effect of AI from other macroeconomic drivers, like interest rate shifts. What do you think?
Carl: The decline in employment opportunities for recent graduates predates generative AI. It’s quite possible that AI has exacerbated it, but it’s not a new, AI-driven phenomenon. At the same time, we might be at an inflection point. You could imagine yourself in Pittsburgh in 1960, seeing minimills emerging and saying, “Look, I’m not seeing anything; the integrated steel mills are still up and running.” Obviously, a decade later, everybody in Pittsburgh would be feeling the impact.
The Future of Work
Julian: Fears of job loss rank high in surveys of public attitudes to AI, and many observers have sketched scenarios where few jobs remain. What do you make of these scenarios? It strikes me that, despite fears of a ‘jobless’ future, there are actually plenty of industries where we desperately need more human workers—healthcare, childcare, and eldercare. And of course people could also contribute in other ways, beyond traditional employment.
Carl: I’ve been puzzled by how good AI is and how little it’s showing up in any statistics. If you take the latest reasoning models, they are plausibly a better tutor than I am in any subject, including in my area of expertise.
I think where they don’t do particularly well is dealing with novelty. What you want in a changing world are systems that adapt to new circumstances and can learn from just a few examples. I don’t think we’re there yet, and it’s unclear to me how rapidly that is going to happen. Given what we already have and the limited impact it has, I struggle to see a world of 40% unemployment in 20 years.
As a thought experiment, suppose AI could do almost everything. What work remains? We still watch professional chess despite computers being stronger. Humans value competition, authenticity, and status; there will be activities we do for their own sake. A large share of employment would likely persist in caring roles—health, childcare, eldercare—where presence, trust, and relationships matter. Symbolic and representative roles—politicians, clergy, community leaders—don’t disappear. And there’s in-person service work people want as part of the experience, plus new categories we can’t yet imagine.
If unemployment did climb dramatically, the central question becomes distribution and meaning. We’ve been through a transition in which work has become deeply associated with our status and our place in society. We don’t need full-fledged automation for that to change: it’s not clear that, in the age of AI, roles like lawyer, consultant, or professor will carry the same status. And given that we have already transitioned once, from a society where we didn’t define ourselves by work, I suspect we could transition back to one where work is not the key thing. It would be a major transition, but not an unprecedented one.
The Technology Trap & How Progress Ends
Julian: A key theme in The Technology Trap was the idea that technological progress could lead to more economic and political polarisation. Do you see this happening with AI?
Carl: I do worry that we are heading that way. I published that book in 2019, before generative AI. What’s changed since then is that the category of people most exposed to this technological trend is increasingly in professional services: not because AI is going to automate everything they do, but because of the combination of AI and offshoring exposure. From the viewpoint of the American professional-services worker, does it matter whether somebody in the Philippines does their job or whether it’s outright automated? No.
What is different is the political economy of these changes. The people impacted by this are much more likely to write an angry op-ed in the FT than the average factory worker who felt the impact of industrial robots. Going forward, the people who stand to lose will have much more influence over the rules and regulations written around the technology, and I suspect they are more likely to shape them in their own favour.
Julian: Moving to How Progress Ends, the thesis is that we need both decentralization to enable innovation and the right kind of ‘bureaucratic context’ to scale it sustainably. Can you unpack this idea for us, with some examples?
Carl: The core idea is that durable progress needs two different institutional settings at two different stages. Early on you want decentralized exploration so lots of independent actors—entrepreneurs, labs, investors—can pursue competing designs. This is because we almost never know, ex-ante, which path will win. It’s vital that a “no” from one gatekeeper doesn’t kill a technology. Search is a nice illustration: some prominent investors passed on Google, but others—Sequoia and Kleiner Perkins in 1999—backed it, and that pluralism let the better architecture surface.
Once a design looks promising, the problem flips to scaling under a bureaucratic context—standards, procurement, liability rules, regulation, and infrastructure that can push costs down and manage externalities. The mRNA vaccines show the sequence: small, decentralized firms pioneered the platform, but mass deployment depended on highly bureaucratic processes—regulatory review, pharmacovigilance, advance purchase agreements, and a government-coordinated cold chain.
You see the same pattern with electricity and automobiles: tinkerers explored incompatible systems; scaling required standard voltages and frequencies, grid interconnection, building codes, road standards, licensing, and insurance. Where systems are too centralized—think the Soviet design-bureau model—experimentation narrows and promising ideas die when a single funder says no. The point isn’t “state versus market” but sequencing and balance: let many flowers bloom in discovery, then rely on capable, rules-based bureaucracy to diffuse and discipline the winners so innovation scales sustainably.
Policies of least regret
Julian: We seem to have a poor understanding of the policies that can best support an economy’s adjustment to technological change. Looking back at history, are there any “least regret” interventions with evidence in their favour? For example, many people have proposed public retraining programmes, but, as we’ve written, there’s little reliable evidence that many such programmes work very well.
Carl: I think Danish “flexicurity” is broadly the right pathway. It provides flexibility, meaning that firms can pivot when technological shifts happen, and security, meaning that when you lose your job, you have some welfare to fall back on. Historically, Britain in the 18th century was the only economy in the world that taxed itself at 2% to provide for the poor, and in places where the poor laws were more generous, there was less unrest accompanying industrialization. We know that providing security makes people more likely to go with the flow when it comes to technological change, and I think that same intuition still applies today.
Quickfire Predictions
Julian: US and UK productivity growth has hovered at ~1% since 2008. If you had to guess, over the next decade, do you expect AI’s average annual contribution to productivity growth in the US and Britain to be closer to 0.1 percentage points, 2 percentage points, or more?
Carl: Closer to 0.1 percentage points.
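For scale, a back-of-envelope compounding sketch (an illustrative calculation assuming the ~1% baseline in the question, not part of Carl’s answer):

$$(1.01)^{10} \approx 1.10, \qquad (1.011)^{10} \approx 1.12, \qquad (1.03)^{10} \approx 1.34$$

so an extra 0.1 percentage points per year adds roughly one point to output per worker over a decade, while an extra 2 percentage points adds more than twenty.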
Julian: Will AI’s long-run economic impact be greater than, less than, or equal to the Industrial Revolution?
Carl: I think it’s going to be less impactful than electricity and the internal combustion engine, but more impactful than the first Industrial Revolution.
Julian: Will AI primarily increase or decrease inequality in the short to medium term?
Carl: I think it will decrease it globally, but increase it within advanced economies.
Julian: Will AI make today’s low- and middle-income workers better off over their lifetimes?
Carl: Absolutely. Most people are both producers and consumers, and it’s definitely going to make them better off as consumers. I think carpenters and plumbers will grow in status, while some knowledge work will decline in status.
Julian: Regarding AI’s impact on society, what is your greatest hope and what is your greatest fear?
Carl: My greatest hope is that AI will boost productivity and economic growth and solve our economic problems—and there are many of them. My greatest fear is that it won’t happen.