Today’s essay is the fourth entry in a seven-part series focused on questions of responsibility, accountability, and control in a pre-AGI world. This piece, and those that follow, are written in a personal capacity by Seb Krier, who works on Policy Development & Strategy at Google DeepMind.
Following the last essay looking at the defensive potential of AI, I’m back with another piece attempting to make sense of where we are now and where we are going. This time, the focus is on AI and economics. An important caveat: these are my current high-level thoughts, but they're certainly not set in stone. The nature of AI means my views evolve often, so I highly encourage comments and counterpoints. Over the coming weeks, this series will include the following essays:
→ How do you deal with faster disruption and technological displacement in the labor market, both nationally and internationally?
What does a world with many agentic models look like? How should we start thinking about cohabitation?
Are there ways of reconciling competing visions of fairness in AI?
How can we balance proliferation and democratization with appropriate oversight when developing and deploying powerful AI systems?
Huge thanks to Kory Matthewson, Nick Whittaker, Nick Swanson, Lewis Ho, Adam Hunt, Benjamin Hayum, Gustavs Zilgalvis, Harry Law, Nicklas Lundblad, Seliem El-Sayed, Pedro Serôdio, Brian C. Albrecht, Juan Mateos-Garcia, and Jacques Thibodeau for their very helpful comments! Views are my own etc.
How do you deal with faster disruption and technological displacement in the labor market, both nationally and internationally?
There are a lot of discussions about AI and jobs, but for some reason I rarely feel very satisfied with the outcomes. Very often important details are assumed away, and artificial intelligence is often thought about in the same way as any other technology, like blockchain or computers. Yet my view is that there is something qualitatively different about AGI: specifically the idea that through general intelligence, you can eventually automate a very wide and deep range of tasks - in fact, any task that would require some level of intelligence or creativity. This of course is predicated on some important, and highly uncertain, assumptions: that extremely capable AGI agents will be developed this decade, and that they will be deployed and used ‘in real life’ very rapidly. So what would the implications be?
At the national level
Unsurprisingly, the net impact AGI will have on growth and jobs is currently unclear. In my view, the best discussion of the growth question is the exchange between Besiroglu and Clancy here:
Besiroglu argues there is a 60-80% chance of explosive growth, defined as 10x the historical rate for at least a decade, if sufficiently advanced AGI is developed. His model is based on semi-endogenous growth theory, which predicts accelerating economic growth as population and thus innovation increases. As AI substitutes for human labor, the effective "population" of idea producers can grow exponentially. Even incomplete automation provides compounding benefits.
Clancy assigns a 5-10% chance to explosive growth. His model assumes many tasks are interdependent, so human bottlenecks persist even with extensive automation. Historical automation did not accelerate growth. Accelerating or full automation is required, which he views as improbable due to factors like scarce resources, inadequate data, regulatory lags, and time constraints.
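For intuition, here is the mechanism behind both positions in stylized form - my own notation and simplification, not either author's exact model:

```latex
% Semi-endogenous ideas production (behind Besiroglu's argument):
% A = stock of ideas, L_A = effective research input, phi < 1.
\dot{A} = \delta \, A^{\phi} L_A^{\lambda}
% With human researchers, L_A is tied to population and growth is steady.
% If AI lets L_A scale with compute (itself an output of the economy),
% research input compounds and growth accelerates.

% Task complementarity (behind Clancy's objection): output combines many
% tasks x_i with an elasticity of substitution below one (rho < 0), so the
% least-automated tasks bottleneck everything else (Baumol's cost disease):
Y = \left( \sum_i a_i x_i^{\rho} \right)^{1/\rho}
```

On this framing, the disagreement largely reduces to whether AI can eventually perform the bottleneck tasks too, rather than just multiplying effort on the tasks it already handles.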
But these focus on wider growth dynamics rather than labor impacts. This piece by Ben Evans is instructive and points to two useful principles:
The Lump of Labor fallacy shows why total employment hasn’t historically decreased. Productivity gains from automation have generated economic growth, new needs, and new jobs - including jobs that were previously unimaginable: in fact, over 60% of the jobs done in 2018 had not yet been “invented” in 1940, partly thanks to technological advances (e.g. textile chemists) and partly because of rising incomes (e.g. beauticians).
Jevons’ paradox is also relevant - efficiency gains drive more, not less, resource use. Automating clerical work generated more clerical work and changed business processes. The pattern repeated with PCs and software.
However this doesn’t deal with three things: first, the nature of jobs might change significantly and quickly. Second, even if the net effect is positive, this does not mean that local impacts on particular places, occupations or sectors won’t be disruptive. And third, if Besiroglu’s model is right, it’s very plausible that advanced AI will make most human labor obsolete and that explosive growth is likely despite a decline in labor demand. My hunch is that over time, as capabilities continue growing, there will be a point where most labor will be cheaper and more productive to outsource to AI agents as opposed to humans.
In economics, cheapness is defined in relative terms. For AGI agents to be considered ‘cheaper’ than human labor at a given task, the opportunity cost of deploying them there - the value of their next best alternative task - must be lower than the human's, so it could be argued this is unlikely if AI is driving explosive growth. My claim however is that if very capable AGI agents can multiply and copy themselves quickly, it would still be economically advantageous for them to perform low-value tasks, even while they are also being used for high-value tasks in parallel. In other words, their scalability and adaptability mean they will likely outcompete human labor across a wide range of tasks, including low-value ones, and drive the price of labor to below subsistence levels for tasks that humans might be able to carry out. Of course whether we get such agents remains to be seen - and the aggregate cost of inference at scale will likely grow quickly, which may further limit where agents are deployed! So it’s important to note that a lot rests on a more dynamic compute/cloud market, and some important algorithmic breakthroughs.
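To make the scalability point concrete, here is a deliberately toy calculation - all numbers invented for illustration - of why comparative advantage stops protecting human wages once agents can be copied at marginal cost:

```python
# Toy model (illustrative, invented numbers - not estimates): if AI agents
# can be copied at a fixed marginal compute cost, an employer will never pay
# a human more than that cost for a task, even a low-value one, because an
# extra agent copy can always be spun up in parallel - there is no
# opportunity cost that reserves low-value work for humans.

AGENT_COST = 0.50        # hypothetical compute cost per task, in $
SUBSISTENCE_WAGE = 5.00  # hypothetical subsistence pay per task, in $

tasks = {
    "drafting a contract": 40.00,    # value created per task, in $
    "customer support call": 2.00,
    "labeling a data batch": 0.80,
}

for task, value in tasks.items():
    # With scarce agents, high-value work would crowd out low-value work,
    # leaving the remainder to humans. With copyable agents there is no
    # crowding out: any task worth more than AGENT_COST gets its own copy.
    human_wage_ceiling = min(value, AGENT_COST)
    print(f"{task}: value ${value:.2f}, "
          f"human wage ceiling ${human_wage_ceiling:.2f}, "
          f"above subsistence: {human_wage_ceiling >= SUBSISTENCE_WAGE}")
```

Under these (strong) assumptions, no task pays a human above subsistence; the caveats in the paragraph above - inference costs and compute scarcity - are precisely what would break the ‘copy at marginal cost’ premise.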
In this kind of world, at first, humans may continue doing these low-value tasks for some time - particularly tasks that require physical labor, at least until AGI is used to make significant progress in robotics. But as AGI systems gradually develop and are deployed in physical environments, ultimately they will take over this kind of work too. In addition, more capable agents would be able to pivot quickly to tasks that evolve faster than human adaptability or comprehension, leaving humans unable to keep up with the requirements of "work" in the future. This is an important crux though, and affects some of the arguments in the rest of the essay, as it could determine how quickly and comprehensively AGI will transform the nature of work. In practice, these changes will of course not happen overnight.
The above invites three questions:
Whether and how we should shape the qualitative way in which jobs will morph over time.
How to deal with local impacts and disruptions in particular sectors.
What the world looks like if job destruction outpaces job creation, with possibly explosive growth.
With regard to the first question, let’s look at teachers: for now we're nowhere near automating them. But if in the next few years AI agents end up being far more effective at personalized teaching, grading and tutoring, it’s reasonable to assume that demand for these systems will grow and the role of a teacher will morph: for example by focusing on the many other non-teaching tasks they have. Maybe what they will be doing will look more like algorithmic management, curating education data, providing pastoral care to students, or fine-tuning different educational models, instead of direct teaching. Will ‘educational agent fine-tuners and evaluators’ become its own dynamic sector in the future? Is there some scope for humans to work alongside AI in managing, overseeing, or directing the development and deployment of these new technologies? Maybe in the short run, but in the long run I’m not betting on it. Ultimately the new tasks and roles created through these transformative dynamics strike me as entirely automatable too. I’m not sure what the value-add of a human here would be over an equally or more capable (and cheaper) AI agent. So while there may be some sectoral transformations and creative destruction, ultimately these transformations will mostly lead to higher demand for AI agents - not human labor. The pace of these transformations matters a lot: if the shift towards AI-driven education happens too quickly, it could lead to significant economic and political disruption, as people often struggle to adapt to rapid changes in labor markets and social structures.
For the second question, think about call centers: we can plausibly expect this to be a sector that will experience heavy displacement of human labor. Even if in the short run the net impact of AI on jobs is positive, this isn’t of much comfort to low-paid workers whose alternative job options are also probably prone to automation. It's true that some manual trades, such as electrical work, currently offer high wages and are in high demand, suggesting that not all low-skill jobs are low-paying or immediately prone to automation. However, even these jobs may be susceptible to automation in the long run, and the existence of specific labor shortages doesn't negate the broader concern about widespread job displacement. I never bought the policy narrative of ‘we should teach truck drivers how to code’. So regardless of how you look at this, it seems necessary to consider whether existing social security and welfare systems are fit for purpose in a world where automation rapidly transforms sectors. Yet beyond the occasional RCT on UBI, there seems to be little thought given to this. My concern is that failing to explore this early will only further cement short-sighted calls for limiting (ultimately beneficial) automation. This seems futile, a bit like trying to protect the equine industry from the arrival of cars; and harmful, because it would limit huge wealth creation that will ultimately make the world richer.
The last question is the one I’m most interested in. The ‘promise of AGI’ is also supposed to generate outsized returns for the companies that develop it. If Besiroglu’s model is right, it’s plausible that countries leveraging it successfully to boost productivity could make a lot of money - even without explosive growth per se.
Many predictions from traditional economists rely on dynamics we’ve seen with other technologies; the ILO, for example, predicts that the impact of the technology is likely to be one of augmenting work - automating some tasks within an occupation while leaving time for other duties - as opposed to fully automating occupations. As I made clear in the introduction, I don’t really buy this: I think many of these assessments (reasonably) index on existing language models, whereas I suspect models will likely get far more capable. A recent study by Korinek and Suh however uses a novel "compute-centric" model that represents work as consisting of tasks that vary in their computational complexity. They posit that if task complexity is unbounded, wages can rise indefinitely provided the Pareto tail of the complexity distribution is thick enough. However, if the tail is too thin, automation will outpace capital accumulation, leading to a collapse in wages (a stylized sketch of this mechanism follows the three impacts below). My bet and assumption throughout this piece is on the latter, given my assertions about fast-growing AI capabilities and bounded task complexity for humans. This suggests three possible impacts:
First, growth could initially accelerate as machines rapidly displace human labor - but may slow down as automation exhausts the finite set of human-performable tasks. If however the process of innovation and R&D itself can be automated, then you get - as Besiroglu posits - explosive growth. I think this is not unlikely, as I’m optimistic about agents being able to automate scientific R&D at some point in time, possibly in the next 10 years. But automating R&D with no oversight could be particularly risky from a safety point of view, so it’s likely this will be a ‘managed’ process in practice - how much this will slow down growth is unclear (even if not explosive, it could still be very high).
Second, it’s possible that if agents are easy to copy and very cheap, then automation can lead to increased output and productivity, which could translate to much lower production costs - and therefore everything becomes much cheaper. Whether this leads to quasi-abundance is less clear: if AI does lead to mass labor displacement, it could cause a significant decrease in consumer demand since unemployed people would have less money to spend on goods and services. In addition, a lot depends on who earns the returns from what AGI generates and where property rights lie.
This leads me to the third point: Korinek and Suh find that wages might initially surge as agents displace human labor but will eventually collapse, even before full AGI is reached, as automation exhausts the finite set of human-performable tasks (note that this could be mitigated by increased demand for goods and services from AI agents). And what happens then? How do you make sure everyone benefits from this - both from the resource itself and from its economic fruits?
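Here is the stylized rendering of the Pareto-tail mechanism promised above - my simplification for intuition, not Korinek and Suh's exact setup:

```latex
% Task complexity x follows a Pareto distribution; the automation
% frontier X(t) rises over time with available compute, and tasks
% below it are automated:
P(\text{complexity} > x) = \left( \frac{x}{x_{\min}} \right)^{-\alpha}
% Share of tasks still requiring humans at time t:
s(t) = \left( \frac{X(t)}{x_{\min}} \right)^{-\alpha}
% Thin tail (large alpha): s(t) collapses faster than capital can
% accumulate around the remaining tasks, and wages collapse with it.
% Thick tail (small alpha): enough complex work remains at every point
% for human labor to stay scarce and wages to keep rising.
```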
Part of the answer to these questions depends on whether we expect this accumulation of wealth to accrue to only a few companies at the frontier (if the cost of training very powerful frontier models remains high and requires ever more compute), or to a more decentralized, dynamic market where frontier-like capabilities can be trained by a large number of actors (if the costs of developing such models ultimately fall through algorithmic improvements and Moore’s Law). In either scenario, perhaps the gains from automated innovation and increased productivity can be redistributed through policies like UBI or windfall taxes; hopefully before we get there, sociotechnical work on e.g. multi-agent simulations can help us come up with better models to think about this.
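To give a flavor of what even a crude model can surface, here is a minimal toy simulation - every parameter invented - of a windfall tax on AI output funding a UBI during an automation wave:

```python
# Minimal toy (invented parameters, not a forecast): each year a share of
# remaining jobs is automated; a windfall tax on the output of AI-held
# jobs is redistributed as a per-capita UBI. Richer multi-agent
# simulations would replace these fixed rates with actual agent behavior.

N = 1000                      # workers in the toy economy
WAGE = 30_000.0               # annual income per employed worker, $
AI_OUTPUT_PER_JOB = 45_000.0  # annual output of an agent holding a job, $
TAX_RATE = 0.5                # windfall tax rate on AI output
AUTOMATION_RATE = 0.15        # share of remaining jobs automated per year

employed = N
for year in range(1, 11):
    employed -= int(employed * AUTOMATION_RATE)       # jobs lost to agents
    ai_jobs = N - employed
    ubi = TAX_RATE * ai_jobs * AI_OUTPUT_PER_JOB / N  # per-capita transfer
    print(f"year {year:2d}: employed {employed:4d}, "
          f"UBI ${ubi:>9,.0f}, "
          f"unemployed income ${ubi:>9,.0f}, "
          f"worker income ${WAGE + ubi:>9,.0f}")
```

Even this toy makes the crux visible: whether the displaced end up above or below their old wage depends almost entirely on the tax rate and on where AI output accrues - exactly the property-rights question raised above.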
Ultimately, the downstream impact of AI will also depend heavily on the regulatory environment. Overly restrictive regulations could slow AI adoption and limit its potential benefits, while under-regulation could risk large scale harms. Calibrating the right policy levers will also require carefully tracking the impacts of AI across different sectors, regions and demographics. Is productivity accelerating as expected? Are certain industries or geographies being left behind? How is the labor market evolving? Granular, real-time measurement - as well as improved state capacity - will be essential for adaptive policymaking.
At the international level
Shifting our gaze internationally, the challenges presented by AGI take on a different hue: how do you make sure poorer countries don’t fall behind even further? While it's clearly not possible for countries like Oman or Nepal to invest hundreds of billions in developing cutting-edge AGI models, it's also unclear whether this is necessary. Open-source models or commercially available AI systems may be sufficient and more economically viable for these nations. Of course this still requires important infrastructural investments, which AI governance crowds usually assume away: data centers, high speed broadband, a stronger IT sector, better data collection pipelines and so on. Policymakers interested in the global impacts should certainly look at incentivizing this through local economic reforms, foreign direct investment, better targeted aid, and partnerships with labs.
Assuming AGI does get deployed through products and services in poorer countries, there will be complex challenges at play. The impact of AGI on labor markets in low- and middle-income countries (LMICs) may be more pronounced. While there will invariably be hugely positive effects (e.g. on life expectancies, education and so on), the short-term disruptions and welfare costs could lead to skepticism or opposition in recipient countries. The nature and extent of these disruptions will depend on the specific mechanisms through which AI affects these economies. If AI adoption primarily occurs locally within developing countries, it could lead to labor market disruptions as machines and algorithms replace human workers in various sectors (as established above). However, this localized adoption could also bring productivity gains and enhance the competitiveness of these economies, potentially offsetting some of the negative employment effects.
On the other hand, if AI adoption mainly takes place in developed countries and leads to changes in global trade patterns, the consequences for developing economies could be more severe. For example, if AI enables advanced economies to automate and reshore tasks that were previously outsourced to developing countries (such as call centers or manufacturing), it could significantly reduce the global demand for goods and services produced in these countries. This could render some developing economies increasingly irrelevant in the global marketplace, leading to reduced export revenues, slower growth, and higher unemployment. It’s unclear to me how easily developing countries could find new areas of specialization and export opportunities.
Regardless of the specific mechanism, the short-term disruptions and welfare costs associated with AI adoption are likely to be significant, and many developing countries may lack the resources to provide adequate social safety nets or implement policies like UBI to support affected workers and communities. UBI however could also potentially make the transition more painful as it effectively shuts people out of labor markets. This could lead to increased social unrest, potentially fueling skepticism or opposition to AI adoption in these countries. So it’s important to consider how the international community can support developing countries in managing the transition. One desirable option is to facilitate immigration flows and open up labor markets further in richer nations: this is generally positive for both receiving countries (including wages) and source countries, but this alone is not a panacea. There are also often trade-offs in practice that may lead to suboptimal government policies favoring protectionism. In countries like India, a study by Benzell et al. suggests that adopting automation could initially lead to reduced production due to low capital-to-labor ratios. This may cause economic disruptions and social unrest in the short term. However, the transformative potential of AGI could help mitigate these challenges by lowering costs, improving healthcare to reduce mortality rates, enhancing education quality, and more.
Assuming that AI can handle any task, including complex cognitive work, creativity, and interpersonal interaction (which I believe is likely), the pace of AI development and the cost of the capital and energy needed to operate AI systems would be crucial factors for LMICs. For lower-income countries with limited capital, the path to automation might still be disruptive in the short run, as the Benzell et al. study on India cited above suggests. They would likely face transitional costs in the form of displaced human workers and the need to invest heavily in the capital needed for an AI-driven economy. This is where FDI and forms of international assistance may be important. In the long run however, all countries could potentially achieve much higher levels of output and productivity growth, as the constraints of human labor would be fully eliminated. The key challenges would, as per the first part of this essay, be managing the societal transitions, ensuring adequate capital investment, and distributing the gains from an AI-driven economy.
Implementing policies like UBI may be challenging for LMICs to finance, given their limited resources and the potential for further weakening the labor supply. Of course, AGI could create enough abundance that some of the fruits of AGI deployment could be invested in the Global South to facilitate a transition, including FDI, development aid, and direct cash transfers - but this will depend on how much growth AGI does indeed create (highlighting the importance of not over-regulating the technology), and on designing the right mechanisms for this wealth transfer in a period that will likely be fairly turbulent already. AGI could also make goods and services much cheaper, which would make such transfers easier to finance.
There are also other complicating factors at play: human rights and foreign policy. There is a risk that authoritarian and aggressive states could misuse AGI to enhance their repressive capabilities, expand surveillance, fuel aggression, and entrench their authoritarian rule. As such, commercial access could be offered in tiers, and even subsidized heavily, for like-minded/democratic lower- or middle-income countries. Access to more dangerous or dual-use capabilities, for example, could be conditional on local laws and reforms like free elections, rule of law, anti-corruption enforcement, cybersecurity measures and so on. For example, the U.S.'s Millennium Challenge Corporation provides aid to countries that perform relatively well on indicators of good governance, economic freedom, and investments in their citizens. But this won’t be an easy transition, and competitive dynamics with China will possibly create a global split.
All this to say that the deployment of AGI systems in LMICs has important foreign policy and human rights implications; and on an economic level, may have even more pronounced short term impacts given the paucity of welfare provisions in these countries. Naturally none of this is certain or evident, and predictions are very difficult; certainly more research on such questions would be hugely beneficial.
Question for researchers: What are the most effective policy interventions to mitigate the disruptive effects of rapid technological change on labor markets, while avoiding harmful protectionism? What is the economic state of play and possible/likely futures?