An agents economy
How quickly might we integrate increasingly powerful AI agents into the workforce?
This essay is written by Seb Krier, who works on the Public Policy Team at Google DeepMind. Like all the pieces you read here, it is written in a personal capacity. The goal of this essay is to explore the potential long-term integration of AI agents into the workforce, examining the challenges and changes organizations will face as AI agents become increasingly capable and potentially replace human employees.
Austin Vernon wrote a great piece on the integration of AI agents into the workforce. I agree with much of his perspective, but want to imagine the likely changes and challenges over a longer timeframe. For this essay, I’m not considering AI safety implications, and I’m assuming agents are broadly directable/aligned (like the language models we use today) and roughly as capable as humans - though with some limits at first. This piece is exploratory, looking at plausible dynamics rather than making hard predictions; it’s very possible that in a year or two I will have updated my views significantly. A key question is how factors like the persistence of human value in certain contexts, regulatory responses, and social dynamics will shape the path toward greater automation.
The challenge of building and deploying a non-generic, practical, and useful AI agent
With regards to Vernon’s piece, I agree that wikis and similar tools—which document and explain the nature of work and job functions—will be key to enabling AI agents to be productive in the workforce. But certain factors beyond cost and coordination constrain the extent to which we can fully codify essential knowhow.
The first challenge is that codifying knowledge isn't easy. In middle management roles, for example, what makes someone good isn't just the knowledge they hold. Trainee lawyers learn this early: knowing the law is expected, but success often hinges on social practices, taste, judgement, proactivity, billable hours, modelling other actors, and managing conflicting information.
Similarly, employees in most commercial organisations hold critical “grey” or institutional knowledge. This includes insights gained from informal sources, like podcasts, or a nuanced understanding of workplace politics. For instance, knowing how to navigate internal politics at work or recognizing which tasks are worth prioritizing under shifting external circumstances (e.g. political changes) is rarely written down. You won’t find an internal wiki entry that says “Avoid asking this person about X because they’re biased against it” or “The new minister hates automated vehicles, so highlight healthcare topics at the next event.” In human-agent workflows, this kind of contextual knowledge and knowhow gives humans a certain advantage, and presents a significant challenge to overcome before organizations can transition to agent-only companies.
More cynically, employees may actively withhold institutional knowledge as a form of job security. This challenge is solvable - agents could infer and learn quirks over time if management provides enough access to contextual data. For instance, an agent might eventually learn, "This is the quirk to remember when submitting a finance request." However, this process will be slow and uneven, especially for roles requiring physical or social interactions. While online customer support jobs may adapt quickly, more complex roles will take longer to automate effectively.
Michel Berry, in Une technologie invisible, highlights another issue: many organizational instruments or routines act as “invisible technologies” - structural mechanisms shaping day-to-day decisions beyond explicit policy. If AI agents replicate and reinforce these routines uncritically, they risk embedding outdated principles long after their original purpose is forgotten. This underscores the importance of revising these norms and processes alongside the deployment of agents. In other words, organizations risk inadvertently locking in past inefficiencies even as agents upgrade their capabilities.
The need for better organisational and technological infrastructure
In principle, all of this seems feasible for agents from a capabilities perspective; the real challenge lies in ensuring a level playing field on which to compare them against humans. As Austin notes, agents need context - a pipeline to infer, store, and retrieve the relevant information at the right time. You can't simply ‘plug and play’ an agent into a role and expect it to figure everything out; significant organisational changes are required to use these pipelines and agents effectively. At a minimum, this involves gradually replacing legacy IT systems and infrastructure, a process that, as most CTOs will attest, is both lengthy and tedious. In some cases, it may also require restructuring teams or reducing staff to address principal-agent problems and streamline organisational structures.
I anticipate that agents will not only 'augment' employees but also observe and learn from them. For an agent to be truly useful and personalised, it must understand the employee’s work, goals, and style in detail. Agents may even crystallise insights and biases that employees themselves overlooked, enabling them to perform better over time. Much like a new hire learning internal dynamics, these agents would grow in capability - but with the advantage that their insights could be shared instantly across all other agents. An employee’s mistake and subsequent correction could improve the entire network, not just the individual agent. Eventually, most agents' queries to humans might focus on preference (e.g., “Which colour?”) or on information they lack the ability or authority to access (e.g., “What did the judge say at the trial?”).
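To make this ‘pipeline’ idea concrete, here is a minimal sketch of what a shared context store might look like - an in-memory toy in which all the names (Observation, SharedContextStore) and the keyword-based retrieval are illustrative assumptions of mine, not any particular product’s design. A production system would add embeddings, ranking, and access controls.

```python
from dataclasses import dataclass, field


@dataclass
class Observation:
    """A piece of context an agent inferred from watching an employee work."""
    topic: str
    insight: str   # e.g. a quirk of the finance-request process
    source: str    # which agent (or employee) it was learned from


@dataclass
class SharedContextStore:
    """A store shared by all agents: one agent's correction benefits the rest."""
    observations: list[Observation] = field(default_factory=list)

    def record(self, obs: Observation) -> None:
        self.observations.append(obs)

    def retrieve(self, topic: str) -> list[Observation]:
        # Naive keyword match; a real pipeline would embed and rank.
        return [o for o in self.observations if topic.lower() in o.topic.lower()]


# One agent learns a quirk from a human correction...
store = SharedContextStore()
store.record(Observation(
    topic="finance requests",
    insight="Submit before Thursday; approvals are batched weekly.",
    source="agent-shadowing-sally",
))

# ...and every other agent can retrieve it before acting.
for obs in store.retrieve("finance"):
    print(f"[{obs.source}] {obs.insight}")
```

The point of the sketch is the final loop: once a correction is recorded anywhere, it is available everywhere, which is what distinguishes a network of agents from an individual new hire.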
However, this introduces complications, particularly around privacy and data sharing. What kind of data can an employee’s agent communicate to others? Should agents infer insights from personal chat logs? These questions will create thorny disputes that many companies may prefer to avoid. Instead, we may see a shift toward dramatic functional outsourcing, where legacy systems are discarded in favor of contracts with newer AI providers that deliver better performance at a lower cost. As I’ll explore later, startups and organisations (including in the world of research) are often better equipped to start fresh with optimal setups, allowing them to take on tasks for larger, more rigid organisations.
Another challenge lies in how decision-making and task automation might reshape organisational culture. AI systems might tend to optimize for specific metrics, which could lead to a neglect of complex, ground-level realities. While competitive pressures between firms may address some of these issues over time, Berry’s work reminds us that entrenched management instruments can persist even in competitive environments.
This transformation will happen gradually. As AI-human hybrid organizations evolve, new forms of tacit knowledge will emerge - focused on effectively prompting, directing, and coordinating AI systems. While this will create a temporary need for human expertise (the “prompt engineers” of the future), the growing capabilities and utility of agents might ultimately reduce the demand for human workers.
What happens when human quirks and tacit knowledge are accounted for?
At this point, organisations could achieve agents that are quasi-substitutable for human employees, providing almost equivalent value. This might happen either because an organisation has reinvented itself to gradually shape agents capable of understanding and absorbing tacit knowledge as effectively as humans, or because the task has been outsourced to an ‘AI-first’ start-up unencumbered by legacy constraints.
But why is this institutional/tacit knowledge required in the first place? In many cases, its importance stems from inefficiencies in human-dominated systems. For example, understanding a colleague’s subtle preferences or navigating office politics becomes necessary because humans can be irrational, misaligned with organisational goals, and/or biased. If AI agents were to replace these human colleagues, much of this institutional knowledge would become irrelevant. You wouldn't need to account for Sally from Finance's particular communication preferences or office politics - AI agents would interact rationally and efficiently with each other. While AI agents might initially augment human employees by learning their know-how, this expertise in managing human quirks would diminish in relevance as more workplace interactions shift to being agent-to-agent.
We are not rational agents - but agents can be designed to be. Over time, as agents take on more work and interact primarily with each other, the know-how derived from human quirks will lose its value. Organisations will adapt to these changes by restructuring to better align with agents’ needs. Rather than accommodating Sally in Finance’s arbitrary preferences, it will become more economical to replace the role entirely with an agent.
However, as Aghion, Jones, and Jones suggest, growth in organisations and the wider economy may be constrained not by what tasks can be automated, but by tasks that remain resistant to improvement - a phenomenon akin to Baumol’s ‘cost disease.’ This is reminiscent of how, historically, even as manufacturing productivity soared, sectors reliant on interpersonal dynamics or nuanced judgment lagged behind, driving up costs and complicating efficiency gains. A key question is whether AI agents can handle roles requiring rich interpersonal or social judgment, as well as those with physical demands. Even seemingly straightforward tasks can be surprisingly difficult to automate (e.g. planning a party), whether because no one codified the obvious (e.g., retrieving a physical file) or because the job relies on informal “corner-cutting” that keeps large organisations running smoothly. Employees often perform small favours, trade concessions, or bend procedures to avoid deadlocks, relying on trust or rapport. An agent might follow protocol rigidly, running into red tape where a human would find a workaround. Similarly, senior-level tasks such as forging alliances or interpreting political signals involve trust dynamics that agents may struggle to navigate. These “last 5%” edge cases could delay or complicate the transition to agents unless organisations redesign processes or equip agents with ways to handle the flexible, often invisible rules of human collaboration.
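A stylised way to see the Aghion, Jones, and Jones point: suppose output combines an automatable task $Y_a$ and a hard-to-improve task $Y_h$ as complements - a CES aggregate with elasticity of substitution below one. (This is an illustrative two-task reduction of my own, not the authors’ full model.)

$$Y = \left(Y_a^{\rho} + Y_h^{\rho}\right)^{1/\rho}, \quad \rho < 0 \quad \Longrightarrow \quad \lim_{Y_a \to \infty} Y = Y_h$$

However productive the automated task becomes, total output converges to the level set by the laggard task - the essence of Baumol’s cost disease.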
Similarly, stubborn ‘Baumol-like’ constraints may arise from physical tasks and roles reliant on interpersonal relationships, where robotics lags behind cognitive automation. Though these frictions may slow adoption in some areas, it’s unclear they will be strong enough to derail the shift toward AI-first processes. Over time, as the cost-performance ratio of robotics improves, AI-driven advancements may also apply to physical systems. Once robotic platforms become as flexible as AI software, the residual Baumol effect on labour-based tasks could diminish (for example, physical interactions would be less of a constraint). Just as agent-first startups have leapfrogged legacy enterprises in software automation, the next wave of agile robotics could disrupt entire industries currently shielded by physical complexity.
Why restructurings and outsourcing will be necessary
Restructurings and outsourcing will be necessary for two reasons. First, restructuring is required to integrate agents into company workflows seamlessly, ensuring they gradually learn and understand the grey institutional knowledge necessary for effective collaboration. Second, as agents take on more tasks and responsibilities, organisations will need to adapt workflows and processes to maximise the productivity efficiencies these agents can offer—efficiencies often limited by the quirks, biases, and inefficiencies of remaining human workers.
These efficiencies are significant, not arbitrary. One often-overlooked aspect of modern organisations and bureaucracies is the extent to which middle management can be ‘misaligned’ with the company’s interests. Managers are sometimes incentivised to make decisions that appear better - safer, or more appealing - to those evaluating their performance, even if those decisions aren’t in the organisation’s best interest. "I'm more likely to get a promotion if I do X, even if X isn't really needed or as impactful as Y". Similarly, employees may disagree with the organisation’s broader goals, creating further friction. As organisations scale, it becomes increasingly difficult for managers to oversee employees and for directors to oversee managers. Highly effective agents, capable of performing many tasks in parallel, offer an opportunity to simplify internal hierarchies and address these principal-agent problems.
If agents merely augmented employees - following their instructions without question - misalignments would persist. For example, the agent might obediently hire PwC to produce an unnecessary and costly PowerPoint presentation, or turn a blind eye as an employee fakes a sick day or pretends to be busy. Addressing this misalignment can be approached in two ways. The first is aligning the human employees more closely with organisational goals, which would likely require extensive surveillance - a method that would be resisted, demotivating, and unpopular. The second is to reduce human involvement altogether, making the agents accountable directly to the director.
But then how useful is the intermediary human, really? Under the model proposed above, the director delegates tasks directly to the agents, bypassing the need for human intermediaries. If taste, curation, and tacit knowledge are no longer where humans outperform agents, the rationale for keeping these intermediary roles diminishes. Delegating directly to agents (who benefit from sufficient context and capabilities) creates a cleaner, more efficient chain of command and reduces the risk of misalignment. I expect these restructurings will gradually decrease the involvement of human employees over time.
Some important caveats
Throughout this transformation, wider dynamics will likely slow these changes. Many people - particularly in high-skill, academic, and white-collar jobs - derive meaning from work and possess the power to delay or block changes through strikes, unionisation, or negative publicity. Governments may also require reasonable human oversight in certain contexts, such as healthcare, justice, and critical infrastructure, for liability and safety reasons. In many services, including education and hospitality, the value of human interaction itself cannot be overlooked. These frictional roadblocks and institutional inertia could stretch the human-to-agent transition from “a few years” into “a decade or more,” depending on the industry. Economists have long noted that large firms often resist adopting efficiency-enhancing measures that disrupt entrenched interests or managerial structures.
However, these factors alone may not be sufficient to halt the wider competitive forces driving automation. First, rising human labour costs increase the incentive to automate, as observed in France following the 2000 introduction of the 35-hour workweek. Second, these technologies will deliver impressive economic and other benefits that will be hard to ignore. Third, even maintaining some degree of human oversight doesn’t fully counteract these dynamics; AI allows fewer humans to accomplish far more. Put differently, organisations can still massively downsize their workforces despite HITL (human-in-the-loop) requirements.
Another critical consideration is the democratisation of coding and building, which will empower individuals and start-ups to adopt agents rapidly. Large restructurings within corporate monoliths are often met with significant resistance. By contrast, smaller start-ups - unencumbered by legacy systems - are better positioned to produce high-quality goods and services quickly by embracing agents from the outset. They avoid “Sally or Jake” problems altogether by not hiring them in the first place.
Smaller startups, leveraging minimal staff and maximum automation, could become showcases for more “rational” ways of operating. While some tasks, particularly those requiring deep human interaction, may resist automation temporarily (a microcosm of Baumol’s cost disease), the overall trend is clear: process-oriented and knowledge tasks will be increasingly handed off to agents. By minimising the accumulation of human employees and their inefficiencies, start-ups can operate more effectively and efficiently. Consider Palantir, which serves major enterprises with far fewer employees than its competitors. Successful start-ups following this model may force larger companies to adapt through competition or acquire these innovators, accelerating the transition to automated operations.
Conclusion
If we build the infrastructure to enable AI agents to learn tacit knowhow and integrate seamlessly into our systems, the future points toward leaner, more efficient organizations where agents progressively replace human roles. This essay illustrates that if we assume (a) low costs and parallelization of mostly aligned agents; (b) human-like or stronger capabilities that are easier to control; (c) seamless integration into a company’s technical and managerial infrastructure; and (d) a pipeline for absorbing and learning tacit knowledge from humans, then the trajectory tends towards giving agents more production responsibilities, and removing these from humans. Augmentation may work for a while, but it doesn't seem sustainable in the long run as employees lose their know-how and the agent takes on more tasks. However, it’s worth emphasising the numerous ‘if’s involved - many of these outcomes are difficult to predict and account for in advance. It would be worthwhile to explore the complexities that could arise if agents were misaligned, harder to control, or exhibited disparities in capabilities.
Interestingly, the same infrastructure needed for human–agent augmentation is also what allows organisations to reduce reliance on humans entirely. In cases where a company is risk-averse or constrained by legacy contracts, this could mean outsourcing tasks and responsibilities to a smaller, leaner part of the organisation or a separate agent-first start-up - a form of internal cannibalisation. From the perspective of shareholders, and arguably mainstream economic theory, this approach makes sense. This trend will not be limited to lower-level employees and middle management; directors, too, face similar pressures. Over time, even their roles may diminish, leaving the CEO to manage a multi-agent system (essentially, a company of agents) with perhaps a few other humans, optimised for shareholder benefit. This would result in a highly ‘rationalised’ company with minimal principal-agent frictions: no strikes, no rest, no weekends, nothing.
The pace and magnitude of this shift are debatable. Still, I believe a substantial reorientation toward AI-driven operations, culminating in near-full automation of many roles, is plausible in the long term if AI capabilities continue to advance, adoption costs decline, and regulations remain permissive. In some interpersonal or legally sensitive roles, Baumol-like frictions may slow adoption, but are unlikely to halt it entirely. As Berry warns, invisible technologies that shape daily corporate life could embed old inefficiencies in new forms, necessitating organisational overhauls alongside agent deployments. Accelerating factors, such as groundbreaking discoveries or dramatic productivity gains, could further incentivize these shifts. For instance, curing diseases or rapidly improving productivity may create strong pressures to maintain progress and avoid unnecessary delays. Rising shareholder value and broader economic growth would also hasten these transitions.
Ultimately, this trajectory is desirable to the extent that it boosts productivity, reduces costs, alleviates poverty, cures diseases, and fosters abundance. While individuals like Sally from Finance or Jake from Policy are valuable human beings, their immediate interests may not outweigh the benefits experienced by a broader population through improved economic conditions, health, and life opportunities. The challenge will be ensuring that people like Sally and Jake transition into positive, fulfilling lives after job displacement. The promise of the future must consider them too, rather than relegating them to the ranks of unfortunate externalities. Firms, policymakers, and society must address these transitional frictions, rethinking training, labour market protections, and the invisible structures that guide daily decisions. Certain functions relying on interpersonal relationships will continue to require human-driven interactions where trust, rapport, and informal negotiations matter. This will be an important challenge of the next decade: rethinking labour, its role in our lives, and its connection to a meaningful existence. Critical questions about finding meaning in work and managing the acute challenges of job displacement have been deliberately avoided here and will be explored in a future essay.
—
Many thanks to the following people for comments: Conor Griffin, Gustavs Zilgalvis, Julian Jacobs, David Wolinsky, Ben Lepine, and @PITTI_DATA.
I agree with the logic that agents will ultimately replace human workers, assuming all the accuracy and reliability hurdles are overcome. It strikes me as incredibly naive when certain AI influencers and tech CEOs talk about how AI will augment human workers when we know very well that augmentation is not the goal. I think of it the same way I do autonomous cars -- self-driving will never be successfully integrated with the same infrastructure and systems used by human drivers. Made-for-human roads are too chaotic and human drivers are too unpredictable: the only future for self-driving is dedicated infrastructure and for all vehicles to be self-driving, not a wild west mix of the old and new paradigms. Which means, going back to knowledge work, a future where the AI agents have distilled and perfected the work and filtered out and obviated the need for all that messy human interaction. That's the only logical outcome if we stay on this trajectory. But I have to ask, to what end? When every corporation is simply one CEO administering an orchestration of soulless autobots, when even the CEO can logically be usurped by their own agentic system, what will be the point of even running a business? It'll be like optimizing knowledge work to a degree where it's no longer an endeavor of creativity, problem solving and market prowess -- it'll be more akin to bitcoin mining, the direct extraction of profit from the ether by following preset rules and the fluctuations of the macro environment. Every white collar company will merely be an idea that instantly profits or fails with no human effort. You'll be able to start a company on Monday and close it down on Friday if it doesn't work out. It's hard to even wrap one's mind around it. Who will still be privileged enough to own such a company, and will there be any competition? What's to stop one oligarch with the biggest pile of GPUs from operating enough agentic companies to cater to every single market need with zero competitors? People working in big tech should be considering these things more and not just nihilistically inventing society-ending technologies just because they can!