I agree with the logic that agents will ultimately replace human workers, assuming all the accuracy and reliability hurdles are overcome. It strikes me as incredibly naive when certain AI influencers and tech CEOs talk about how AI will augment human workers when we know very well that augmentation is not the goal. I think of it the same way I do autonomous cars -- self-driving will never be successfully integrated with the same infrastructure and systems used by human drivers. Made-for-human roads are too chaotic and human drivers are too unpredictable: the only future for self-driving is dedicated infrastructure and for all vehicles to be self-driving, not a wild west mix of the old and new paradigms. Which means, going back to knowledge work, a future where the AI agents have distilled and perfected the work and filtered out and obviated the need for all that messy human interaction. That's the only logical outcome if we stay on this trajectory. But I have to ask, to what end? When every corporation is simply one CEO presiding over an orchestration of soulless autobots, when even the CEO can logically be usurped by their own agentic system, what will be the point of even running a business? It'll be like optimizing knowledge work to a degree where it's no longer an endeavour of creativity, problem solving and market prowess -- it'll be more akin to bitcoin mining, the direct extraction of profit from the ether by following preset rules and the fluctuations of the macro environment. Every white collar company will merely be an idea that instantly profits or fails with no human effort. You'll be able to start a company on Monday and close it down on Friday if it doesn't work out. It's hard to even wrap one's mind around it. Who will still be privileged enough to own such a company, and will there be any competition?
What's to stop one oligarch with the biggest pile of GPUs from operating enough agentic companies to cater to every single market need with zero competitors? People working in big tech should be considering these things more, and not just nihilistically inventing society-ending technologies just because they can!
Great comment, thank you! I think the answer to 'to what end' is basically the same answer as why we humans opt for a complex economy: consumer demand, solving scientific problems, tackling disease, creating new things, improving living conditions and so on. Doing all this but much better seems net good. I think humans will be able to find meaning and do so much more in this kind of world; I'm not persuaded that the way we conceptualise work/labor today is ideal. The competition point is interesting: in principle I think an agent-based economy is actually pretty robust against oligarchic behaviour - but probably not automatically. You'll also need laws to change and adapt in parallel, in order to redefine market power metrics and competitive behaviour in light of all these dynamics. Agree this is neglected, though I expect we'll see a lot more of this kind of work once we actually start seeing agents deployed!
Why would an agent-based economy be robust against oligarchic behaviour? If anything the incentives lean net towards oligarchy, especially if oligarchic powers can adapt and coordinate faster than government and shape societal views and values. While humans can surely find meaning in agent-based societies, again the incentive system seems to prefer dopamine-based lives for humans.
Curious if you can expand on why incentives ultimately trend towards oligarchy? What is it about agents that pushes this way? The challenges of governments keeping pace with technology are real of course, but arguably true of many technologies - it's less clear what is specific about agents in this reasoning?
Not talking specifically about governments, mainly corporations. The various blends of oligarchies and ochlocracies which characterize many of the societies of this century seem prone to concentration of actual power (cultural, economic and political) and ultimately AI agents would completely disentangle oligarchs from the rest of society, unless we manage to have strong governance systems and informed populations. Of course political parties and media companies have all the incentive to help.
For musings on trade-offs between dopamine hits and longer-term contentment (albeit less specific to meaning), please see our new piece on AI & behaviour change. https://www.aipolicyperspectives.com/p/ai-and-behaviour-change?utm_source=activity_item
I read the piece, very informative and inspiring!
Imho it's hard to turn tacit knowledge into codified knowledge