6 Comments
Daniel Kokotajlo:

"So while capabilities may arrive relatively soon, I expect a lag before we see widespread transformative impact from them. Intuitively, it might seem that AGI deployment would be relatively quick - we are already using pre-AGI systems, and so we have existing infrastructure to draw from. However, this underestimates real-world frictions, such as the slow grind of human and corporate inefficiency and the difficulty in achieving product-market fit for truly useful applications."

Quantitatively, how much lag are you expecting? AI-2027.com depicts something like a one-year lag between superintelligence-in-a-datacenter and a self-sustaining robot economy, and maybe an additional year or two before the robot economy has grown big enough to fill up the undeveloped regions of the world and spill over into human-occupied areas. I can't tell from the above whether you are agreeing or disagreeing with AI 2027; I suspect you are disagreeing, but I can't tell quantitatively by how much.

Séb Krier:

I typically find these quantitative predictions difficult to make and hard to specify well, which is why I avoided them. But here's an attempt: if we truly achieve the kind of "superintelligence-in-a-datacenter" in 2027 – meaning an AI agentic enough, and capable enough across cognitive and practical domains (like physical system design, logistics, overcoming regulatory/social hurdles, and rapid product iteration for physical goods/services) – then a lag of 2-3 years before we see the beginnings of a "self-sustaining robot economy" feels quite plausible, maybe slightly optimistic.

My main hesitation is about getting ASI in 2027, and about whether initial ~AGI milestones will immediately translate into overcoming the real-world frictions I mentioned. My piece leans towards these frictions being more significant hurdles *initially* (but over time they should decrease). My baseline expectation for the lag between achieving ~AGI capabilities (likely before 2030) and seeing a truly widespread, transformative impact akin to a self-sustaining robot economy significantly reshaping the physical economy might be closer to 3-7 years.

I think AI 2027 depicts a more abrupt, AI-driven rapid build-out once superintelligence is unlocked, whereas my view sees a more continuous (though still rapid!) ramp-up. So I suspect we might disagree slightly on the sharpness of the takeoff post-superintelligence. The difference probably depends on how much weight we each give to non-cognitive bottlenecks. Does this make sense?

Daniel Kokotajlo:

3-7 years is not crazy, though I think it is very unlikely. Kudos to you for putting numbers on it; it helps us assess the degree to which we disagree and the degree to which our disagreement matters.

I think that with a WW2-style effort, *humans* could overhaul the economy in 3-7 years to produce quite different kinds of things that we mostly already know how to produce. An army of ASIs would quickly learn how to produce better things, and would be faster at overhauling the economy as well. How much better, how much faster? Unclear but, well, we have already written our guess. ;)

Bob Spence:

Some quick thoughts:

It seems that we will evolve a hierarchy of agency in deployed AIs. It will be safer and more cost-effective to avoid deploying them with more agency than their specific application requires.

We register the identities and parentage of humans, and free-agent AIs might need registered identities and lineage as well. Anonymous AGI agency is dangerous.

There will be a need for a system of governance that bounds the agency of deployed AIs. For instance, AGIs could be governed by a framework of humans and AGIs modeled on US democracy, with legislative, executive, and judicial branches to implement legal frameworks and their enforcement. The policies and enforcement of this governance could be implemented through decisions and actions that emerge to address AGI-specific issues, but remain reducible to human timeframes and understanding.

Roman Leventov:

Do you see any significant difference between the following two types of governance systems?

(1) Those that may be fostered by the market to oversee public companies, such as a blend of quadratic voting by shareholders (see the sketch after this list) and Leike's simulated deliberative democracy (https://aligned.substack.com/p/a-proposal-for-importing-societys-values), or, even more mundanely, to govern departments with diverse internal and external stakeholders within companies where mostly AI agents do the work (and thus the "internal stakeholders" are mostly AI agents, too).

(2) Governance systems for doing more "traditional" politics and/or commoning.
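
For concreteness, here is a minimal sketch of the quadratic-voting mechanism mentioned in (1): each voter gets a fixed credit budget, and casting v votes on an option costs v² credits, so intensity of preference can be expressed but at quadratically increasing cost. The 100-credit budget, the ballot format, and the function names are illustrative assumptions, not details from Leike's proposal or any deployed system:

```python
import math

# Hypothetical illustration only: the budget size, ballot format, and
# names are assumptions for the sketch, not anyone's actual mechanism.

def max_votes(budget: int) -> int:
    """Most votes castable on a single option: v votes cost v**2 credits."""
    return math.isqrt(budget)

def tally(ballots: dict[str, dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Tally quadratic votes: each voter pays votes**2 credits per option
    and may not exceed their total credit budget."""
    totals: dict[str, int] = {}
    for voter, choices in ballots.items():
        spent = sum(v * v for v in choices.values())
        if spent > budget:
            raise ValueError(f"{voter} spent {spent} credits, over the {budget} budget")
        for option, votes in choices.items():
            totals[option] = totals.get(option, 0) + votes
    return totals

# 100 credits buy at most 10 votes on one option (10**2 = 100), so strong
# preferences are expressible but quadratically expensive.
print(max_votes(100))  # -> 10
print(tally({
    "alice": {"proposal_a": 7, "proposal_b": 5},  # 49 + 25 = 74 credits
    "bob":   {"proposal_b": 10},                  # 100 credits
}))  # -> {'proposal_a': 7, 'proposal_b': 15}
```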

Or perhaps this distinction will not make much sense in the future economic and institutional landscape?

Depending on whether there is a significant difference, these governance systems will contribute either to "market safety" or to "non-market safety", in Michael Nielsen's terms (https://michaelnotebook.com/optimism/).

If there IS a significant difference between corporate and political or communal governance, I expect societal atomization to be so high (and it will only grow from here) that, paired with the inhibiting effect of the existing, decaying political institutions (through the mere fact of their existence and lingering legitimacy, as well as the actions of humans who still have "entrenched interests" in those institutions, as you pointed out), effective replacements won't be developed (or, even if some rare enthusiasts develop them, they won't gain sufficient traction and influence) by the time the existing institutions finally collapse or AI automation of the economy is in full swing, whichever happens first.

Roman Leventov:

I'm similarly unsure whether your idea of a "new Library of Alexandria of knowledge" is a market or a non-market safety thing, and my optimism about whether it will be developed similarly hinges on that.
