Discussion about this post

Daniel Kokotajlo

"So while capabilities may arrive relatively soon, I expect a lag before we see widespread transformative impact from them. Intuitively, it might seem that AGI deployment would be relatively quick - we are already using pre-AGI systems, and so we have existing infrastructure to draw from. However, this underestimates real-world frictions, such as the slow grind of human and corporate inefficiency and the difficulty in achieving product-market fit for truly useful applications."

Quantitatively, how much lag are you expecting? AI-2027.com depicts something like a one-year lag between superintelligence-in-a-datacenter and self-sustaining-robot-economy, and maybe an additional year or two before the robot economy has grown big enough to fill up the undeveloped regions of the world and spill over into human-occupied areas. From the above I suspect you are disagreeing with AI 2027, but I can't tell quantitatively by how much.

Bob Spence

Some quick thoughts:

It seems that we will evolve a hierarchy of agency in deployed AIs. It will be safer and more cost-effective to avoid deploying them with more agency than their specific application requires.
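As a concrete illustration of what such a hierarchy might look like, here is a minimal Python sketch; the tier names, the `Deployment` type, and the `grant_agency` helper are hypothetical, not anything proposed in the comment:

```python
from enum import IntEnum
from dataclasses import dataclass


class AgencyLevel(IntEnum):
    """Ordered tiers of agency a deployed AI might be granted."""
    TOOL = 0        # responds only when invoked; no autonomous actions
    ASSISTANT = 1   # proposes actions, but a human must approve them
    SUPERVISED = 2  # acts autonomously within a narrow, audited scope
    FREE_AGENT = 3  # broad autonomy; highest risk, heaviest oversight


@dataclass
class Deployment:
    application: str
    required_level: AgencyLevel


def grant_agency(deployment: Deployment, requested: AgencyLevel) -> AgencyLevel:
    """Grant no more agency than the specific application requires."""
    return min(requested, deployment.required_level)


# Example: a summarization service requests SUPERVISED agency but only
# needs TOOL-level access, so it is granted the lower tier.
summarizer = Deployment("document-summarizer", AgencyLevel.TOOL)
assert grant_agency(summarizer, AgencyLevel.SUPERVISED) is AgencyLevel.TOOL
```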

We register the identities and parentage of humans; free-agent AIs might need registered identities and lineage as well. Anonymous AGI agency is dangerous.
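A minimal sketch of what such a registry could look like, assuming a simple parent-link scheme for lineage; `AgentRecord`, `AgentRegistry`, and the operator name are hypothetical illustrations:

```python
from dataclasses import dataclass
from typing import Optional
from uuid import uuid4


@dataclass
class AgentRecord:
    agent_id: str
    operator: str                     # accountable human or organization
    parent_id: Optional[str] = None   # the agent or model it was derived from


class AgentRegistry:
    """Issues identities and tracks lineage; unregistered agents have no identity."""

    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, operator: str, parent_id: Optional[str] = None) -> AgentRecord:
        record = AgentRecord(agent_id=str(uuid4()), operator=operator, parent_id=parent_id)
        self._records[record.agent_id] = record
        return record

    def lineage(self, agent_id: str) -> list[str]:
        """Walk parent links back to the root agent, i.e. its 'parentage'."""
        chain: list[str] = []
        current = self._records.get(agent_id)
        while current is not None:
            chain.append(current.agent_id)
            current = self._records.get(current.parent_id) if current.parent_id else None
        return chain


# Example: a child agent spun up from a registered parent.
registry = AgentRegistry()
parent = registry.register(operator="Acme Labs")
child = registry.register(operator="Acme Labs", parent_id=parent.agent_id)
assert registry.lineage(child.agent_id) == [child.agent_id, parent.agent_id]
```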

There will be a need for a system of governance to bound the agency of deployed AIs. For instance, AGIs could be bound by a joint governance of humans and AGIs modeled after US democracy, with legislative, executive, and judicial branches to implement legal frameworks and their enforcement. Policies and enforcement under this governance could emerge as decisions and actions applicable to AGI issues, while remaining reducible to human timeframes and understanding.
