A discussion with Tyler Cowen

How Tyler's views on AI have evolved over the past two years

Tyler Cowen, Professor of Economics at George Mason University, Faculty Director of the Mercatus Center, writer at Marginal Revolution, and host of Conversations with Tyler, gave a talk on AI and economic growth at Google DeepMind’s London offices in early July. Tyler discussed how his views on AI have updated in recent years across many areas, including how best to measure progress, the current market landscape, and how AI may impact living standards. We ended with a Q&A with Zhengdong Wang, a researcher working on Gemini and Post-AGI Research. Thanks to Tyler for sharing his thoughts with us.

Transcript.

Zhengdong Wang: I'm very excited today to have Professor Tyler Cowen, who is a professor of economics at George Mason University, writes the popular economics blog Marginal Revolution, and hosts Conversations with Tyler. So we'll have Tyler say a few words, for as long as he likes, and then have room for lots of questions at the end. Without further ado, Tyler.

Tyler Cowen: I'm coming here from a talk at 10 Downing Street on AI. And while that's off the record, I think there are a few things I can tell you about what I said. Also, I think it's within the bounds of the agreement just to report that they seemed highly intelligent and informed about AI, had good attitudes, and I found it a very positive experience. And they had nice things to say about DeepMind. So, that was all to the better. Mostly I'll just say what are some updates I've made over the last two years as AI has progressed. Even if you were not at the talk I gave here two years ago, I think it will make perfect sense to you. But just a few things I told them at 10 Downing Street. Obviously the question is what should we do, right?

And some very practical things I told them they should do: the UK has an asset, healthcare data, really the best data in the world, and a lot of citizen trust in the healthcare system. I told them they should do more with that. The UK can be a leader in integrating AI into business services, which is a long-standing UK export strength, and a lot can be done in that direction without the UK having to build a top-of-the-line foundation model of its own. And then also education. The UK, and indeed most or all countries, should restructure their education systems at all levels so that a significant portion of the education is simply teaching people how to use AI. It doesn't have to be at a technical level, and I understand full well that whatever you teach people will be obsolete for sure two years later, possibly two months later.

But the real thing you're teaching them is simply that they have to learn how to use the thing, and you're making that a big part of the curriculum and just impressing on them that this is a thing to be learned, like reading and writing. And that was something they could do. Those were three things I told them they should do. There are a bunch of other things I told them they should do, but that I feel they can't do, like make energy much cheaper, or maybe they can do it, but they're not willing to do it. So if you're curious what happened there, that would be my brief report. It was pretty good. I had a great audience and I enjoyed doing it.

With any AI talk, you don't always know what kind of audience you're going to meet. There's some worry you've got to show up and just, like, shake people, right? Like, it's happening. I didn't have to do that. So that was, you know, the biggest learning on my end. If I go to an audience in New York City in the US, we used to think New York City was our most sophisticated audience. What you have to do there is shake people. I was at an event in New York City, I guess it was November, not about AI, but there were five of us in the green room: the moderator, the organizers, two people on the panel, all well-known people. And I used the phrase AGI. Not one of the five knew what AGI was. Like, I don't know what it is either in a deep way, but they didn't even know superficially what it might have been. And that's stunning. That's when you have to shake people. But at Downing Street, I didn't have to shake people.

Now, let me just say some of the things I've changed my mind on in the last two years. I don't know if any of these are major. First, the rate of progress in AI quality, I think, is roughly what I expected. But the thing I have a much finer sense of is biases in our measures of quality using benchmarks. So most of you, I suspect, spend a lot of time with benchmarks. And while of course, that's what you should be doing, I think it introduces some significant biases in how you measure progress.

So from my point of view, if you work with benchmarks, you'll think the rate of immediate future progress will be really quite high, because on purpose you're choosing benchmarks that the current systems can't handle very well. And if you're choosing questions for, you know, an exam for your AI, and it just aces a question, it's considered not a useful question, so you throw it out and try to get in more questions it can't handle. So in the period to come, it will look like, oh, it's doing better and better on the questions it couldn't answer. My point of view is different. I'm what you'd call a normal human being.

I've even said, half in jest, but half meaning it, that we have AGI already. So on questions that actual humans ask, I think the rate of immediate future progress will be quite low. But the rate of immediate future progress will be quite low because the rate of immediate past progress has been so high. So on a lot of the questions, you're just not going to get much better answers. So you ask it now, how many r's are in strawberry? It tells you three. That answer isn't going to get any better. It's three. So the rate of progress there is zero. The rate of recent progress, you could say, is infinite; moving from two to three as the answer is a pretty big gain.

If you ask one of the better LLMs, what's the meaning of life? It's kind of a stupid question, right? I still find, say, the three best models will give you a better answer than humans will give you, which is impressive. And I also think five years from now, the answer won't be that much better. I don't have this Gary Marcus-like view that the AI is all screwed up. I just think the answer only gets so good to the question, what is the meaning of life, because it's a bit of a stupid question. What is the meaning of life? So it's close to a perfect answer now. And I didn't feel that, say, the GPT-4 answer to what is the meaning of life was great. It was okay, but not better than what I could have written. Now, it's better than what I can write.

So AI researchers have this bias toward looking for future progress, but if you estimate progress using the actual basket of information people consume, then on most things real humans care about, I think we're at AGI.

Let me give you another example. Maybe some of you work on this, but probably none of you do, and that's microeconomic reasoning. So if you don't know, I'm a PhD economist. Obviously, I've worked a great deal on microeconomic reasoning. I've written a textbook on microeconomic reasoning and published a lot of articles. It's now clearly at the point where Gemini or o3 or Claude would beat me in a competition. I don't bother running it because I know they would win. It would humiliate me. But they win.

So that's a kind of AGI. There's different definitions of AGI. A lot of people now switch it to “it can do any job.” We clearly don't have that. “It can do any job behind a computer,” “it can do any job that involves processing information.” We don't have any of those things. But one of the older definitions was that on at least 90% of tasks, it beats the experts. And I think for that, not on what many of you are measuring, you know, the future progress on math Olympiad problems, but just for what people actually want to know, you could say we have AGI and it's not a crazy claim.

So I think again, we're underestimating progress over the last two years, but we're overestimating what it will be for the next two years, because we don't realize that we've maxed out on so many different dimensions. And when o3 or Gemini, you know, beats me on microeconomic reasoning, I look at those answers and my reaction is it's not going to get much better. It'll get somewhat better. But it's like the number of r's in strawberry. That we know is not going to get better.

If you ask the top models, why was economic growth in Mexico slow in the 19th century, you get very good answers from at least three models. And again, that will get a little better, but we're sort of maxed out there. So my understanding of what's happening has been shaped by this measurement issue. And this corresponds in economics to, like, what's the actual consumption basket that you're using to measure progress? Say you're measuring progress over the last 25 years. If you use internet goods, the rate of progress is incredibly high. If you use the actual consumption basket, the rate of progress is much lower. So just how much the choice of basket matters for measuring progress is something I did not understand that clearly two years ago, but I feel I have a much better grasp of now. Not in a pessimistic way.

A number of other revisions I've made. Two years ago, I thought we would have a big shakeout and a bunch of major AI companies would go bankrupt, something like the dot-com bubble bursting, and then AI would come back in a big way and it would all take off. That's not my scenario anymore.

I think everything has become well capitalized enough, if only through options, that the major companies just will keep on going. And if you take one of the smaller companies, Anthropic, obviously smaller than Google, it's both doing well, it had a great revenue report this morning, but just the option value of buying Anthropic means the companies we all talk about will simply keep on going. I think that's good. It's certainly good for people who work in the AI sector. And that again is an update I've made over the last two years.

I'm also less likely to think that core foundation models will be commoditized. The models to me seem to be evolving in different directions and maybe will not converge as much as I had thought. So for any task I perform, I have a clearly preferred model. For computation, I would definitely prefer Gemini. For most of my actual life, I tend to prefer o3. So my wife and I were traveling in Europe, we were in Madrid for four nights, and we wanted to ask: “What are all the concerts going on in Madrid that Tyler Cowen and his wife would want to go see?” o3 is like A+ for that.

You might need to give it a bit of context. Gemini is very good, but I think o3 would be my favorite. That kind of question is my most common use. And again, we're already at the point where it's just not going to get that much better. Hallucinations are very low. They could still go down a bit. But say Anthropic is very good for business uses, and Gemini is clearly the best for a lot of computational purposes. I'm less convinced that it's all going to converge, and I think there'll be a lot of customization. So modestly away from commoditization has been an update in my view, and that's related to the view that the companies that you talk about won't go bankrupt, because if it's not a single, same-everywhere product, the price is not just competed down to marginal cost. There'll be all these boutique products that will be spinoffs of foundation models, and a number of companies will just be able to make a lot of money. That's good news for all of you, and I think good news for the world, but it would be a change in how I've mentally modeled the sector.

Another thing that's surprised me, I would say positively: you know, we all know OpenAI has been very commercial in a lot of its product decisions, and I feared at the time that that would hurt the quality of their top models. And I also feared at the time that OpenAI, for a number of reasons, would not be able to keep a lead in some areas. Commercialization has proved useful as a cross subsidy for attracting talent, just for keeping the company sharp. And they have great cutting-edge models, and they have some commercialized things which I don't use at all, in fact, but I think that's gone fine. And that's gone better than I thought it would have. That would be another update I have made.

The way in which current models integrate reasoning and search happened better and more quickly than I had thought. Pretty small update, but that's been my big surprise over the last, say, six to nine months. I know you all see these things more quickly than I do. But nonetheless, an update.

Grok has surprised me. It did better more quickly than I had thought. I'm waiting for Grok 4, which I think is coming out July 4th. But the notion that you can catch up with GPUs, brute force, having your people sleep in tents, sending them to a lot of San Francisco parties and talking to others, it's sort of worked. Now for me, Grok is clearly worse than Gemini or o3 or Claude, but it's pretty good, and there are actually a few areas where I think it's the best model. So if I want to know something very recent, I will go to Grok. And that's very useful. So I do use Grok, not just to play around with it, but I actually use it, and we'll see how good Grok 4 is. But I thought it would be worse than Llama, and to me it's clearly more useful than Llama. So that would be another update I made. And I think Grok 4 will be this big moment where we see just how well that strategy is going to work.

People are always debating how much distribution matters. Distribution, at least so far, has mattered less than I would have thought. This is a big question for Google DeepMind. Like, I'm on WhatsApp all the time, all day. And at the top of my WhatsApp is some Llama AI. I tell you, I never use that. I don't care how convenient it is. It might get better with all the hirings, but it really doesn't matter. And not only am I that way, but as far as I can tell, the whole world is that way. They don't care about that distribution channel.

Google sends me all kinds of little messages which confuse me: you can now do this in your Gmail. I haven't responded positively to those. I suspect they're quite good products, or if they're not now, they will be soon. But somehow I get them at moments when I'm not ready to make that leap. So the fact that I use Gmail all the time, you would think, is a big advantage for Google DeepMind AI products. In fact, I don't think it has been so far. So this to me has been a surprise; it could change. I think if it changes, you all are the ones in a position to make it change, probably more, you know, than Meta with Llama.

But simply the brand name of ChatGPT has become a word like Xerox or Google. People haven't heard of Claude or Gemini to a remarkable degree, and they just call things ChatGPT. That has been very sticky. A name that was viewed as stupid, like arbitrary, has turned out to be brilliant. I think that was an accident, but it's one of the best marketing names I've heard, in retrospect. And how much that is people's entry point into talking about what this is has been stickier than I would have thought, and actual ease of distribution, less sticky.

Something I noticed with my own use: I'll have windows open to, say, Gemini, Claude, OpenAI products on my laptop, more or less all the time. But if I have a very recent query, I'll do the extra click to get to Grok. You know, I have Twitter open, but you have to click on that kind of horn symbol to get to a Grok query. And Grok is more useful. I'll actually do the extra click and not feel bad about it. And I think that's pretty stable. I think I would do two extra clicks. But again, the way that has all played out, these differences across the models, which look small, are more persistent and to me more important than I would have expected. So I'm still doing Grok and I don't mind the click, even though I could do other things, because it just feels to me more recent on some narrow subset of questions. So like in the US, we're passing this new budget bill, it's called BBB. What's in the bill changes all the time. If I want to ask what's in that bill, I actually go to Grok, and I feel it's the best for that very specific kind of query. But as someone who writes as a columnist, that kind of query is important to me pretty often.

It also surprised me that there's still not really much regulation of AI. And I would have thought we would have had something by now. The Biden executive order was not that binding to begin with. The Trump people tossed it out. You probably all know the Senate turned down this idea of a moratorium on state-level regulation. So I do think we'll get a fair amount of regulation in the next 12 months from a number of states, probably New York and California leading the way; things will pass. Over time, there'll be some federal consolidation of the different state laws. But still, we've got a long period of time where AI is viewed as a big thing and it's just kept on going.

And I thought a year ago we would have had something. It was not obvious to me that Trump was going to win. And yet, still no regulation. And even though that vote in the Senate was 99 to 1, it was much closer than it looked. Once people knew it was going to fail, they all wanted to vote against it, but it almost didn't fail. So the 99 to 1 number there is quite misleading. We almost walked into this regime where there just was not going to be state regulation for 10 or maybe 5 years. And that to me was also a surprise.

That Trump has seemed so committed to that view of AI, I wouldn't say it was a surprise, it didn't surprise me, but I didn't predict it. And on many, many other issues, Trump has been extremely fickle. On that issue, at least so far, we've seen no fickleness. So that's a kind of surprise, something we might not have expected. The pending deal with the UAE, that has come together so quickly, was also for me a significant update.

I'll just say this in closing my formal remarks before we get to questions. I'm still a believer in slow takeoff, and I've come up with another way of framing this, which I'll present in just two or three minutes. So the way you all think about takeoff is how much progress your systems are making from the point of view of someone doing tech. That's a very valuable way of looking at it. But there's another way, as an economist, where you can ask the simple question, how will it boost living standards? Just look at a household budget, what people spend their money on, and then ask how long it will take before this stuff gets cheaper. So let me do that for two or three minutes.

You look at a household budget, this is not controversial, but people basically spend on rent, food, maybe education, maybe health care, right? So let's talk through those.

Rent. There's nothing in AI, no matter how good it is in the tests, that's going to lower your rent. In fact, it could be that AI makes living together with other smart people more valuable. Rents in the Bay Area, London, a few places could go up. But there's no simple path toward AI somehow making it easier to build homes. The binding constraint is often the law. It's not really the price of construction. So rents are not going down anytime soon. Huge part of people's budget. So the effect of AI on rent for the foreseeable future does not improve living standards, I would say.

How will AI affect the supply and price of food? Again, food, everyone has to eat. One striking feature of the literature on the economics of agriculture is that you can have very simple agricultural improvements and they do not spread geographically very quickly. So you look at the US and Mexico: for the most part, US agriculture is much more productive. There's kind of a free trade zone across the US and Mexico, more or less. The US and Mexico are close, plenty of people in the US speak Spanish, enough people in Mexico speak English. There are a lot of reasons why you might expect a lot of spread. There's been a fair amount of spread, but also really not that much.

So you can have great new ideas that can be way simpler than what AI will give you. And the amount of time it takes for them to spread to other parts of the world can be decades or even centuries. It's not that there's no spread. But as AI gives us innovations, say something genomic that makes food production better, cheaper, more nutritious, fortified rice, whatever you think it's going to be, I think all that will happen. But the time horizon you need for it to make food truly cheaper for just a typical family in London, US, or for that matter, Mexico, I'm not sure food will really be any cheaper in the next 10 years. Again, even with a very optimistic view of AI, I'm not pushing the Gary Marcus line here, I don't agree with him at all. It just takes a long time to get ideas put into practice in agriculture. So two biggies, rent and food, we're kind of stuck.

Education, very different. This to me is super complicated. I would say we already have millions of people who are much smarter and better educated because of AI. There's nothing speculative about that. It's just self-evident. We have a lot of other people who use it to cheat; possibly they're stupider. I'm not sure. I think some of that cheating you learn from, but it's complicated in any case. And how far are we from a point where the existence of AI, say, makes 2/3 of high school students smarter and better educated? There, I genuinely don't know. I would say there's some 5 to 10% who right now are just massively smarter and better educated. But to get to the 2/3 point, how long will it take us? I don't know, it's highly speculative. But it's not obvious to me that it's coming in the next few years. Educational institutions, they're in denial. Some of them hate AI. They're typically run as nonprofits. They have a lot of different constituencies that have to agree before they'll make a big change. So if that were 10 years or more, that wouldn't surprise me. Again, I'm saying I don't know. A lot of it will spread outside of schools, of course.

I don't think it will, anytime soon, lower how much we have to spend on education. Like the price of tuition at Harvard or a state school, it's actually fallen a bit over the last dozen years in real, inflation-adjusted terms. I don't know that it will fall more because of AI. So that one is a question mark. I would say an extremely asymmetric distribution, but possibly longish lags before it hits most people, and even then they're smarter, but they still have to pay all this tuition. You might think there's some more distant future where the AI certifies you and you don't have to pay tuition at all. That could be great. But then to me that's clearly more than 10 years off. And it's not a question of the AI not being good enough.

As I said before, on most things humans care about, the AI is already smarter than we are, and the AI being smarter on math Olympiad problems, for the high schooler, it's irrelevant. You know, even if I compare, like, o3 and o3 Pro, I'm a PhD economist. If it got better than o3 Pro, like, I'm not sure I would always notice. So we're at some frontier already, where making it better does not educate humans more. Although for all kinds of technical problems and bio, finance, trading, whatever, it'll be much, much better, more or less indefinitely. But for the humans, education, I would just say, is a big question mark.

And then there's healthcare, which I also think is quite different from rent and food. The way I would analyze that is I think AI, over some time period, I don't know, 30 years, 40 years, will basically cure everything. So I'm very optimistic about this. The work you're all doing, it's fantastic. I hope I live long enough to benefit from it. Just incredible. The Arc Institute, you know, in California. There's a lot of regulatory barriers. So for me, very little of that is a five-year thing, but definitely a lot in 20 years. Like FDA approval is typically 10 years even when something's ready. So that to me is at least a 20-year thing, but it's very real. So my vision of the AI future is say in 40 years, everyone dies of old age. So most of you in this room, you know, buckle your seatbelts, but you'll live to 97 or whatever that number is when you die of old age, and you won't have Alzheimer's for the last 14 years. That's incredible. It's a huge gain. But in terms of your living standard now, I think it basically means there'll be more treatments and more medicine. So the percentage of GDP spent on healthcare, maybe goes up to 30%, which in my view is a good thing.

The end gain is you get to live to 97 and along the way you feel much better. It's what you should be spending your money on. But for your life, you know, up to age 80, say now you can expect to live to 82, you feel somewhat better, but there's actually higher health care costs because there's more treatments. So up until age 80 or something, like your living standard is not higher. Only when you start to get really sick is your living standard higher. So for the first at least 70 years of life for most people, you don't have higher living standards from better healthcare, your rent isn't lower, food is cheaper over some horizon, but maybe not that much. And then education is this complicated thing, but obviously anyone in this room will be much smarter and better educated because of AI.

And when you put all that together, again, you can be massively optimistic about model progress, as probably many of you are, or you wouldn't be working here, and I would not disagree with, you know, the median estimate in this room of model progress. But in terms of that making a big difference for GDP, living standards, changing the world, I just think that's pretty slow and it's a tough slog. And it's not a tough slog because there's anything wrong with the models. It's just that human institutions are imperfect and move very slowly. Anyway, those are some of my basic takes on things. There's plenty of time for questions, and with that, I will sit down. Thank you all for having me in.

ZW: I will take advantage of hosting and ask a couple of questions, but then we'll alternate between the room and the call. Okay, so Tyler, if not benchmarks, what is a realistic thing researchers can focus on instead?

TC: I don't think it's an instead. You should keep on doing everything you're doing with benchmarks, but have some alternate standard, create a consumption bundle benchmark of what people actually use AI for, which could be like naming the dog. We use it to diagnose our dog. The dog has a rash, we take a photo, we ask the AI what's wrong with the dog. The AI says the dog is fine. We save ourselves a trip to the vet.

I'm a relatively sophisticated user of AI, and I'm like snapping a photo of the dog's rash. So create a bundle of actual uses, you know, weighted by money importance, time importance, and see for those things what progress are you making? What's the living standard gain for the normal user? And I just think that will give you perspective and you'll see progress over the last two, three years has been unbelievable, way better than you thought. But on a lot of those fronts, it's not going to get any better.
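
As a rough illustration of the consumption-bundle benchmark Tyler sketches here, a minimal example in Python follows. The use cases, weights, and quality scores are hypothetical placeholders chosen purely for illustration, not data from the talk.

```python
# Minimal sketch of a "consumption bundle" benchmark: score models on what
# people actually use AI for, weighted by how much each use matters.
# All use cases, weights, and scores below are hypothetical placeholders.

# weight: share of money/time importance (sums to 1.0)
# scores: perceived answer quality (0-100) for two model generations
bundle = {
    "everyday questions":     {"weight": 0.40, "scores": {"2023": 78, "2025": 85}},
    "travel and planning":    {"weight": 0.25, "scores": {"2023": 70, "2025": 88}},
    "pet and health triage":  {"weight": 0.15, "scores": {"2023": 55, "2025": 80}},
    "frontier math and code": {"weight": 0.20, "scores": {"2023": 30, "2025": 65}},
}

def bundle_score(bundle: dict, year: str) -> float:
    """Weighted average quality across the whole bundle for a given year."""
    return sum(use["weight"] * use["scores"][year] for use in bundle.values())

for year in ("2023", "2025"):
    print(year, round(bundle_score(bundle, year), 1))

# With these made-up numbers, the frontier task improves a lot in relative
# terms, while the everyday items are already near their ceiling, so the
# weighted bundle has much less headroom left.
```

Whatever the exact weights, the point is that the choice of basket drives the measured rate of progress: a frontier-task basket will keep showing big gains, while an everyday-use basket shows large past gains and limited room to improve.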

ZW: How much will the labs or the model creators capture the gains of AI?

TC: When you say model creators, I've seen a lot of recent data and gossip about different salaries. So that's my best estimate. I don't think those will fall anytime soon. So you'll all be fine. If you mean at the equity level, I'm not sure there are huge gains from buying into those companies. Anthropic strikes me as undervalued because GPT is so focal a word with investors, they don't understand how well Anthropic is doing in the business market. That might be undervalued. Google and Meta, it's this complex bundle of stuff that is so opaque and hard to unpack. It would not surprise me if those two both did great things but lost money on the investments. I don't know how much the markets have already capitalized that. OpenAI, I think will do very well. Nvidia, I'm nervous that it's a little overpriced, but because there will be good substitutes, not that I think there's some defect in the company. So I'm very bullish on the sector, but if I had what you would call real money, I wouldn't be pushing all my chips into it. I think it'll be a mixed picture and it will do quite well. But markets are good at figuring things out and they're already at work trying to do that.

ZW: To the extent that individuals or companies can use AI to accelerate themselves, like why don't people do it more? When you say you would do one click, maybe two clicks to go to a better model, isn't that sort of not normal and most people wouldn't actually do that?

TC: Most people, you know, there are different estimates of this, of what percentage of people actually use ChatGPT or something like it. I think the true estimate is actually fairly high, but it's very psychologically circumscribed. The same people in other contexts view it as a threat. They don't want to think about it too much. They don't want to have to think about policy for AI. And they don't want to have to think about the facts that will change the nature of their job a lot. And they cannot actually explain to their kids or grandkids what kind of world they'll grow up in. So there's this extreme psychological bifurcation, and I think that will prove pretty sticky. And one of my self-accepted missions is to go around and just shake people a bit and say, look, this is happening. It's fine that you use GPT to, you know, write your commencement speech, but you actually have to learn a lot more, because many things will change a lot. And I feel that with the people I get to talk to, I've had good progress doing that.

But most people, and being an elite does not inoculate you against this, most people are asleep. The East Coast is way worse than the Bay Area. The Bay Area is advanced and sometimes, in my view, way too crazy. Countries, I've found, vary. Europe, basically not there. I spoke to a lot of people in Madrid. Everyone said we're not there yet. It's pretty tough going. So the world will be hit by these shocks, and people are reluctant to start preparing now for psychological reasons. I think you see a bit of the same when people ask, well, will China invade Taiwan? I'm not sure, but they might, but there's nothing you can do about it that's so obvious, so people just postpone it because they don't know. It's a big thing that would change the world a lot, could be quite menacing. AI also, though I view it as a big net positive, the fact they'll have to learn almost an entirely new job, people typically kind of hate. If you poll people about AI, a lot of them will say they hate it. I don't think hate is what they actually mean, but we should take that word seriously. There's something about it they hate. It's going to be very tough. I think we're underrating the social stresses that will result.

ZW: Are you down for a quick-fire game like Overrated or Underrated, but Rising and Falling in status, say, when we're 97 because of AI? So I give you a term, and you say rising or falling.

TC: Oh, of course. Yeah, yeah.

ZW: Great. And feel free to pass. The Western Canon.

TC: The Western Canon will make a big comeback. So Hollis Robins tweeted that in one of the Star Treks, was it Captain Picard, he works with his version of AGI, and he reads old books. New books somehow don't match in an AGI era. So we already see it's like a thing on the internet: have you read Middlemarch? Have you read the classics? Shakespeare, all that, I think, is already making a big comeback, and that's more or less permanent. Old books, the classics, will rise in status. New books will feel like they don't really fit. I think at some time horizon we'll reconfigure how new books should be written to make sense in an AGI world. But for the next 10, 20 years, I would be very short books.

ZW: The hedgehog and the fox.

TC: I think being a great hedgehog or fox is what will go up in status, and being a mediocre one of either will fall in status. I worry there are a lot of people now, they come from well-to-do, well-educated families. They would expect, say, to work for McKinsey or something. McKinsey already is hiring way fewer people. They might have ended up in, like, the 94th percentile of the income distribution and been a high-status professional with sort of the perfect marriage they felt they wanted. And I think a lot of that will disappear. Those people are smart, conscientious, they'll do fine, but in terms of status, they won't get anything close to what they wanted. And they'll maybe be out-earned by some very good carpenters and gardeners. And they're not ready for that. Those people are not going to take up the pitchfork and kill the rich. They kind of are the rich. And they're very politically influential, they know how to organize. I don't know what they're going to do, but I fear politically that will be very ugly in a way we're not ready for. That's one of the things I worry most about with AI, just the very rapid redistribution of status away from groups that can make a big stink about it.

ZW: India.

TC: Well, still underrated. It's been on the cover of Time, The Economist, so you think, well, now it must be overrated. But Indian per capita income is still quite low. I know the numbers are somewhat fudged, but I think it's growing at 6%. South India in particular right now has incredible talent. I don't see any reason why that 6% has to stop. Most people are not very good at thinking about compound returns, even professionals, and thus they underrate India. So I'm very long India. I don't think it will ever not be a mess. It'll never be like a big Indian Denmark, but still India will be like a much larger Mexico, and that will be amazing, and it already is.

ZW: Future people, so how much we value people who aren't born yet.

TC: Will we care more about them?

ZW: Yes.

TC: I don't think we care that much about them now, unless they're our children or grandchildren. Like, you see, as you get older, you know more older people. They don't give a damn about their great-grandkids. It's very interesting. Like, it stops at the grandkids. I didn't know that when I was younger. That seems somehow like a genetic propensity. I don't think it'll change. That to me stays flat. And you look at carbon taxes: almost all economists would say they make sense. I think they make sense. No one wants to do them. We had a few, and they were, like, repealed or pared back because people don't give a damn enough. I don't see that changing.

ZW: The Abrahamic faiths as opposed to other forms of spirituality or religion.

TC: I think intellectuals in particular now in the West are becoming much more religious. They feel disillusioned with the political options, which from my point of view, and I think many would agree, all kind of stink a bit. So you go to religion, you go to the classic books. But I think the AIs will be oracles of a sort and will blend religions more and not worry about that. So people will be like part Christian, part Buddhist, part something else, and that will just be natural, and the AIs will somehow intermediate these different ideas, and it will actually work well enough. So they'll all go up in status. But probably nominal monotheism, in fact practiced as a kind of semi-polytheism with the AIs as oracles, is what will really become more significant.

ZW: Committees of humans, like the FOMC, the NSC, the Politburo.

TC: Well, they suck, right? And it will become more obvious. You'll be able to evaluate the committee against the AI. At least at first, the committees will sneak in their use of AI. I don't think at this moment AI is good enough to beat the Fed staff, but I'm not sure it's worse. So the Fed will build its own AI, and the committee will become progressively less important, and being on the committee won't be this mark of status it had been. It's really the people who build the bridges and intermediate, like who cleans the data, who puts it into the Fed AI, who creates, you know, a cyber-secure Fed AI, and then who makes the pitch to the committee and the chair to actually do what the AI said. It will be this complex organizational process. That'll be very important. In a lot of cases, I think we'll do it pretty well. But the committee itself will lose its luster.

ZW: Okay, last one for me. The greatest city in the world, London.

TC: Well, that's obvious. I don't see what the competition is. So Tokyo has remained provincial, though it's amazing, and it has the best food in the world, and it's cheap, but it's just not cosmopolitan enough. And the linguistic difficulties are too high. So that really can't win. New York is a contender, but, like, you've been to New York, right? Like, so much of London is nice. In New York, maybe Riverside Park is nice. Maybe a few parts of Brooklyn, but not really. So you could have all this money, like, you would want to live in London if you don't want to earn, you just want to consume. To me, this is an obvious first choice. I think the weather is better than people let on. It's not that bad. You're in the best time zone in the world. The airport could be better, but the other options are growing. And yeah, that to me seems pretty stable. I don't know if it will go up or down in status, but yeah, it's secure.

ZW: Okay, great. We'll take the first question from the room.

[We have removed the audience Q&A for the privacy of the questioners.]
