2 Comments
Steeven

> we might run experiments where we ask people: “Should the government use its budget to build more high-speed railways connecting cities, or should it focus more on local infrastructure?” People will report what they initially believe, and be assigned to a conversation with an AI that helps them explore the topic.

This is an interesting idea, but I'd guess the vast majority of people (myself included!) couldn't form a rational opinion about this without developing expertise that would take far more than one conversation. I'm not even sure questions like this have a solid answer, since it depends on which cities are being connected and what the local infrastructure is.

To fix this, I'd try to first gauge someone's level of expertise in a topic, or even ask them about a weakly held belief where they don't have strong outcome attachments, then apply AI to that conversation. I think there's too much danger here that the human couldn't detect nonsense arguments, so there's a higher risk of the AI manipulating them because it's so easy to get away with it.

Conor Griffin

Indeed, I think capturing baseline expertise will be very important, as will understanding differences in how topic experts vs non-experts respond. And it's good to think through how best to do that.

On top of that, though, there will likely be a need to capture the impact on non-experts, on topics like high-speed rail where public opinion can be at least somewhat influential in what governments do (see e.g. the UK's recent experiences). But as you say, it should account for local contexts and dynamics. Thanks for reading!