11 Comments
Austin Morrissey

This is such a wonderful piece, I hope to see more of this

AI Policy Perspectives

Thanks Austin. More to come!

Nathan Lambert

I really enjoy this as a direction for big tech policy teams, when a lot of times it feels like just damage control.

Nathan Lambert

For one, I think opening up the decision-making on which problems to work on would be extremely helpful. Or more publicity to scientists around it. I'd like to support AI for science but largely never know where to start.

Nathan Lambert

“Tell the stories”

Devansh

This is an excellent writeup. Thank you for taking the time to write this.

AI Policy Perspectives

Thanks Devansh! We're big fans of your newsletter, so that means a lot.

Devansh

No way. We should collab on a few posts.

Devansh

Shooting you a message right now

Comment deleted
Dec 13, 2024
AI Policy Perspectives

Deciding what is desirable or acceptable in the pursuit of knowledge is a philosophical and political question, which we don't address directly here, and which of course extends beyond private sector labs. But some of the activities we discuss, such as problem selection, responsibility, and monitoring, could help identify and manage some of the implicit risks attached to picking problems, and to the approaches taken in working on them.

Of course, in other areas the current challenge is too little knowledge rather than too much, and in many areas of science the more we learn, the less we know, as each advance opens up entirely new worlds of complexity. Are there specific areas you have in mind, Phil, when you call for limits?