11 Comments

Austin Morrissey:

This is such a wonderful piece; I hope to see more of this.

AI Policy Perspectives:

Thanks Austin. More to come!

Nathan Lambert:

I really enjoy this as a direction for big tech policy teams, when so often their work feels like just damage control.

Nathan Lambert:

For one, I think opening up the decision-making on which problems to work on would be extremely helpful, or at least more publicity to scientists around it. I'd like to support AI for science but largely never know where to start.

Nathan Lambert:

“Tell the stories”

Devansh:

This is an excellent write-up. Thank you for taking the time to write this.

AI Policy Perspectives:

Thanks Devansh! We're big fans of your newsletter, so that means a lot.

Devansh:

No way. We should collab on a few posts.

Devansh:

Shooting you a message right now

[Comment deleted, Dec 13, 2024]

AI Policy Perspectives:

Deciding what is desirable or acceptable in the pursuit of knowledge is a philosophical and political question, which we don't address directly here, and which of course extends beyond private-sector labs. But some of the activities we discuss, such as problem selection, responsibility, and monitoring, could help identify and manage some of the implicit risks attached to choosing which problems to work on and how to approach them.

Of course, in other areas the current challenge is too little knowledge rather than too much, and in many areas of science the more we learn, the less we know, as each answer opens up entirely new worlds of complexity. Are there specific areas you have in mind, Phil, when you call for limits?
