This is such a wonderful piece. I hope to see more of this.
Thanks Austin. More to come!
I really enjoy this as a direction for big tech policy teams, when a lot of the time that work feels like just damage control.
For one, I think opening up the decision-making on which problems to work on would be extremely helpful, or at least giving scientists more visibility into it. I'd like to support AI for science but largely never know where to start.
“Tell the stories”
This is an excellent writeup. Thank you for taking the time to write this
Thanks Devansh! We're big fans of your newsletter, so that means a lot.
No way. We should collab on a few posts.
Yes!
Shooting you a message right now
You write, "Given the importance of scientific progress to almost every major economic, environmental and security goal, it follows that science, and the potential for AI to accelerate it, should be a top priority for any government."
What should be a priority for every thoughtful citizen is the attempt to understand how much knowledge and power human beings can successfully manage.
The science community's "more is better" relationship with knowledge essentially assumes that human beings are gods, capable of managing ANY amount of power delivered at ANY rate. Is that true? Are we gods?
If we're not gods, what are our limits? How fast are we moving towards those limits?
Deciding what is desirable or acceptable in the pursuit of knowledge is a philosophical and political question, which we don't address here directly, and which of course extends beyond private-sector labs. But some of the activities we discuss, such as problem selection, responsibility, and monitoring, could help identify and manage some of the implicit risks attached to picking problems and approaches to working on them.
Of course, in other areas the current challenge is too little knowledge rather than too much, and in many fields of science the more we learn, the less we know, as we just open up entirely new worlds of complexity. Are there specific areas you have in mind, Phil, when you call for limits?
Thanks for your reply, appreciated.
Arenas like particle physics and genetic engineering come to mind as areas of concern, but honestly, I don't feel particularly qualified to create a list of specific technologies that should be limited or banned.
My focus has always been more on the philosophical foundation of modern science, which I refer to as the "more is better" relationship with knowledge. I'm attempting to point out that such a "more is better" paradigm is in conflict with the limited nature of human ability.
In my view, the "more is better" relationship with knowledge made perfect sense in earlier times, when our ability to develop new knowledge was very limited. And of course that knowledge philosophy delivered many benefits, too many to begin to list. But today, I think of the "more is better" concept as a 19th-century philosophy that could be said to have gone out of date with the bombing of Hiroshima in 1945.
I don't see my writing on this topic as particularly practical, given that I can't offer any specific policy prescriptions, and a decade of discussing this has taught me that it is too big an idea for people in general to take in, especially if they are in the knowledge-development business. My best understanding now is that we aren't going to learn this through reason alone, but more likely through the hard experience of some historic calamity.
If you're still interested, here are two articles where I attempt to expand on this perspective.
The Logic Failure At The Heart Of The Modern World
https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of
Our Relationship With Knowledge
https://www.tannytalk.com/p/our-relationship-with-knowledge
Thanks again for your response. I'd welcome any further discussion which may interest you to pursue.