"Why Trust Science?": Naomi Oreskes on the long struggle for truth, and what went wrong

Science historian goes deep on how science came under attack from the right — and how the divide might be bridged


PAUL ROSENBERG
OCTOBER 13, 2019 2:30PM (UTC)
In December 2004, science historian Naomi Oreskes published a study in Science magazine refuting the assertion that climate science is highly uncertain. In a literature survey of 928 papers, she found that 75% either explicitly or implicitly accepted the consensus explanation of human-caused global warming, while the remaining 25%, dealing with other matters, took no position. "Remarkably, none of the papers disagreed with the consensus position," Oreskes wrote. 
Oreskes broadened her scope in 2010 to explore why that consensus remained obscured, blocking the action that science tells us is urgently needed. With Erik Conway, she co-authored "Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming." 
Now she's broadened her scope further with a new book, "Why Trust Science?", which answers that question by telling the 200-year story of how scientists and those who study them have come to understand how and why science works. It isn't because of any magic-bullet method, or because scientists are godlike beings who don't make mistakes. It's because of the social nature of their enterprise and how it turns potential sources of individual bias and error into a much stronger collective project, one made stronger still by becoming more inclusive and more connected with the whole of humanity. Oreskes also explores five examples of "science gone awry," where others have claimed that scientific consensus has failed, but which turn out to illustrate significant aspects of the history she tells. I interviewed Oreskes by phone recently. This transcript has been edited for clarity and length.
Your book is called “Why Trust Science?” Why is it so important to answer that question right now? 
I think we've all seen the ways in which scientific conclusions have been challenged in recent years. Climate change is the obvious one that I've worked on. We've known for more than 30 years now that the climate was changing because of human activity, but there's been a lot of passive denial as well as active attempts to discredit, undermine and challenge the science, which has led quite a few people on the conservative side of the political spectrum to distrust the science and distrust scientists. 
But it's not just climate change. We see it in the area of vaccine safety, we see it coming up now in whether or not we should be eating red meat, we see it in the latest issues about vaping. There are many areas of public life where there are important decisions that people need to make, both at a personal level and as a society as a whole, that really require us to understand and accept scientific results. That work has been made much more difficult by the generation of public distrust of science. 
You trace the development of our understanding of how and why science works, starting with the positivist explanation put forward roughly 200 years ago, which dominated for the first hundred years or so. How did it start off, and how was it refined?
It's a complicated history, and part of the challenge of this book is to take that and reduce it to a readable 200-page book. I start with positivism because it's probably the single most influential school of thought in the history of the philosophy of science; many, many scientists in the 19th century were themselves very deeply influenced by it. When you think of positivism, think of it as a kind of noble aspiration: the dream of positive knowledge, by which people mean knowledge that we could be certain was correct, as in the English phrase, "I'm absolutely, positively sure." 
The idea that we can achieve certain knowledge if only we would follow certain methodological practices turned out not to be true. So, one of the big questions of philosophy in the 20th century became, "Well, why didn't that dream of positive knowledge work? And if it doesn't work, then what can we do instead?" 
When I was in graduate school back in the 1980s, everyone said positivism was dead — although in some ways it actually wasn't. But the big question was: OK then, now what? For some people, the collapse of positivism became a license to distrust science, or to say that it was merely a social construction, or all kinds of other things, and that never felt right to me — partly because I was trained as a scientist myself, I like science, I find science a very rewarding activity — and I also knew that the history of science showed that scientists had had many great success stories. So there had to be some explanation, some account that could make sense of how science could be successful, even if it wasn't the vision that the positivists had in the 19th century. 
So the logical positivists brought things to a certain point, and then Karl Popper sort of blew that up. 
Well, it wasn't only Popper, but yes. In the 20th century, the so-called logical positivists focused on the hypothetico-deductive model of science, because they saw in it a logical structure that helps explain the success of scientific work: you have a hypothesis, you deduce its consequences, and then you see if those consequences are true. That's what most of us, I think, were taught in school as the scientific method. 
Popper said this doesn't make any sense: if you look at the hypothetico-deductive model, it's logically flawed, because a theory can predict an outcome, that outcome can come true, and yet the theory might still be incorrect. He argued that you could never prove a theory correct, but you could prove a theory incorrect. 
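Put in standard propositional logic (a textbook rendering of the asymmetry Popper is pointing to, not notation from the book or the interview), a hypothesis H that implies an observation O is not established when O is observed, but it is refuted when O fails to appear:

\[ (H \rightarrow O) \wedge O \;\not\Rightarrow\; H \qquad \text{(affirming the consequent: invalid)} \]
\[ (H \rightarrow O) \wedge \neg O \;\Rightarrow\; \neg H \qquad \text{(modus tollens: valid)} \]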
So he flips the logical positivist model on its head. He argues that the goal of science isn't to prove theories but to falsify them — that good scientists would always be looking for evidence that their theories were wrong. One of his most famous essays is called "Conjectures and Refutations." He says basically that science is a whole process of conjectures: We make guesses. It doesn't really matter where the guesses come from; the important thing is that we then test them, and if a guess is refuted, we throw it out and start again. 
You note that Popper’s epistemology, like his politics, was individualistic, but several people were responsible for the shift to a more collective or sociological focus on the scientific community. You point to Ludwik Fleck and his idea of the thought collective as a starting point.
Fleck’s key contribution was to focus attention on the community structures of science. As you said, Popper was a radical individualist, both in his epistemology and in his politics. He argues that the good scientist [singular] should always be looking for refutations of his theories. 
Many people pointed out, "Well, Karl, that's really not what most scientists do. Scientists are trying to prove their theories." And Popper's response was to say, "Well then, they’re bad scientists." But that’s silly. Because if you say they’re bad scientists, then basically all scientists are bad scientists — which, incidentally, I think Popper believed. 
But Fleck says, "No, that's just silly. If you actually look at what scientists do, they're not doing what Popper says, and they’re not doing it alone." What Fleck argued was that for anybody who has ever done science — whether it's microbiology, geology, physics, whatever — what you always see, in fact one of the most conspicuous features of science, is its collective character. 
Unlike art, where you could find an artist working alone in their atelier, or poetry or writing or music, where you can find creative people working alone, in science it's never that, because scientists bring their claims to each other to vet, to discuss, to adjust. 
This is the other critical part of Fleck's argument: When scientists do come up against a problem with a theory, they don't just throw it out the way Popper suggested. They try to figure out if there is some adjustment, some refinement that could solve the problem, and in this way, Fleck argues, scientific ideas change over time. 
It's a bit like a game of telephone. At the end of the day, the idea you end up with could be quite different from what you started with, and you'd be very hard-pressed to say which of the individuals who had participated was responsible for the final outcome. He said, no, actually they're all responsible for the final outcome. Scientific knowledge is the product of whoever has participated in that conversation, and the outcome doesn't belong to any one person, it belongs to the community as a whole.
So Fleck is really important, though he didn't get that much attention in his day for his work. But it gets picked up by Thomas Kuhn, who makes it really the centerpiece of his argument in “The Structure of Scientific Revolutions,” the single most cited work ever in the philosophy of science.
Kuhn's work becomes super famous, super influential, and through Kuhn the notion of science as a collective or community activity becomes mainstream, so that by the time we get to the 1970s it's really accepted that part of the reason critical rationalism (that's Popper's theory) failed is that it doesn't acknowledge the community structure. 
You also discuss another contributor to Kuhn’s work, Pierre Duhem. 
Duhem is my favorite, for a couple of reasons. I love him because I was trained as a geochemist, and I had to learn the Gibbs-Duhem equation. It was only many years later that I found out that Duhem was not just a great scientist but also a historian and philosopher of science as well. And he really thought hard about what it means to make a scientific discovery, to have a scientific achievement. 
He’s also a religious Catholic, a very conservative figure intellectually and theologically, which makes him kind of fun in a way, because sometimes people see the social interpretation of science as somehow a left-wing position, and that's really silly, because, first of all, it's not. Duhem is just a fun character because he defies a lot of people’s stereotypes. 
He says the big thing a lot of these other people are missing is what later comes to be known as "under-determination," although he doesn’t use that word. He says: when I have a theory, I do an experiment to test it. If that test goes wrong, I don't just toss out the theory, because it's possible there’s something wrong with the theory, but it's also possible there's something wrong with my instrument, or that I just didn't calibrate the instruments properly that day, or that I made some other assumption that might be incorrect. 
So, he says, this is the key problem: If a test doesn't show the theory to be correct, how do we know which piece of the puzzle is the wrong piece? And what he says is: judgment. So, for example, if I do a test that seems to tell me that the conservation of energy is incorrect, he says, I'm not going to believe that, because the conservation of energy is such a well-established idea; it works so well in so many different contexts. I'm going to assume that it’s something else. It's only after repeated efforts, if I can't find anything else wrong, that we may go back and reconsider something as fundamental as the conservation of energy. 
Although Thomas Kuhn doesn't acknowledge this explicitly, you can see this is a really important argument for him when he talks about paradigms and paradigm shifts. He makes essentially the same argument: if we find a problem — what Kuhn calls an "anomaly in the paradigm" — we want to sort everything out within the paradigm. We work incredibly hard to try to figure out some way to resolve that anomaly, to blame it on an instrument or to find some smaller adjustment of the theory that will, as we said in the old days, "save the phenomena." It's only after repeated attempts to resolve the anomaly that we eventually, maybe, say, "OK, we have to revise the paradigm."
So Fleck and Duhem both paved the way for Kuhn and “The Structure of Scientific Revolutions.” You draw attention to a more nuanced picture in his earlier work, “The Copernican Revolution,” expressed in terms of "a bend in the road," and you call his own work a bend in the road in the story you trace. So what does that phrase mean, and why is it important?
The interesting thing about Kuhn is that lots of people read “The Structure of Scientific Revolutions,” but never read the previous book, where Kuhn developed his thinking and really engaged in a deep empirical way with a specific example in the history of science. In “The Copernican Revolution,” he does not describe a scientific revolution in the same way that he later does. He does not propose "incommensurability." 
Rather, he says a scientific revolution is like a bend in the road: When you’re standing at the bend, you can see where you've come from, but you can't see where you're going, and you can understand how the road you have been on has led you to this important juncture. But once you go around the bend, often the rest of the road fades from view; you can't see it anymore. Then, in the future, you look back and the whole thing seems radically different from what you believe now. You find yourself asking, "How could intelligent people ever have thought that? It seems kind of crazy."
So part of the task of history, Kuhn argued, is to figure out what was the pathway that led intelligent people to believe certain things over time that then led them to believe something else. Once you’ve gone around the bend, the previous thing looks sort of incomprehensible. But it isn't really incomprehensible if you follow the steps. 
Now interestingly, later on Kuhn takes a more radical view. He actually comes to the conclusion that the new view is incommensurable with the old one. So, obviously, my views are closer to the early Kuhn: The apparent incommensurability of paradigms arises from an ahistorical view of science, where we see these things in isolation and we don't understand the process by which we got from one to the other.
So how does this apply to Kuhn’s own work, how is it a bend in the road, as you call it?
Because after Kuhn, no one really denied the social character of science. Everyone said, "Yes, that's right!" Scientists don't work alone, they work in communities and the outcome of scientific research is scientific consensus on a stable worldview which most of the time isn’t questioned. It's only questioned if something comes up that feels wrong. So that part of Kuhn's description felt really right to a lot of people, including a lot of working scientists. 
That move — to focus attention away from individuals and toward communities — was a permanent shift in the history of science. It even changed the way biography was done. My colleague Janet Browne has written an absolutely amazing two-volume biography of Darwin. Part of what she does in that biography is to situate Darwin in his time and place, and to show you the network of people with whom Darwin was connected, with whom he was discussing his ideas, with whom he was corresponding, and how this was a crucial part of how he developed his ideas, not alone but in communication with other scientists.
So what then followed in the development of science studies? What was strengthened, and what emerged as problematic?
One of the interesting things about Kuhn's work, as I just said, is that scientists largely really liked it. They saw a picture of science that made sense to them, that looked familiar. A lot of scientists accepted that and just sort of jumped over the incommensurability. They didn't pay much attention to it. 
With the social scientists, it was the opposite. They really glommed on to the incommensurability claim. They thought that was the most important part of the work. Because if scientific theories really are incommensurable, and there's no objective way to judge one paradigm against another, but it’s simply a matter of the subjective judgment of the community, that opens the door to a whole lot of questions about scientific rationality. For some people that was an invitation to say that science wasn't rational at all but was, as one critic called it, "mob rule."
For others it was an invitation to think hard about the social structures, the way interests played a role in what scientists decided, and all the other arational, non-rational, irrational or super-rational factors that could play a role. My own view is that this literature is very important; there was a lot of good work, but I also think it went too far. At least in some cases, it was a bit silly, because it did seem to imply that science was no different from any other human activity and therefore didn’t in fact merit any particular form of trust. That, I think, doesn't make sense, because it doesn't explain why science should be as successful as it is.
So how did "feminist standpoint theory" provide a way out and a way forward?
This for me is the most fun part of the book, because feminist standpoint theory goes back to the 1980s. So it was already developing at the same time as science studies, but a lot of people in science studies, particularly the Edinburgh School, really ignored feminist standpoint epistemology, and ignored feminist philosophy of science in general. Meanwhile a lot of scientists were horrified by it, thought it was an attack on science, got very defensive, very hostile. My argument is that if the scientists hadn’t been so busy taking offense, they would have realized that there was actually a tool there they could use. 
What feminist philosophy of science does is to explain how it's possible for science both to be a social enterprise and to be objective. This is what people like Helen Longino and Sandra Harding call "strong objectivity." It shifts the locus of objectivity away from the character of the individual scientist: objectivity is not a trait of an individual. 
I mean, an individual could be more or less objective, but that's not really what's of interest, because every individual has their subjective biases; that’s just life. But when individuals come together in groups whose purpose is to vet claims in a rigorous, critical way, then that diversity actually becomes a strength. Because you're not claiming that any one individual has to be godlike and expunge all his or her preferences; you’re simply saying we can be regular, ordinary people who have opinions and views and prejudices, but when we come together in a group, we expose those prejudices or preferences to scrutiny, and then we can kind of get past them. 
Therefore, the more diverse the group is, the better the odds we could get past them. So it becomes an argument for diversity, but not just at a kind of social justice level — which is not to say that social justice is unimportant — but in addition to the social justice argument, there’s also an epistemic argument.
We still have people arguing that those are opposed. But they’re actually synergistic.
Right. If my book could do one thing, my one wish would be to break through that logjam and to say exactly what you just said. The argument is that they’re synergistic. It's not that diversity functions in opposition to the goal of scientific rigor; it’s that diversity, when done right, actually increases scientific rigor and objectivity. 
You make a related argument that values cannot be eliminated from science. They’re already embedded in it, because scientists are people. So what do we need to learn about the relationship of values and science?
In my own work I’ve become sensitive to the issue of how values inform the rejection of science. But I also know, as a person, as a human being and a social scientist, that the idea that scientists could expunge their values and look at anything in a completely value-neutral way is just not possible. That's a pipe dream. And even if it were possible — I mean, a person who had no values, we would consider that person to be a sociopath. The idea that it would be something to aspire to is actually kind of a weird idea. So what do we do with that?
I want to say two things. The first is that the feminist argument about objectivity works for values as well: Rather than imagining an ideal world where people can expunge their values, we just accept that we all have values. But in a collective environment where we’re operating in good faith and subject our claims to critical scrutiny, if our values are wrongly affecting our science — causing us to ignore or discount evidence because it's not consistent with our values — then a colleague can say, "Hey, you're ignoring this evidence." 
And in fact, this is exactly what the feminist evolutionary biologists did with a lot of very sexist theories about human origins. Feminists pointed out, "Well, look at this evidence you're ignoring," and when they pointed it out there were a lot of scientists who said, "Oh, yeah, you know, you’re kinda right about that." So a lot of us really are willing to be open-minded if a person in good faith can point out that we’ve missed something or ignored something. So that’s the argument for strong objectivity.
There’s another point I bring out in the book that comes out of my personal experience trying to work on climate change. This is partly anecdotal but also fits with social science research. When scientists claim that their work is value-free, for a lot of people, that just doesn't pass the laugh test. 
People think, "Well, that can’t be true," and so they become suspicious. They think, well, what are you trying to hide? This then feeds into certain right-wing conspiracy theories that scientists are really in cahoots with a global conspiracy to bring down capitalism, or scientists are in cahoots with, I don't know, who knows what — there are all kinds of weird conspiracy theories out there. 
These conspiracy theories, in some cases, can seem credible because scientists aren't actually telling the truth about their motivations and their values. If you leave a hole, if you leave a gap, if you don't tell people what's going on, it's easy for people to fill in that gap with their own imaginary whatever. 
My argument is, let’s be honest and open about our values, because I think when you do that what you discover is that actually you have good values. Most scientists I know have really good values, they want to make the world a better place, they want to help the economy by inventing technologies that generate jobs, they want to preserve the natural environment, they want to keep children safe from childhood diseases. These are all really meritorious things, so why hide them? Why not say, "I work on vaccine safety because I want children to be safe?" Or, "I work on climate change because I want your children to grow up in a world that is as beautiful and wonderful as the world we grew up in?" 
I have found that when I speak that way to people, they really like it! It moves them, and it makes me into a human being. We know from social scientists that people are more likely to accept information from a person they trust. 
So I think that we’ve made a mistake in refusing to talk about our values. I think it's good to talk about our values, and I'm not ashamed of my values. I think I have good values. And I know, because I have Christian friends, that many of my values are shared by my Christian friends. So if I can talk about those values, Christians who might otherwise be skeptical of a Jewish intellectual from the Northeast might say, "Oh, well, she’s Jewish, but we have the Ten Commandments, we actually agree about a lot of things."
You end up with a picture of science with no magic bullet to ensure scientific truth, but a powerful argument why we should trust scientists, due to the holistic picture you’ve painted. But you also at one point make a boiled-down argument by drawing an analogy with why we trust plumbers, which I thought was beautiful in its simplicity. Could you explain?
Part of it is an argument in favor of expertise. It's very trendy right now to disrespect experts, and to say experts are always getting things wrong. But in fact, most experts get things right. And the reason for that is they have specialized training. It’s not some magic trick. It’s that experts spend a lot of time learning how to do something. We all know that's true. 
Take plumbers, a mundane example out of everyday life we all know: we don't go to the dentist if we have a leak, we call a plumber. We do that because the plumber has specialized knowledge and skills that we don't have ourselves. 
So my argument is that scientists aren’t some special, canonized category of human beings, different from other kinds of experts. They’re simply the experts in our society who try to understand the natural world. So if I have a problem with my teeth, I will not go to a climate scientist. But if I have a question about the climate system, then I should be going to a climate scientist. It's the same with vaccinations or whatever else.
You go on to consider examples where some might claim that science failed. I'd like to talk specifically about the example of eugenics, because of its moral weight in the public imagination. You argue that it does not cast science into doubt but actually illustrates some of your arguments.
Eugenics is a really important example for two reasons. First, it was a horrible thing, and to the extent that scientists were involved in it, which they were, we can't sweep that under the rug. And second, climate change deniers love to use eugenics to try to discredit climate science. 
Michael Crichton in particular was a big advocate of the argument that scientists were wrong about eugenics, therefore we should not trust them about climate change. Now obviously that's a completely illogical argument: Just because some group of scientists a hundred years ago was wrong about X, that in no way proves that an entirely different group of scientists a hundred years later is wrong about Y. So it's an illogical argument, but it has a certain moral force, because it is a reminder that scientists can sometimes go down tracks that are not just wrong intellectually but also problematic morally. 
So I looked at that, and guess what: It turns out there was no consensus on eugenics. In fact, a really big argument took place in the 1920s about it, an argument in which people thought hard about the relationship between the science and its moral implications, and many of the people who objected were themselves scientists. Within the scientific community, there was a big, informed argument that was published in the pages of leading journals like Science and Nature.
One group of scientists I was able to identify as involved in objecting were socialists, particularly in the United Kingdom. There was a network of evolutionary biologists, people like J.D. Bernal and J.B.S. Haldane, and these guys pointed out the obvious class bias in a lot of eugenics programs. So this was a nice example of how values can actually work in a good way to identify a problem with a scientific claim. It fits with the "strong objectivity" argument: if you have a diverse group of people, in this case politically diverse, they may point out certain assumptions that other people are not noticing. That's exactly what Bernal, Haldane and others did.
Where does all this leave us in the end? What conclusions should people walk away with from your book?
I'd say the key thing to think about is not to demand the impossible. In “The Little Prince,” Antoine de Saint-Exupéry has this line, roughly translated, “We have to demand of each person only that which they can give.” So in life we know we wouldn't expect a disabled person to run a marathon, or we wouldn't expect our plumber to fix our teeth. 
So I guess my meta-argument is about having realistic expectations about science.  Some of the difficulty we have is because we’ve set science on a pedestal. We’ve created very unrealistic expectations and then when science fails to live up to those expectations either we feel betrayed or someone else says, "Aha! They’re not the gods they claimed to be!" 
If we have realistic expectations, we don't have to be disappointed when sometimes things don't work out quite right. And we can also be more flexible about understanding that, yes, science is a process and we learn new things, and that's a good thing. It's good that scientific knowledge develops and advances. We don't have to take it as a betrayal if we discover something is different than what we thought before. We can view it as the progress of knowledge.

PAUL ROSENBERG

Paul Rosenberg is a California-based writer/activist, senior editor for Random Lengths News, and a columnist for Al Jazeera English. Follow him on Twitter at @PaulHRosenberg