It’s been a tough year for science. The American Statistical Association just issued a statement scolding scientists for misusing statistical analysis. Scientists continued to fight over an evaluation of 100 psychological studies, most of which could not be reproduced. Critics have cast doubt on a widely believed psychological theory of human willpower.
So yes, science is fallible. Scientists are only human and science is not a synonym for truth. It’s a bumpy, meandering road that heads in that general direction.
That makes skepticism good, up to a point. Beyond that point lie nonsense and superstition. The earth really is round.
So how do you tell what to believe?
It's a very old question. But there's no need to go back to Plato. Let's just start in the early 1950s, when the Nobel Prize-winning chemist Irving Langmuir laid out a set of warning signs for identifying scientific ideas that might not conform to reality. He gave a handful of examples of what he called pathological science, including N-rays and mitogenic rays, neither of which exists despite having been observed and measured in dozens of peer-reviewed experiments.
Something similar may be happening now with a psychological phenomenon known as ego depletion. The theory holds that humans can store up limited supplies of self-control. In the seminal 1997 experiment that seemed to confirm this theory, students who had to eat radishes while forgoing a plate of cookies did worse on a subsequent task than students who were allowed to eat the cookies. Many more studies appeared to confirm the conclusion that willpower weakens as it’s used, like a tired muscle. But a new paper reports that recent attempts to replicate the evidence turned up no effect at all.
An article in Slate last week called this cause for alarm:
If something this well-established could fall apart, then what’s next? That’s not just worrying. It’s terrifying.
The situation with N-rays was pretty similar, according to Langmuir. Multiple experiments appeared not only to confirm their existence but also to break the rays down into different components whose optical parameters were measured with great precision.
In the 1920s, hundreds of papers were published on mitogenic rays, which scientists thought radiated from plants. Statistical analyses seemed to confirm that rays from onion roots would bend the growth of other nearby onion roots unless the two were separated by glass, which was thought to block the rays. It took years for scientists to realize that these phenomena did not exist.
But scientists in physics and chemistry have learned from their mistakes. Langmuir saw a pattern in suspect science, which he reduced to six symptoms. One of the most relevant pertains to statistics: findings that are later discredited tend to involve subtle effects that are hard to distinguish from random noise.
Modern statistical tools can tease out subtle phenomena, but if not used carefully, they can also fool people into seeing patterns and trends that aren’t there.
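Here’s a quick way to see how the fooling happens. The sketch below is mine, not from any of the studies discussed, and the sample sizes are made up: it simulates thousands of small two-group experiments in which the true effect is exactly zero, then counts how often a standard t-test declares the difference “significant” anyway.

```python
# Simulate small "experiments" with no real effect and count how often
# a t-test reports statistical significance anyway. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 20     # small samples, typical of many lab studies
alpha = 0.05         # the conventional significance threshold

false_positives = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)    # pure noise
    treatment = rng.normal(0.0, 1.0, n_per_group)  # same distribution: no effect
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        false_positives += 1

print(f"'Significant' findings from pure noise: {false_positives / n_experiments:.1%}")
```

About one experiment in 20 clears the bar on luck alone. Let researchers measure several outcomes and report only the ones that cross the threshold, and the false-alarm rate climbs much higher.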
The American Statistical Association came out this week with a statement outlining ways that scientists were using statistical tools incorrectly. The association's director, Ron Wasserstein, said the statement was prompted by concerns that misuse of statistics was contributing to a proliferation of questionable results, especially in the social sciences.
It was, however, the psychology community that recognized there might be a problem. In 2010, a paper claiming evidence for extrasensory perception got into a respected journal. Alarmed psychologists wondered whether other unlikely results had squeezed through the filters. Sure enough, a controversial paper published last summer claimed that of 100 psychology experiments, only 39 could be replicated. That figure has been disputed, ironically, on the grounds that the replicating team made statistical errors.
It’s not that social scientists are bad at math. They’re not. But statistical analysis can be undermined by wishful thinking and subtler forms of self-delusion. Physical science has been around longer and has had more time to learn from its past mistakes.
It’s also harder for social scientists to recognize another of Langmuir’s symptoms of pathology: “Fantastic theories contrary to experience.” This is related to the mantra that extraordinary claims require extraordinary evidence, which apparently originated with the 18th-century philosopher David Hume but was articulated succinctly by the 20th-century celebrity astronomer Carl Sagan.
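Bayes’ rule puts rough numbers on the mantra. In the toy calculation below, every figure is an assumption chosen for illustration, not anything from the article; the point is that a hypothesis starting out wildly implausible stays implausible even after a study reports a statistically significant effect.

```python
# A Bayesian sketch of "extraordinary claims require extraordinary evidence."
# All three numbers are illustrative assumptions.
prior = 0.001   # assumed chance a fantastic hypothesis (say, ESP) is real
power = 0.80    # assumed chance a study detects the effect if it is real
alpha = 0.05    # chance of a false positive if it is not real

# Bayes' rule: probability the hypothesis is true given one significant study
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"Belief after one significant study: {posterior:.1%}")  # about 1.6%
```

One “significant” study barely moves the needle; it would take a string of independent confirmations to overcome so unfavorable a starting point. That is what extraordinary evidence means in practice.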
Physicists today have broad, well-tested theoretical frameworks, and if a claim falls outside them, they give it a closer look before believing it. That gives them an efficient means of expelling bunk.
For example, several years ago, physicists reported that a particle called a neutrino might have moved faster than the speed of light. Since this would violate Einstein’s theory of relativity, the community was skeptical despite mathematical calculations showing high statistical significance. The experimenters took a closer look and found a loose cable. Fixing it showed the neutrinos followed the laws of physics after all.
In the late 1980s, chemists claimed to have found a groundbreaking new source of energy known as cold fusion. Physicists around the world immediately tried to replicate it, and some got positive results. It took a while for the mainstream to agree it didn’t exist, but when the stakes are high enough, things eventually get sorted out.
Last month, scientists announced that they had confirmed a prediction of Einstein’s theory by detecting gravitational waves, and that result has been more readily accepted. Climate change, while still uncertain in some of the details, is widely accepted because it’s consistent with well-known physics and chemistry, not just because of some published papers. Carbon dioxide, oxygen, nitrogen and other gases interact with sunlight in well-defined and well-tested ways. We know how much carbon dioxide has increased in the atmosphere, and we know how that slows the rate at which the sun’s energy, re-radiated as heat, escapes back to space.
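That last step can even be sketched on the back of an envelope. The calculation below uses the standard logarithmic approximation for carbon dioxide’s radiative forcing (Myhre et al., 1998); the baseline and current concentrations are round numbers I’ve assumed for illustration, roughly preindustrial and recent levels.

```python
# Back-of-the-envelope estimate of extra heat trapped by the rise in CO2,
# using the Myhre et al. (1998) logarithmic approximation. Illustrative only.
import math

c_baseline = 280.0   # assumed preindustrial CO2, parts per million
c_now = 400.0        # assumed recent level, parts per million

forcing = 5.35 * math.log(c_now / c_baseline)   # watts per square meter
print(f"Radiative forcing from the CO2 rise: {forcing:.1f} W/m^2")  # ~1.9
```

Roughly two extra watts trapped over every square meter of the planet, a figure that falls straight out of well-tested radiation physics.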
Social science doesn’t have that kind of framework. Theories have limited domains. ESP sets off alarm bells because it would require some extraordinary physical mechanism. Ego depletion’s extraordinariness is harder to gauge.
The psychologist George Loewenstein, who has also written on the reproducibility problem, says the recent attention is already catalyzing better practices. That was the purpose of Langmuir’s warning more than half a century ago. He was not trying to flag cheating, but to explain instances in which scientists were “led astray by subjective effects, wishful thinking or threshold interactions.” Loewenstein tells his students to consider not just how to look for evidence that an idea is right, but how they might discover it’s wrong. That’s a critical thinking skill we all can use.