Monday, January 28, 2013

More on Nate Silver's Book

How He Got It Right

January 10, 2013

Andrew Hacker

The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t

by Nate Silver

Penguin, 534 pp., $27.95



The Physics of Wall Street: A Brief History of Predicting the Unpredictable

by James Owen Weatherall

Houghton Mifflin Harcourt, 286 pp., $27.00



Antifragile: Things That Gain from Disorder

by Nassim Nicholas Taleb

Random House, 519 pp., $30.00





Randy Stewart/CC/Uri Fintzy/JTA/www.jta.org



Statistician Nate Silver, who correctly predicted the winner of all fifty states and the District of Columbia in the 2012 presidential election

1.

Nate Silver called every state correctly in the last presidential race, and was wrong about only one in 2008. In 2012 he predicted Obama’s share of the popular vote to within one tenth of a percentage point of the actual figure. His powers of prediction seemed uncanny. In his early and sustained prediction of an Obama victory, he was ahead of most polling organizations and my fellow political scientists. But buyers of his book, The Signal and the Noise, now a deserved best seller, may be in for something of a surprise. There’s only a short chapter on predicting elections, briefer than the ones on baseball, weather, and chess. In fact, he’s written a serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver’s coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism.



We learn that while more statistics per capita are collected for baseball than perhaps any other human activity, seasoned scouts still surpass algorithms in predicting the performance of players. Since poker depends as much on luck as on skill, professionals make a living by having well-heeled amateurs at the table. The lesson from a long chapter on earthquakes is that while we’re good at measuring them, they’re “not really predictable at all.” Much the same caution holds for economists, whose forecasts of next year’s growth are seldom correct. Their models may be elegant, Silver says, but “their raw data isn’t much good.”



The most striking success has been in forecasting where hurricanes will hit. Over the last twenty-five years, the ability to pinpoint landfalls has increased twelvefold. At the same time, Silver says, newscasts purposely overpredict rain, since they know their listeners will be grateful when they find they don’t need umbrellas. While he doesn’t dismiss “highly mathematical and data-driven techniques,” he cautions climate modelers not to give out precise changes in temperature and ocean levels. He tells of attending a conference on terrorism at which a Coca-Cola marketing executive and a dating service consultant were asked for hints on how to identify suicide bombers.



Much is made of ours being an era of Big Data. Silver passes on an estimate from IBM that 2.5 quintillion new bytes of data (that’s seventeen zeros; a byte is a sequence of eight binary digits, enough to encode a single character of text) are being created every day, representing everything from the brand of toothpaste you bought yesterday to your location when you called a friend this morning. Such information can be put together to fashion personal profiles, which Amazon and Google are already doing in order to target advertisements more accurately. Obama’s tech-savvy workers did something similar, notably in identifying voters who needed extra prompting to go to the polls.[1]



Those daily quintillions are what led to Silver’s title. “Signals” are facts we want and need, such as those that will help us detect incipient shoe bombers. “Noise” is everything else, usually extraneous information that impedes or misleads our search for signals. Silver makes the failure to forecast September 11 a telling example.



But first, The Signal and the Noise is in large part a homage to Thomas Bayes (1701–1761), a statistical scholar long neglected, especially by the university departments concerned with statistical methods. The Bayesian approach to probability is essentially simple: start by approximating the odds of something happening, then revise that figure as new findings come in. So it’s wholly empirical, rather than building edifices of equations.[2] Silver has a diverting example on whether your spouse may be cheating. You might start with an out-of-the-air 4 percent likelihood. But a strange undergarment could raise it to 50 percent, after which the game’s afoot. This has importance, Silver suggests, because officials charged with anticipating terrorist acts had not conjured a Bayesian “prior” about the possible use of airplanes.
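
To make the arithmetic concrete: Bayes’s rule turns a prior probability into an updated (“posterior”) one, using the likelihood of the evidence under each hypothesis. Here is a minimal sketch in Python; the two likelihood figures in each call are illustrative assumptions chosen to reproduce the jumps Hacker describes, not numbers from Silver’s book.

# Bayes's rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# The cheating-spouse example: a 4 percent prior plus an assumed
# 24-to-1 likelihood ratio for the strange undergarment (0.48 vs.
# 0.02) yields the 50 percent mentioned above.
print(posterior(0.04, 0.48, 0.02))   # 0.5

# The same machinery with weaker assumed likelihoods (0.42 vs. 0.10)
# moves a 4 percent prior on an airplane attack to roughly the
# 15 percent figure discussed below.
print(posterior(0.04, 0.42, 0.10))   # ~0.149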



Silver is prepared to say, “We had some reason to think that an attack on the scale of September 11 was possible.” His Bayesian “prior” is that airplanes were targeted in the cases of an Air India flight in 1985 and Pan Am’s over Lockerbie three years later, albeit with secreted bombs, as well as in later attempts that did not succeed. At the least, a chart with, say, a 4 percent likelihood of an attack should have been on someone’s wall. Granted, what comes in as intelligence is largely “noise.” (Most intercepted conversations are about plans for dinner.) Still, in the summer of 2001, staff members at a Minnesota flight school told FBI agents of a French-born student of Moroccan descent who wanted to learn to pilot a Boeing 747 in midair, skipping lessons on taking off and landing. One instructor even added that a fuel-laden plane could make a horrific weapon. Some FBI agents took the threat of Zacarias Moussaoui seriously, but several requests for search and wiretap warrants were denied. Taken together, these “signals” should have raised the probability of an attack using an airplane, say, to 15 percent, prompting visits to other flight schools.



Silver’s “mathematics of terrorism” may be stretching the odds a bit. Many of those daily quintillion digits flow into the FBI and CIA, not to mention the departments of State and Defense. To follow all of them up is patently impossible, with only a small fraction getting even a cursory second look. It’s bemusing that two recent revelations of marital infidelity—Eliot Spitzer and David Petraeus—arose from inquiries having other purposes. Plus there’s the question of how many investigators and investigations we want to have, as more searching will inevitably touch more of us.



Yet in the end, Silver’s claims are quite modest. Indeed, he could well have phrased his subtitle “why most predictions fail.” It’s simply because “the volume of information is increasing exponentially.”



There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.

I’d only add that it’s not just what sciences are finding that makes the world seem more complex. Shifts in the structure of occupations, abetted by more college degrees, have increased the number of positions deemed to be professional. If entrepreneurs tend to be assessed by how much money they amass, professionals are rated by the presumed complexity of what they know and do. So to retain or raise an occupation’s status, tasks are made more mysterious, usually by taking what’s really simple and adding obfuscating layers. The very sciences Silver cites—especially those of a social sort—rank among the culprits.



2.

Nate Silver is known not so much for predicting who will win elections as for how close he comes to the actual results. His final 2012 forecast gave Obama 50.8 percent of the popular vote, almost identical to his eventual figure of 50.9 percent. This kind of precision is striking. A more typical projection may warn that it has a three-point margin of error either way, meaning a candidate accorded 52 percent could end up anywhere between 49 percent and 55 percent. Or, fearful of making a wrong call, as in 2000, polling agencies will claim that the outcome is too close to foretell. Still, it’s too early to hail a new statistical science. As can be seen in Table A, Rasmussen’s and Gallup’s final polls predicted that Romney would be the winner, while the Boston Herald gave its state’s senate race to Scott Brown.



In fact, I am impressed when polls come even close. To start, what’s needed is a reliable cross-section of people who will actually vote. In 2008, only 62 percent of eligible citizens cast ballots. In 2012, even fewer did. Not surprisingly, some people who seldom or never vote will still claim they’ll be turning out. Testing them (“Can you tell me where your polling place is?”) can be time-consuming and expensive. And there are those who don’t report their real choices. But much more vexing is finding people willing to cooperate. According to a recent Pew Research Center report, as recently as 1997 about 90 percent of a desired sample could be reached in person or at home by telephone, and 36 percent of them were amenable to an interview.



Today, with fewer people at home or picking up calls, and increasing refusals from those who do, the rates are down to 62 percent and 9 percent.[3] So pollsters must create a model of an electorate from the slender slice willing to give them time. Yet despite these hurdles, the Columbus Dispatch called Ohio’s result perfectly, using 1,501 respondents from the state’s 5,362,236 voters (the figures available on December 7).
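
How can 1,501 respondents speak for more than five million voters? The standard sampling arithmetic, sketched below in Python, shows that the margin of error depends on the size of the sample, not on the size of the population being sampled; the figures used are the ones cited above.

import math

# 95 percent margin of error for an estimated proportion p from a
# simple random sample of size n.
def margin_of_error(p, n):
    return 1.96 * math.sqrt(p * (1 - p) / n)

# The familiar "three points either way" corresponds to roughly a
# thousand respondents at p = 0.5.
print(margin_of_error(0.5, 1067))   # ~0.030, i.e., three points

# The Columbus Dispatch's 1,501 Ohio respondents give about
# plus-or-minus 2.5 points; the state's 5,362,236 voters never
# enter the formula.
print(margin_of_error(0.5, 1501))   # ~0.025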



Election polls are unique in at least two ways. First, they aim to tell us about a concrete act—a cast ballot—to be performed in an impending period of time. (Each year, more of us vote early.) It’s hard to think of other surveys that try to anticipate what a huge pool of adults will do. Second, how well a poll did becomes known once the votes are counted. So we find Nate Silver got it right and Rasmussen and Gallup didn’t. But a poll’s accuracy is only a historical curiosity after the returns are in. That is, it tells us nothing lasting, only how one foray into forecasting fared during the months when many of us were wondering how events would turn out. Other polls tell us about something less fleeting: the opinions people hold on public issues and personal matters.



But with polls on opinions—military spending, say, or the provision of contraceptives—there’s seldom a subsequent vote that can validate findings. (To an extent, this is possible when there are statewide votes on issues like affirmative action and gay marriage.) A recourse is to compare a series of surveys that ask similar questions.



Yet as Table B shows, responses on abortion have been quite varied. What could be called the “pro-choice” side ranges across twenty-three percentage points. Certainly, how the question is phrased can skew the answers. CBS’s 42 percent agreed that abortion should be “generally available,” while Gallup’s 25 percent supported the view that abortion should be “always legal,” and The Washington Post’s 19 percent held that abortion should be “legal in all cases.” The short answer is that, apart from the staunchly anti-abortion side, polling can’t give us specific figures on where most adults line up on abortion, or, for that matter, on any other issue.



What goes on in the American mind remains a mystery that sampling is unlikely to unlock. In my estimate, the 65,075,450 people who chose Barack Obama and Joseph Biden over Mitt Romney and Paul Ryan were mainly expressing a moral mood, a feeling about the kind of country they want. I’d like to see Nate Silver using his statistical talents to explore such surmises.



We’ve been informed that 55 percent of women supported Obama, rising to 67 percent of those who are single, divorced, or widowed. Obama also secured 55 percent among holders of postgraduate degrees, and 69 percent of Jewish voters. But how can we know? Voting forms don’t ask for marital status or religion. The answer is that these and similar figures were extrapolated from a national sample of 26,563 voters, approached by an organization called Edison Research just after they cast their ballots or telephoned later the same day.



The figures I’ve cited and others on the list look plausible to me. Still, there’s no way to check them; moreover, the Edison survey is the only post-election one that was done. So here’s a caveat: Jews are so small a fraction of the electorate that there were only 241 in the sample. Thus the abovementioned 69 percent comes with a seven-point margin of error either way, a caveat not noted in most media accounts.
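
The seven-point figure can be roughly reconstructed. Simple-random-sample arithmetic on 241 respondents at 69 percent gives about 5.8 points; exit polls, which sample by precinct clusters rather than by individuals, carry an additional “design effect” that inflates the variance. The design-effect value below is an assumption chosen for illustration, not one reported by Edison Research.

import math

# A clustered sample with design effect deff behaves like a simple
# random sample of size n / deff.
def exit_poll_moe(p, n, deff=1.0):
    return 1.96 * math.sqrt(deff * p * (1 - p) / n)

print(exit_poll_moe(0.69, 241))        # ~0.058: the naive figure
print(exit_poll_moe(0.69, 241, 1.44))  # ~0.070: roughly seven points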



Nate Silver doesn’t conduct his own polls. Rather, he collects a host of state and national reports, and enters them in a database of his own devising. Combining samples from varied surveys gives him a much larger pool of respondents and the potential for a more reliable profile. Of course, Silver doesn’t simply crunch whatever comes in. He factors in past predictions and looks for slipshod work, as when the Florida Times-Union on election eve gave the state to Romney, based on 681 interviews. He pays special attention to demographic shifts, such as a surge in registrations with Hispanic names. His model also draws on the Cook Political Report, which actually meets informally with candidates to assess their electoral appeal. In September, Silver set the odds of Obama’s winning at 85 percent, enough to withstand a dismal performance in the first debate, which hadn’t yet occurred.
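
The core of that combining step can be sketched as an inverse-variance weighted average, in which polls with larger samples count for more. This is only an illustration of the pooling idea, not Silver’s actual model, which, as described above, also adjusts for pollsters’ records, demographic shifts, and expert ratings; the three surveys in the example are hypothetical.

# A minimal poll-pooling sketch: weight each poll's reported share
# by the inverse of its sampling variance, p(1-p)/n.
def combine_polls(polls):
    # polls: list of (share, sample_size) pairs for one candidate
    weights = [n / (p * (1 - p)) for p, n in polls]
    return sum(w * p for w, (p, _) in zip(weights, polls)) / sum(weights)

# Three hypothetical surveys of one candidate's share:
print(combine_polls([(0.52, 800), (0.50, 1200), (0.49, 600)]))   # ~0.504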



[1] See Michael Scherer, “Inside the Secret World of the Data Crunchers Who Helped Obama Win,” Time, November 7, 2012, and Nate Silver, “In Silicon Valley, Technology Talent Gap Threatens GOP Campaigns,” The New York Times, November 28, 2012.

[2] See Sharon Bertsch McGrayne’s superb The Theory That Would Not Die (Yale University Press, 2011).

[3] “Assessing the Representativeness of Public Opinion Surveys,” The Pew Research Center for the People and the Press, May 15, 2012.


