Monday, January 28, 2013

New Glasses

Maybe I need new glasses. The stairs in our house sometimes look like an escalator and sometimes I have trouble finding the front door. I raid the ice box and take a bite of a squash before I realize it isn't a pear. Three-week-old leftovers are a health hazard in our ice box. This morning I fumble with the thermostat and turn on the air-conditioner rather than the heat. The TV remote is a lost cause. Sheesh! I'm gonna get me some bold new glasses like Hillary Clinton and you'll see me coming a mile off.




Kimberley Stewart and Sheila Bloom like this.

Don Waller: Squash for breakfast? You need coke bottle glasses.

Deborah Davis Weeks: Please, none like hers!

Jody Britt: Perhaps someone slipped you a pair of James Thurber's.

Fred Hudson: If I could write like Thurber I'd proudly wear Thurber glasses.

Southern Discomfort

Southern Discomfort by George Packer

January 21, 2013

The New Year’s Day vote in Congress that brought a temporary truce to the fiscal wars showed the Republicans to be far more divided than the Democrats, and the division broke along regional lines. House Republicans from the Far West and from the Northeast favored the Senate’s compromise bill by large margins, and Midwesterners were split; but in the South, Republican opposition was overwhelming, 81–12, accounting for more than half of the total Republican “no” votes. In other words, Republicans outside the South have begun to turn pink, following the political tendencies of the country as a whole, but Southern Republicans, who dominate the Party and its congressional leadership, remain deep scarlet. These numbers reveal something more than the character of today’s Republican Party; a larger historical shift is under way.



For a century after losing the Civil War, the South was America’s own colonial backwater—“not quite a nation within a nation, but the next thing to it,” W. J. Cash wrote in his classic 1941 study, “The Mind of the South.” From Tyler, Texas, to Roanoke, Virginia, Southern places felt unlike the rest of the country. The region was an American underbelly in the semi-tropical heat; the manners were softer, the violence swifter, the commerce slower, the thinking narrower, the past closer. It was called the Solid South, and it partly made up for economic weakness with the political strength that came from having a lock on the Democratic Party, which was led by shrewd septuagenarian committee chairmen.



The price was that the Democratic Party remained an anti-modern minority until the New Deal. As late as 1950, there were just three Republicans among the South’s hundred and nine congressmen, and none in the Senate; a decade later, the numbers had barely moved. In 1964, Lyndon Johnson said that breaking the Southern filibuster and passing civil-rights legislation would cost Democrats the South for a generation (he was too optimistic), but the region’s conservatism had already begun to push it toward the Republican Party. And, as the South became more Republican, it became more like the rest of America. Following the upheavals of the civil-rights years, the New South was born: the South of air-conditioned subdivisions, suburban office parks, and Walmart. Modernization was paved with federal dollars, in the form of highways, military bases, space centers, and tax breaks for oil drilling.



At the same time, the Southern way of life began to be embraced around the country until, in a sense, it came to stand for the “real America”: country music and Lynyrd Skynyrd, barbecue and NASCAR, political conservatism, God and guns, the code of masculinity, militarization, hostility to unions, and suspicion of government authority, especially in Washington, D.C. (despite its largesse). In 1978, the Dallas Cowboys laid claim to the title of “America’s team”—something the San Francisco 49ers never would have attempted. In Palo Alto, of all places, the cool way to express rebellion in your high-school yearbook was with a Confederate flag. That same year, the tax revolt began, in California.





The Southernization of American life was an expression of the great turn away from the centralized liberalism that had governed the country from the Presidencies of F.D.R. to Nixon. Every President elected between 1976 and 2004 was, by birth or by choice, a Southerner, except Ronald Reagan, who enjoyed a sort of honorary status. (When he began the 1980 campaign in Philadelphia, Mississippi, scene of the murder, in 1964, of three civil-rights workers, many Southerners heard it as a dog whistle.) A Southern accent, once thought quaint or even backward, became an emblem of American authenticity, a political trump card. It was a truism that no Democrat could win the White House unless he spoke with a drawl.



Now the South is becoming isolated again. Every demographic and political trend that helped to reëlect Barack Obama runs counter to the region’s self-definition: the emergence of a younger, more diverse, more secular electorate, with a libertarian bias on social issues and immigration; the decline of the exurban life style, following the housing bust; the class politics, anathema to pro-business Southerners, that rose with the recession; the end of America’s protracted wars, with cuts in military spending bound to come. The Solid South speaks less and less for America and more and more for itself alone.



Solidity has always been the South’s strength, and its weakness. The same Southern lock that once held the Democratic Party now divides the Republican Party from the socially liberal, fiscally moderate tendencies of the rest of America. The Southern bloc in the House majority can still prevent the President from enjoying any major legislative achievements, but it has no chance of enacting an agenda, and it’s unlikely to produce a nationally popular figure.



As its political power declines, the South might occupy a place like Scotland’s in the United Kingdom, as a cultural draw for the rest of the country, with a hint of the theme park. Country music and NASCAR remain huge. Alabama teams have won the past four college football titles. After the Crimson Tide’s big win over Notre Dame on January 7th, a Web site called Real Southern Men explained the significance in terms of regional defiance: “Football matters here, because it is symbolic of the fight we all fight. Winning matters here, because it is symbolic of the victories we all seek. Trophies matter here, because they are symbolic of the respect we deserve but so rarely receive.” That defiance is a sure sign, like Governor Rick Perry’s loose talk of Texas seceding, that Southernization has run its course.



Northern liberals should not be too quick to cheer, though. At the end of “The Mind of the South,” Cash has this description of “the South at its best”: “proud, brave, honorable by its lights, courteous, personally generous, loyal.” These remain qualities that the rest of the country needs and often calls on. The South’s vices—“violence, intolerance, aversion and suspicion toward new ideas”—grow particularly acute during periods when it is marginalized and left behind. An estrangement between the South and the rest of the country would bring out the worst in both—dangerous insularity in the first, smug self-deception in the second.



Southern political passions have always been rooted in sometimes extreme ideas of morality, which has meant, in recent years, abortion and school prayer. But there is a largely forgotten Southern history, beyond the well-known heroics of the civil-rights movement, of struggle against poverty and injustice, led by writers, preachers, farmers, rabble-rousers, and even politicians, speaking a rich language of indignation. The region is not entirely defined by Jim DeMint, Sam Walton, and the Tide’s A J McCarron. It would be better for America as well as for the South if Southerners rediscovered their hidden past and took up the painful task of refashioning an identity that no longer inspires their countrymen. ♦





Read more: http://www.newyorker.com/talk/comment/2013/01/21/130121taco_talk_packer#ixzz2JJBXkpxf

Dumb Southerners?

by Garry Wills



George Packer’s recent New Yorker comments on the South made me sort out my own complicated feelings about the region. Both sides of my family are from the South: my mother’s from Georgia, my father’s from Virginia. Though my parents left Atlanta soon after I was born there, we often visited southern relatives in Atlanta, Louisville, and Birmingham. I preferred those who had stayed in the South to those who moved north. My Irish grandmother in Atlanta was a warm-hearted Catholic, while my English grandmother in Chicago was a pinched Christian Scientist always correcting her family. But even apart from the contrast in grandmothers, I always liked the South, though my northern accent made me an outsider there as a child (the family “Yankee”).



One reason I like the South is that I am conservative by temperament—multa tenens antiqua, as Ennius put it, “tenacious of antiquity.” A sense of the past helps explain why America’s southern writers were to the rest of America, in the twentieth century, what Irish writers were to England. The English had Oscar Wilde, William Butler Yeats, Sean O’Casey, Bernard Shaw, James Joyce and Samuel Beckett. We (whose relevant region is larger) had Flannery O’Connor, William Faulkner, Thomas Wolfe, Richard Wright, Eudora Welty, Ralph Ellison, Robert Penn Warren, Truman Capote, Harper Lee, John Crowe Ransom, Erskine Caldwell, Andrew Lytle, and Carson McCullers.



The South escaped one of the worst character traits of America, its sappy optimism, its weakness of positive thinking. The North puffed confidently into the future, Panglossian about progress, always bound to win. But the South had lost. It knew there was an America that could be defeated. That made it capable of facing tragedy, as many in America were not. This improved its literature, but impoverished other things. Yet poverty did not make the South helpless. In fact, straitened circumstances made it readier to grab what it could get. In its long bargain with the Democratic party, for instance, it not only fended off attacks on its Jim Crow remnant of the Old Confederacy, but gamed the big government system through canny old codgers in Washington—the chairmen of the major congressional committees, who sluiced needed assistance to the South during the Great Depression.



Under the tattered robes of Miss Havisham were hidden the preying hands of the Artful Dodger. Southerners were not really trapped in the past, since they were always scheming to get out of the trap. They were defeated but not dumb. With dreams of an agrarian society, they might denounce the industrial north, but they got the funds to bring electricity to large parts of the South from the government’s Tennessee Valley Authority. They wanted and got government-funded port facilities, oil subsidies in Louisiana, highways and airports and military bases.



But the current South is willing to cut off its own nose to show contempt for the government. Governor Rick Scott of Florida turned down more than $2 billion in federal funds for a high-speed rail system in Florida that would have created jobs and millions of dollars in revenues, just to show he was independent of the hated federal government. In this mood, his forebears would have turned down TVA. People across the South are going even farther than Scott, begging to secede again from the Union. Packer notes that the tea is cooling in parties across the rest of the nation, but seems to be fermenting to a more toxic brew in the South. No one needs better health care more than the South, but it fights it off so long as Obama is offering it, its governors turning down funds for Medicaid. This is a region that rejects sex education, though its rate of teenage pregnancies is double and in places triple that of New England. It fights federal help with education, preferring to inoculate its children against science by denying evolution.



No part of the country will suffer the effects of global warming earlier or with more devastation than the South, yet its politicians resist measures to curb carbon emissions and deny the very existence of climate change—sending it to the dungeon with evolution and biblical errancy. One doesn’t need much imagination to see the South with lowered or swollen waters in its rivers and ports, raging kudzu, swarming mosquitos, and record-breaking high temperatures, still telling itself that global-warming talk is just a liberal conspiracy. But it just digs deeper in denial. The South has decided to be defeated and dumb.



Humans should always cling to what is good about their heritage, but that depends on being able to separate what is good from what is bad. It is noble to oppose mindless change, so long as that does not commit you to rejecting change itself. The South defeats its own cause when it cannot discriminate between the good and the evil in its past, or pretends that the latter does not linger on into the present: Some in the South deny that the legacy of slavery exists at all in our time. The best South, exemplified by the writers listed above, never lost sight of that fact. Where are the writers of that stature today in the Tea Party South? I was made aware of the odd mix of gain and loss when I went back to Atlanta to see my beloved grandmother. She told me not to hold change between my lips while groping for a pocket to put it in—“That might have been in a nigger’s mouth.” Once, when she took me to Mass, she walked out of the church when a black priest came out to celebrate. I wondered why, since she would sit and eat with a black woman who helped her with housework. “It is the dignity—I would not let him take the Lord in his hands.”



Tradition dies hard, hardest among those who cannot admit to the toll it has taken on them. That is why the worst aspects of the South are resurfacing under Obama’s presidency. It is the dignity. That a black should have not merely rights but prominence, authority, and even awe—that is what many Southerners cannot stomach. They would let him ride on the bus, or get into Ivy League schools. But he must be kept from the altar; he cannot perform the secular equivalent of taking the Lord in his hands. It is the dignity.



This is the thing that makes the South the distillation point for all the fugitive extremisms of our time, the heart of Say-No Republicanism, the home of lost causes and nostalgic lunacy. It is as if the whole continent were tipped upward, so that the scattered crazinesses might slide down to the bottom. The South has often been defeated. Now it is defeating itself.



January 21, 2013, 3:26 p.m.



More on Nate Silver's Book

How He Got It Right

by Andrew Hacker

January 10, 2013

The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t

by Nate Silver

Penguin, 534 pp., $27.95



The Physics of Wall Street: A Brief History of Predicting the Unpredictable

by James Owen Weatherall

Houghton Mifflin Harcourt, 286 pp., $27.00



Antifragile: Things That Gain from Disorder

by Nassim Nicholas Taleb

Random House, 519 pp., $30.00








Statistician Nate Silver, who correctly predicted the winner of all fifty states and the District of Columbia in the 2012 presidential election

1.

Nate Silver called every state correctly in the last presidential race, and was wrong about only one in 2008. In 2012 he predicted Obama’s total of the popular vote within one tenth of a percent of the actual figure. His powers of prediction seemed uncanny. In his early and sustained prediction of an Obama victory, he was ahead of most polling organizations and my fellow political scientists. But buyers of his book, The Signal and the Noise, now a deserved best seller, may be in for something of a surprise. There’s only a short chapter on predicting elections, briefer than ones on baseball, weather, and chess. In fact, he’s written a serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver’s coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism.



We learn that while more statistics per capita are collected for baseball than perhaps any other human activity, seasoned scouts still surpass algorithms in predicting the performance of players. Since poker depends as much on luck as on skill, professionals make their living off the well-heeled amateurs at the table. The lesson from a long chapter on earthquakes is that while we’re good at measuring them, they’re “not really predictable at all.” Much the same caution holds for economists, whose forecasts of next year’s growth are seldom correct. Their models may be elegant, Silver says, but “their raw data isn’t much good.”



The most striking success has been in forecasting where hurricanes will hit. Over the last twenty-five years, the ability to pinpoint landfalls has increased twelvefold. At the same time, Silver says, newscasts purposely overpredict rain, since they know their listeners will be grateful when they find they don’t need umbrellas. While he doesn’t dismiss “highly mathematical and data-driven techniques,” he cautions climate modelers not to give out precise changes in temperature and ocean levels. He tells of attending a conference on terrorism at which a Coca-Cola marketing executive and a dating service consultant were asked for hints on how to identify suicide bombers.



Much is made of ours being an era of Big Data. Silver passes on an estimate from IBM that 2.5 quintillion (that’s seventeen zeros) new bytes (sequences of eight binary digits that each encode a single character of text in a computer) of data are being created every day, representing everything from the brand of toothpaste you bought yesterday to your location when you called a friend this morning. Such information can be put together to fashion personal profiles, which Amazon and Google are already doing in order to target advertisements more accurately. Obama’s tech-savvy workers did something similar, notably in identifying voters who needed extra prompting to go to the polls.1



Those daily quintillions are what led to Silver’s title. “Signals” are facts we want and need, such as those that will help us detect incipient shoe bombers. “Noise” is everything else, usually extraneous information that impedes or misleads our search for signals. Silver makes the failure to forecast September 11 a telling example.



But first, The Signal and the Noise is in large part a homage to Thomas Bayes (1701–1761), a long-neglected statistical scholar, especially by the university departments concerned with statistical methods. The Bayesian approach to probability is essentially simple: start by approximating the odds of something happening, then alter that figure as more findings come in. So it’s wholly empirical, rather than building edifices of equations.2 Silver has a diverting example on whether your spouse may be cheating. You might start with an out-of-the-air 4 percent likelihood. But a strange undergarment could raise it to 50 percent, after which the game’s afoot. This has importance, Silver suggests, because officials charged with anticipating terrorist acts had not conjured a Bayesian “prior” about the possible use of airplanes.
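To make the mechanics concrete, here is a minimal sketch of a single Bayesian update in Python. The 4 percent prior comes from Silver's cheating-spouse example above; the two likelihoods (how probable the strange undergarment would be if the spouse were, or were not, cheating) are illustrative numbers of my own, not figures from the book.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Silver's out-of-the-air 4 percent prior that a spouse is cheating.
prior = 0.04

# Hypothetical likelihoods (my own illustrative numbers, not Silver's): the
# strange undergarment is far likelier if the spouse is cheating than if not.
posterior = bayes_update(prior, p_evidence_if_true=0.50, p_evidence_if_false=0.02)
print(round(posterior, 2))  # ~0.51, roughly the 50 percent figure Silver arrives at
```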



Silver is prepared to say, “We had some reason to think that an attack on the scale of September 11 was possible.” His Bayesian “prior” is that airplanes were targeted in the cases of an Air India flight in 1985 and Pan Am’s over Lockerbie three years later, albeit using secreted bombs, as well as in later attempts that didn’t succeed. At the least, a chart with, say, a 4 percent likelihood of an attack should have been on someone’s wall. Granted, what comes in as intelligence is largely “noise.” (Most intercepted conversations are about plans for dinner.) Still, in the summer of 2001, staff members at a Minnesota flight school told FBI agents of a French-born student of Moroccan descent who wanted to learn to pilot a Boeing 747 in midair, skipping lessons on taking off and landing. Some FBI agents took the threat of Zacarias Moussaoui seriously, but several requests for search and wiretap warrants were denied. In fact, an instructor added that a fuel-laden plane could make a horrific weapon. At the least, these “signals” should have raised the probability of an attack using an airplane, say, to 15 percent, prompting visits to other flight schools.



Silver’s “mathematics of terrorism” may be stretching the odds a bit. Many of those daily quintillion digits flow into the FBI and CIA, not to mention the departments of State and Defense. To follow all of them up is patently impossible, with only a small fraction getting even a cursory second look. It’s bemusing that two recent revelations of marital infidelity—Eliot Spitzer and David Petraeus—arose from inquiries having other purposes. Plus there’s the question of how many investigators and investigations we want to have, as more searching will inevitably touch more of us.



Yet in the end, Silver’s claims are quite modest. Indeed, he could have well phrased his subtitle “why most predictions fail.” It’s simply because “the volume of information is increasing exponentially.”



There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.

I’d only add that it’s not just what sciences are finding that makes the world seem more complex. Shifts in the structure of occupations, abetted by more college degrees, have increased the number of positions deemed to be professional. If entrepreneurs tend to be assessed by how much money they amass, professionals are rated by the presumed complexity of what they know and do. So to retain or raise an occupation’s status, tasks are made more mysterious, usually by taking what’s really simple and adding obfuscating layers. The very sciences Silver cites—especially those of a social sort—rank among the culprits.



2.

Nate Silver is known not so much for predicting who will win elections, but for how close he comes to the actual results. His final 2012 forecast gave Obama 50.8 percent of the popular vote, almost identical with his eventual figure of 50.9 percent. This kind of precision is striking. A more typical projection may warn that it has a three-point margin of error either way, meaning a candidate accorded 52 percent could end anywhere between 55 percent and 49 percent. Or, fearful of making a wrong call, as in 2000, polling agencies will claim that the outcome is too close to foretell. Still, it’s too early to hail a new statistical science. As can be seen in Table A, Rasmussen’s and Gallup’s final polls predicted that Romney would be the winner, while the Boston Herald gave its state’s senate race to Scott Brown.



In fact, I am impressed when polls come even close. To start, what’s needed is a reliable cross-section of people who will actually vote. In 2008, only 62 percent of eligible citizens cast ballots. In 2012, even fewer did. Not surprisingly, some people who seldom or never vote will still claim they’ll be turning out. Testing them (“can you tell me where your polling place is?”) can be time-consuming and expensive. And there are those who don’t report their real choices. But much more vexing is finding people willing to cooperate. According to a recent Pew Research Center report, as recently as 1997 about 90 percent of a desired sample could be reached in person or at home by telephone, and 36 percent of them were amenable to an interview.



Today, with fewer people at home or picking up calls, and increasing refusals from those who do, the rates are down to 62 percent and 9 percent.3 So the polls must create a model of an electorate from the slender slice willing to give them time. Yet despite these hurdles, the Columbus Dispatch called Ohio’s result perfectly, using 1,501 respondents from the state’s 5,362,236 voters (the figures available on December 7).



Election polls are unique in at least two ways. First, they aim to tell us about a concrete act—a cast ballot—to be performed in an impending period of time. (Each year, more of us vote early.) It’s hard to think of other surveys that try to anticipate what a huge pool of adults will do. Second, how well a poll did becomes known once the votes are counted. So we find Nate Silver got it right and Rasmussen and Gallup didn’t. But a poll’s accuracy is only a historic curiosity after the returns are in. That is, it didn’t tell us anything lasting; just about a foray into forecasting during some months when a lot of us were wondering how events would turn out. Other polls tell us about something less fleeting: the opinions people hold on public issues and personal matters.



But with polls on opinions—military spending, say, or the provision of contraceptives—there’s seldom a subsequent vote that can validate findings. (To an extent, this is possible when there are statewide votes on issues like affirmative action and gay marriage.) A recourse is to compare a series of surveys that ask similar questions.



Yet as Table B shows, responses on abortion have been quite varied. What could be called the “pro-choice” side ranges across twenty-three percentage points. Certainly, how the question is phrased can skew the answers. CBS’s 42 percent agreed that abortion should be “generally available,” while Gallup’s 25 percent were supporting the view that abortion should be “always legal,” and The Washington Post’s 19 percent were for abortion to be “legal in all cases.” The short answer is that apart from the severe anti side, polling can’t give us specific figures on where most adults line up on abortion. Or, for that matter, any issue.



What goes on in the American mind remains a mystery that sampling is unlikely to unlock. In my estimate, the 65,075,450 people who chose Barack Obama and Joseph Biden over Mitt Romney and Paul Ryan were mainly expressing a moral mood, a feeling about the kind of country they want. I’d like to see Nate Silver using his statistical talents to explore such surmises.



We’ve been informed that 55 percent of women supported Obama, rising to 67 percent of those who are single, divorced, or widowed. Obama also secured 55 percent among holders of postgraduate degrees, and 69 percent of Jewish voters. But how can we know? Voting forms don’t ask for marital status or religion. The answer is that these and similar figures were extrapolated from a national sample of 26,563 voters, approached just after they cast their ballots or telephoned later in the day, by an organization called Edison Research.



The figures I’ve cited and others on the list look plausible to me. Still, there’s no way to check them; moreover, the Edison survey is the only post-election one that was done. So here’s a caveat: Jews are so small a fraction of the electorate that there were only 241 in the sample. Thus the abovementioned 69 percent comes with a seven-point margin of error either way, a caveat not noted in most media accounts.



Nate Silver doesn’t conduct his own polls. Rather, he collects a host of state and national reports, and enters them in a database of his own devising. Combining samples from varied surveys gives him a much larger pool of respondents and the potential for a more reliable profile. Of course, Silver doesn’t simply crunch whatever comes in. He factors in past predictions and looks for slipshod work, as when the Florida Times-Union on election eve gave the state to Romney, based on 681 interviews. He pays special attention to demographic shifts, such as a surge in registrations with Hispanic names. His model also draws on the Cook Political Report, which actually meets informally with candidates to assess their electoral appeal. In September, Silver set the odds of Obama’s winning at 85 percent, enough to withstand a dismal performance in the first debate, which hadn’t yet occurred.



1 See Michael Scherer, “Inside the Secret World of the Data Crunchers Who Helped Obama Win,” Time, November 7, 2012, and Nate Silver, “In Silicon Valley, Technology Talent Gap Threatens GOP Campaigns,” The New York Times, November 28, 2012. ↩

2 See Sharon Bertsch McGrayne’s superb The Theory That Would Not Die (Yale University Press, 2011). ↩

3 “Assessing the Representativeness of Public Opinion Surveys,” The Pew Research Center for the People and the Press, May 15, 2012. ↩




Sunday, January 27, 2013

Lisa Gardner - Catch Me

Occasionally I read crime fiction just for fun.  I had read two Gardner books before, one I liked and one I did not like.  You read a mystery/crime story like this and at first it's fun as you get caught up in the plot, but with this one I was disappointed at the end.  I think: why do I read mass-market stuff like this?  Well, at least I don't read much of it!

Flight

I saw the movie yesterday and I am not kidding when I say it's one of the best movies I've ever seen.  I say that because this movie is a tense, fast-moving drama with a clear story to tell.  Daniel Day-Lewis will probably win the Oscar for best actor, but if I had a vote I'd vote for Denzel Washington.  In essence it's about an alcoholic/drug addict facing up to his problem.  "Whip" is a pilot who miraculously crash-lands his plane and is at first a hero.  But then evidence mounts that he was drunk while flying the plane.  The picture comes to a totally satisfying and edifying conclusion, which is another reason it's a great movie.  I like films with clear endings.  Enough said.

Saturday, January 26, 2013

Concerning Nate Silver

January 25, 2013


What Nate Silver Gets Wrong

Posted by Gary Marcus and Ernest Davis



Can Nate Silver do no wrong? Between elections and baseball statistics, Silver has become America’s secular god of predictions. And now he has a best-seller, “The Signal and the Noise,” in which he discusses the challenges and science of prediction in a wide range of domains, covering politics, sports, earthquakes, epidemics, economics, and climate change. How does a predictor go about making accurate predictions? Why are certain types of predictions, like when earthquakes will hit, so difficult? For any lay reader wanting to know more about the statistics and the art of prediction, the book should be essential reading. Just about the only thing seriously wrong with the book lies in its core technical claim.



Broadly speaking, prediction consists of three parts: dynamic modelling, data analysis, and human judgments. One of the most valuable parts of the book is the way Silver describes the interactions between these different elements of predictions, including cases in which mathematical modelling can’t fully replace human judgment. Weather predictions, for example, that combine judgment with computation are between ten and twenty-five per cent more accurate than those that rely on computer programs alone. In Major League Baseball, most teams ultimately rely on a combination of hard-nosed stats and old-school human scouts. Silver also does a great job of rejecting the notorious (but ridiculous) prediction made by Chris Anderson, in Wired, in 2008, that Big Data would make the development of scientific theories and models “obsolete”; as Silver notes, raw data, no matter how extensive, are useless without a model: “Numbers have no way of speaking for themselves…. Data-driven predictions can succeed—and they can fail. It is when we deny our role in the process that the odds of failure rise.”



Silver’s one misstep comes in his advocacy of an approach known as Bayesian inference. According to Silver’s excited introduction,


Bayes’ theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas.



Lost until Chapter 8 is the fact that the approach Silver lobbies for is hardly an innovation; instead (as he ultimately acknowledges), it is built around a two-hundred-fifty-year-old theorem that is usually taught in the first weeks of college probability courses. More than that, as valuable as the approach is, most statisticians see it as only a partial solution to a very large problem.



A Bayesian approach is particularly useful when predicting outcome probabilities in cases where one has strong prior knowledge of a situation. Suppose, for instance (borrowing an old example that Silver revives), that a woman in her forties goes for a mammogram and receives bad news: a “positive” mammogram. However, since not every positive result is real, what is the probability that she actually has breast cancer? To calculate this, we need to know four numbers. The fraction of women in their forties who have breast cancer is 0.014, which is about one in seventy. The fraction who do not have breast cancer is therefore 1 - 0.014 = 0.986. These fractions are known as the prior probabilities. The probability that a woman who has breast cancer will get a positive result on a mammogram is 0.75. The probability that a woman who does not have breast cancer will get a false positive on a mammogram is 0.1. These are known as the conditional probabilities. Applying Bayes’s theorem, we can conclude that, among women who get a positive result, the fraction who actually have breast cancer is (0.014 x 0.75) / ((0.014 x 0.75) + (0.986 x 0.1)) = 0.1, approximately. That is, once we have seen the test result, the chance is about ninety per cent that it is a false positive. In this instance, Bayes’s theorem is the perfect tool for the job.
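The arithmetic above is easy to verify; a few lines of Python reproduce it exactly, using only the four numbers given in the paragraph.

```python
# The four numbers quoted in the paragraph above.
p_cancer = 0.014              # prior: fraction of women in their forties with breast cancer
p_no_cancer = 1 - p_cancer    # 0.986
p_pos_if_cancer = 0.75        # true-positive rate of the mammogram
p_pos_if_no_cancer = 0.1      # false-positive rate

# Bayes's theorem: P(cancer | positive mammogram)
p_cancer_if_pos = (p_cancer * p_pos_if_cancer) / (
    p_cancer * p_pos_if_cancer + p_no_cancer * p_pos_if_no_cancer
)
print(round(p_cancer_if_pos, 3))      # ~0.096, i.e. roughly 0.1
print(round(1 - p_cancer_if_pos, 3))  # ~0.904: about a ninety per cent chance it is a false positive
```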



This technique can be extended to all kinds of other applications. In one of the best chapters in the book, Silver gives a step-by-step description of the use of probabilistic reasoning in placing bets while playing a hand of Texas Hold ’em, taking into account the probabilities on the cards that have been dealt and that will be dealt; the information about opponents’ hands that you can glean from the bets they have placed; and your general judgment of what kind of players they are (aggressive, cautious, stupid, etc.).



But the Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be. For example, in a notorious series of experiments, Stanley Milgram showed that many people would torture a victim if they were told that it was for the good of science. Before these experiments were carried out, should these results have been assigned a low prior (because no one would suppose that they themselves would do this) or a high prior (because we know that people accept authority)? In actual practice, the method of evaluation most scientists use most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect) is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more sophisticated statistics in that tradition) are used.
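As a rough illustration of the Fisherian idea described above, the sketch below simulates the "generated randomly" benchmark directly: it asks whether an observed result exceeds what random data would produce 95 percent of the time. The observation and effect size are made up; this is a cartoon of null-hypothesis testing, not a reconstruction of any study discussed in the book.

```python
import random

random.seed(0)

# Made-up observation: 62 heads in 100 coin flips. Is that more heads than a
# fair coin (the null hypothesis) would produce 95 percent of the time?
observed_heads = 62
n_flips, n_simulations = 100, 10_000

# Simulate the null hypothesis many times, recording the head count each run.
null_counts = [sum(random.random() < 0.5 for _ in range(n_flips))
               for _ in range(n_simulations)]

cutoff = sorted(null_counts)[int(0.95 * n_simulations)]  # 95th-percentile head count under the null
p_value = sum(c >= observed_heads for c in null_counts) / n_simulations

print(f"95% cutoff under the null: {cutoff} heads")               # typically around 58
print(f"Observed {observed_heads}; p-value about {p_value:.3f}")  # roughly 0.01, 'significant' at the 5% level
```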



Unfortunately, Silver’s discussion of alternatives to the Bayesian approach is dismissive, incomplete, and misleading. In some cases, Silver tends to attribute successful reasoning to the use of Bayesian methods without any evidence that those particular analyses were actually performed in Bayesian fashion. For instance, he writes about Bob Voulgaris, a basketball gambler,





Bob’s money is on Bayes too. He does not literally apply Bayes’ theorem every time he makes a prediction. But his practice of testing statistical data in the context of hypotheses and beliefs derived from his basketball knowledge is very Bayesian, as is his comfort with accepting probabilistic answers to his questions.



But, judging from the description in the previous thirty pages, Voulgaris follows instinct, not fancy Bayesian math. Here, Silver seems to be using “Bayesian” not to mean the use of Bayes’s theorem but, rather, the general strategy of combining many different kinds of information.



To take another example, Silver discusses at length an important and troubling paper by John Ioannidis, “Why Most Published Research Findings Are False,” and leaves the reader with the impression that the problems that Ioannidis raises can be solved if statisticians use a Bayesian approach rather than following Fisher. Silver writes:





[Fisher’s classical] methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes… which apply frequentist tests to produce “statistically significant” but manifestly ridiculous findings.



But NASA’s 2011 study of toads was actually important and useful, not some “manifestly ridiculous” finding plucked from thin air. It was a thoughtful analysis of groundwater chemistry that began with a combination of naturalistic observation (a group of toads had abandoned a lake in Italy near the epicenter of an earthquake that happened a few days later) and theory (about ionospheric disturbance and water composition).



The real reason that too many published studies are false is not because lots of people are testing ridiculous things, which rarely happens in the top scientific journals; it’s because in any given year, drug companies and medical schools perform thousands of experiments. In any study, there is some small chance of a false positive; if you do a lot of experiments, you will eventually get a lot of false positive results (even putting aside self-deception, biases toward reporting positive results, and outright fraud)—as Silver himself actually explains two pages earlier. Switching to a Bayesian method of evaluating statistics will not fix the underlying problems; cleaning up science requires changes to the way in which scientific research is done and evaluated, not just a new formula.
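The point about sheer volume can be made with simple expected-value arithmetic. Below is a hedged sketch using invented round numbers: the 5 percent threshold is the conventional one, but the counts of experiments and the assumed statistical power are my own assumptions, not figures from Silver or Ioannidis.

```python
# Invented round numbers to illustrate the multiple-testing arithmetic.
experiments = 1000    # studies run in a year
true_effects = 100    # studies in which a real effect exists
alpha = 0.05          # chance a no-effect study still comes up "significant"
power = 0.8           # assumed chance a real effect is detected

false_positives = (experiments - true_effects) * alpha   # 900 * 0.05 = 45
true_positives = true_effects * power                    # 100 * 0.8  = 80

share_false = false_positives / (false_positives + true_positives)
print(f"{false_positives:.0f} false vs. {true_positives:.0f} true positive findings")
print(f"About {share_false:.0%} of 'significant' results are false")  # roughly a third
```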



It is perfectly reasonable for Silver to prefer the Bayesian approach—the field has remained split for nearly a century, with each side having its own arguments, innovations, and work-arounds—but the case for preferring Bayes to Fisher is far weaker than Silver lets on, and there is no reason whatsoever to think that a Bayesian approach is a “think differently” revolution. “The Signal and the Noise” is a terrific book, with much to admire. But it will take a lot more than Bayes’s very useful theorem to solve the many challenges in the world of applied statistics.



Gary Marcus and Ernest Davis are professors at New York University. Marcus has also written for newyorker.com about the facts and fictions of neuroscience, moral machines, the future of Web search and what needs to be done to clean up science.




Print is Here to Stay

ESSAY
Updated January 5, 2013, 12:25 a.m. ET

Don't Burn Your Books—Print Is Here to Stay

The e-book had its moment, but sales are slowing. Readers still want to turn those crisp, bound pages.

By NICHOLAS CARR

Lovers of ink and paper, take heart. Reports of the death of the printed book may be exaggerated.


A 2012 survey revealed that just 16% of Americans have actually purchased an e-book.

Ever since Amazon introduced its popular Kindle e-reader five years ago, pundits have assumed that the future of book publishing is digital. Opinions about the speed of the shift from page to screen have varied. But the consensus has been that digitization, having had its way with music and photographs and maps, would in due course have its way with books as well. By 2015, one media maven predicted a few years back, traditional books would be gone.



Half a decade into the e-book revolution, though, the prognosis for traditional books is suddenly looking brighter. Hardcover books are displaying surprising resiliency. The growth in e-book sales is slowing markedly. And purchases of e-readers are actually shrinking, as consumers opt instead for multipurpose tablets. It may be that e-books, rather than replacing printed books, will ultimately serve a role more like that of audio books—a complement to traditional reading, not a substitute.

How attached are Americans to old-fashioned books? Just look at the results of a Pew Research Center survey released last month. The report showed that the percentage of adults who have read an e-book rose modestly over the past year, from 16% to 23%. But it also revealed that fully 89% of regular book readers said that they had read at least one printed book during the preceding 12 months. Only 30% reported reading even a single e-book in the past year.



What's more, the Association of American Publishers reported that the annual growth rate for e-book sales fell abruptly during 2012, to about 34%. That's still a healthy clip, but it is a sharp decline from the triple-digit growth rates of the preceding four years.



The initial e-book explosion is starting to look like an aberration. The technology's early adopters, a small but enthusiastic bunch, made the move to e-books quickly and in a concentrated period. Further converts will be harder to come by. A 2012 survey by Bowker Market Research revealed that just 16% of Americans have actually purchased an e-book and that a whopping 59% say they have "no interest" in buying one.



Meanwhile, the shift from e-readers to tablets may also be dampening e-book purchases. Sales of e-readers plunged 36% in 2012, according to estimates from IHS iSuppli, while tablet sales exploded. When forced to compete with the easy pleasures of games, videos and Facebook on devices like the iPad and the Kindle Fire, e-books lose a lot of their allure. The fact that an e-book can't be sold or given away after it's read also reduces the perceived value of the product.



Beyond the practical reasons for the decline in e-book growth, something deeper may be going on. We may have misjudged the nature of the electronic book.



From the start, e-book purchases have skewed disproportionately toward fiction, with novels representing close to two-thirds of sales. Digital best-seller lists are dominated in particular by genre novels, like thrillers and romances. Screen reading seems particularly well-suited to the kind of light entertainments that have traditionally been sold in supermarkets and airports as mass-market paperbacks.



These are, by design, the most disposable of books. We read them quickly and have no desire to hang onto them after we've turned the last page. We may even be a little embarrassed to be seen reading them, which makes anonymous digital versions all the more appealing. The "Fifty Shades of Grey" phenomenon probably wouldn't have happened if e-books didn't exist.



Readers of weightier fare, including literary fiction and narrative nonfiction, have been less inclined to go digital. They seem to prefer the heft and durability, the tactile pleasures, of what we still call "real books"—the kind you can set on a shelf.



E-books, in other words, may turn out to be just another format—an even lighter-weight, more disposable paperback. That would fit with the discovery that once people start buying digital books, they don't necessarily stop buying printed ones. In fact, according to Pew, nearly 90% of e-book readers continue to read physical volumes. The two forms seem to serve different purposes.



Having survived 500 years of technological upheaval, Gutenberg's invention may withstand the digital onslaught as well. There's something about a crisply printed, tightly bound book that we don't seem eager to let go of.



—Mr. Carr is the author of "The Shallows: What the Internet Is Doing to Our Brains."

Friday, January 25, 2013

Al Gore on the Internet

Al Gore on How the Internet is Changing the Way We Think

By Al Gore

In an excerpt from his new book, The Future, the Nobel Prize winner and former vice president talks global networks, Marshall McLuhan, and how computing is changing what it means to be human.


Technology and the "World Brain"



Writers have used the human nervous system to describe electronic communication since the invention of the telegraph. In 1851, only six years after Samuel Morse received the message "What hath God wrought?" Nathaniel Hawthorne wrote: "By means of electricity, the world of matter has become a great nerve vibrating thousands of miles in a breathless point of time. The round globe is a vast brain, instinct with intelligence." Less than a century later, H. G. Wells modified Hawthorne's metaphor when he offered a proposal to develop a "world brain" -- which he described as a commonwealth of all the world's information, accessible to all the world's people as "a sort of mental clearinghouse for the mind: a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared." In the way Wells used the phrase "world brain," what began as a metaphor is now a reality. You can look it up right now on Wikipedia or search the World Wide Web on Google for some of the estimated one trillion web pages.



Since the nervous system connects to the human brain and the brain gives rise to the mind, it was understandable that one of the twentieth century's greatest theologians, Teilhard de Chardin, would modify Hawthorne's metaphor yet again. In the 1950s, he envisioned the "planetization" of consciousness within a technologically enabled network of human thoughts that he termed the "Global Mind." And while the current reality may not yet match Teilhard's expansive meaning when he used that provocative image, some technologists believe that what is emerging may nevertheless mark the beginning of an entirely new era. To paraphrase Descartes, "It thinks; therefore it is." [1]



The supercomputers and software in use have all been designed by human beings, but as Marshall McLuhan once said, "We shape our tools, and thereafter, our tools shape us." Since the global Internet and the billions of intelligent devices and machines connected to it---the Global Mind -- represent what is arguably far and away the most powerful tool that human beings have ever used, it should not be surprising that it is beginning to reshape the way we think in ways both trivial and profound -- but sweeping and ubiquitous.



In the same way that multinational corporations have become far more efficient and productive by outsourcing work to other countries and robosourcing work to intelligent, interconnected machines, we as individuals are becoming far more efficient and productive by instantly connecting our thoughts to computers, servers, and databases all over the world. Just as radical changes in the global economy have been driven by a positive feedback loop between outsourcing and robosourcing, the spread of computing power and the increasing number of people connected to the Internet are mutually reinforcing trends. Just as Earth Inc. is changing the role of human beings in the production process, the Global Mind is changing our relationship to the world of information.





The change being driven by the wholesale adoption of the Internet as the principal means of information exchange is simultaneously disruptive and creative. The futurist Kevin Kelly says that our new technological world -- infused with intelligence -- more and more resembles "a very complex organism that often follows its own urges." In this case, the large complex system includes not only the Internet and the computers, but also us.



Consider the impact on conversations. Many of us now routinely reach for smartphones to find the answers to questions that arise at the dinner table by searching the Internet with our fingertips. Indeed, many now spend so much time on their smartphones and other mobile Internet-connected devices that oral conversation sometimes almost ceases. As a distinguished philosopher of the Internet, Sherry Turkle, recently wrote, we are spending more and more time "alone together."



The deeply engaging and immersive nature of online technologies has led many to ask whether their use might be addictive for some people. The Diagnostic and Statistical Manual of Mental Disorders (DSM), when it is updated in May 2013, will include "Internet Use Disorder" in its appendix for the first time, as a category targeted for further study. There are an estimated 500 million people in the world now playing online games at least one hour per day. In the United States, the average person under the age of twenty-one now spends almost as much time playing online games as they spend in classrooms from the sixth through twelfth grades. And it's not just young people: the average online social games player is a woman in her mid-forties. An estimated 55 percent of those playing social games in the U.S. -- and 60 percent in the U.K. -- are women. (Worldwide, women also generate 60 percent of the comments and post 70 percent of the pictures on Facebook.)



Of Memory, "Marks," and the Gutenberg Effect



Although these changes in behavior may seem trivial, the larger trend they illustrate is anything but. One of the most interesting debates among experts who study the relationship between people and the Internet is over how we may be adapting the internal organization of our brains -- and the nature of consciousness -- to the amount of time we are spending online.



Human memory has always been affected by each new advance in communications technology. Psychological studies have shown that when people are asked to remember a list of facts, those told in advance that the facts will later be retrievable on the Internet are not able to remember the list as well as a control group not informed that the facts could be found online. Similar studies have shown that regular users of GPS devices began to lose some of their innate sense of direction.



The implication is that many of us use the Internet -- and the devices, programs, and databases connected to it -- as an extension of our brains. This is not a metaphor; the studies indicate that it is a literal reallocation of mental energy. In a way, it makes sense to conserve our brain capacity by storing only the meager data that will allow us to retrieve facts from an external storage device. Or at least Albert Einstein thought so, once remarking: "Never memorize what you can look up in books."



For half a century neuroscientists have known that specific neuronal pathways grow and proliferate when used, while the disuse of neuron "trees" leads to their shrinkage and gradual loss of efficacy. Even before those discoveries, McLuhan described the process metaphorically, writing that when we adapt to a new tool that extends a function previously performed by the mind alone, we gradually lose touch with our former capacity because a "built-in numbing apparatus" subtly anesthetizes us to accommodate the attachment of a mental prosthetic connecting our brains seamlessly to the enhanced capacity inherent in the new tool.



In Plato's dialogues, when the Egyptian god Theuth tells one of the kings of Egypt, Thamus, that the new communications technology of the age -- writing -- would allow people to remember much more than previously, the king disagreed, saying, "It will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks." [2]



So this dynamic is hardly new. What is profoundly different about the combination of Internet access and mobile personal computing devices is that the instantaneous connection between an individual's brain and the digital universe is so easy that a habitual reliance on external memory (or "exomemory") can become an extremely common behavior. The more common this behavior becomes, the greater one comes to rely on exomemory -- and the less one relies on memories stored in the brain itself. What becomes more important instead are the "external marks" referred to by Thamus 2,400 years ago. Indeed, one of the new measures of practical intelligence in the twenty-first century is the ease with which someone can quickly locate relevant information on the Internet.



Human consciousness has always been shaped by external creations. What makes human beings unique among, and dominant over, life-forms on Earth is our capacity for complex and abstract thought. Since the emergence of the neocortex in roughly its modern form around 200,000 years ago, however, the trajectory of human dominion over the Earth has been defined less by further developments in human physical evolution and more by the evolution of our relationship to the tools we have used to augment our leverage over reality.



Scientists disagree over whether the use of complex speech by humans emerged rather suddenly with a genetic mutation or whether it developed more gradually. But whatever its origin, complex speech radically changed the ability of humans to use information in gaining mastery over their circumstances by enabling us for the first time to communicate more intricate thoughts from one person to others. It also arguably represented the first example of the storing of information outside the human brain. And for most of human history, the spoken word was the principal "information technology" used in human societies.



The long hunter-gatherer period is associated with oral communication. The first use of written language is associated with the early stages of the Agricultural Revolution. The progressive development and use of more sophisticated tools for written language -- from stone tablets to papyrus to vellum to paper, from pictograms to hieroglyphics to phonetic alphabets -- is associated with the emergence of complex civilizations in Mesopotamia, Egypt, China and India, the Mediterranean, and Central America.



The perfection by the ancient Greeks of the alphabet first devised by the Phoenicians led to a new way of thinking that explains the sudden explosion in Athens during the fourth and fifth centuries bce of philosophical discourse, dramatic theater, and the emergence of sophisticated concepts like democracy. Compared to hieroglyphics, pictographs, and cuneiform, the abstract shapes that made up the Greek alphabet -- like those that make up all modern Western alphabets -- have no more inherent meaning in themselves than the ones and zeros of digital code. But when they are arranged and rearranged in different combinations, they can be assigned gestalt meanings. The internal organization of the brain necessary to adapt to this new communications tool has been associated with the distinctive difference historians find in the civilization of ancient Greece compared to all of its predecessors.



The use of this new form of written communication led to an increased ability to store the collective wisdom of prior generations in a form that was external to the brain but nonetheless accessible. Later advances -- particularly the introduction of the printing press in the fourteenth century (in Asia) and the fifteenth century (in Europe) -- were also associated with a further expansion of the amount of knowledge stored externally and a further increase in the ease with which a much larger percentage of the population could gain access to it. With the introduction of print, the exponential curve that measures the complexity of human civilization suddenly bent upward at a sharply steeper angle. Our societies changed; our culture changed; our commerce changed; our politics changed.



Prior to the emergence of what McLuhan described as the Gutenberg Galaxy, most Europeans were illiterate. Their relative powerlessness was driven by their ignorance. Most libraries consisted of a few dozen hand-copied books, sometimes chained to the desks, written in a language that for the most part only the monks could understand. Access to the knowledge contained in these libraries was effectively restricted to the ruling elites in the feudal system, which wielded power in league with the medieval church, often by force of arms. The ability conferred by the printing press to capture, replicate, and distribute en masse the collected wisdom of preceding ages touched off the plethora of advances in information sharing that led to the modern world.



Less than two generations after Gutenberg's press came the Voyages of Discovery. When Columbus returned from the Bahamas, eleven print editions of the account of his journey captivated Europe. Within a quarter century sailing ships had circumnavigated the globe, bringing artifacts and knowledge from North, South, and Central America, Asia, and previously unknown parts of Africa.



In that same quarter century, the mass distribution of the Christian Bible in German and then other popular languages led to the Protestant Reformation (which was also fueled by Martin Luther's moral outrage over the print-empowered bubble in the market for indulgences, including the exciting new derivatives product: indulgences for sins yet to be committed). Luther's Ninety-Five Theses, nailed to the door of the church in Wittenberg in 1517, were written in Latin, but thousands of copies distributed to the public were printed in German. Within a decade, more than six million copies of various Reformation pamphlets had been printed, more than a quarter of them written by Luther himself.



The proliferation of texts in languages spoken by the average person triggered a series of mass adaptations to the new flow of information, beginning a wave of literacy that began in Northern Europe and moved southward. In France, as the wave began to crest, the printing press was denounced as "the work of the Devil." But as popular appetites grew for the seemingly limitless information that could be conveyed in the printed word, the ancient wisdom of the Greeks and Romans became accessible. The resulting explosion of thought and communication stimulated the emergence of a new way of thinking about the legacy of the past and the possibilities of the future.



The mass distribution of knowledge about the world of the present began to shake the foundations of the feudal order. The modern world that is now being transformed in kind rather than degree rose out of the ruins of the civilization that we might say was creatively destroyed by the printing press. The Scientific Revolution began less than a hundred years after Gutenberg's Bible, with the publication of Nicolaus Copernicus's Revolution of the Spheres (a copy of which he received fresh from the printer on his deathbed). Less than a century later Galileo confirmed heliocentrism. A few years after that came Descartes's "Clockwork Universe." And the race was on.



Challenges to the primacy of the medieval church and the feudal lords became challenges to the absolute rule of monarchs. Merchants and farmers began to ask why they could not exercise some form of self-determination based on the knowledge now available to them. A virtual "public square" emerged, within which ideas were exchanged by individuals. The Agora of ancient Athens and the Forum of the Roman Republic were physical places where the exchange of ideas took place, but the larger virtual forum created by the printing press mimicked important features of its predecessors in the ancient world.



Improvements to the printing press led to lower costs and the proliferation of printers looking for material to publish. Entry barriers were very low, both for obtaining the printed works of others and for contributing one's own thoughts. Soon the demand for knowledge led to modern works -- from Cervantes and Shakespeare to journals and then newspapers. Ideas that found resonance with large numbers of people attracted a larger audience still---in the manner of a Google search today.



In the Age of Enlightenment that ensued, knowledge and reason became a source of political power that rivaled wealth and force of arms. The possibility of self-governance within a framework of representative democracy was itself an outgrowth of this new public square created within the information ecosystem of the printing press. Individuals with the freedom to read and communicate with others could make decisions collectively and shape their own destiny.



At the beginning of January in 1776, Thomas Paine -- who had migrated from England to Philadelphia with no money, no family connections, and no source of influence other than an ability to express himself clearly in the printed word -- published Common Sense, the pamphlet that helped to ignite the American War of Independence that July. The theory of modern free market capitalism, codified by Adam Smith in the same year, operated according to the same underlying principles. Individuals with free access to information about markets could freely choose to buy or sell---and the aggregate of all their decisions would constitute an "invisible hand" to allocate resources, balance supply with demand, and set prices at an optimal level to maximize economic efficiency. It is fitting that the first volume of Gibbon's Decline and Fall of the Roman Empire was also published in the same year. Its runaway popularity was a counterpoint to the prevailing exhilaration about the future. The old order was truly gone; those of the present generation were busy making the world new again, with new ways of thinking and new institutions shaped by the print revolution.



It should not surprise us, then, that the Digital Revolution, which is sweeping the world much faster and more powerfully than the Print Revolution did in its time, is ushering in with it another wave of new societal, cultural, political, and commercial patterns that are beginning to make our world new yet again. As dramatic as the changes wrought by the Print Revolution were (and as were those wrought earlier by the introduction of complex speech, writing, and phonetic alphabets), none of these previous waves of change remotely compares with what we are now beginning to experience as a result of today's emergent combination of nearly ubiquitous computing and access to the Internet. Computers have been roughly doubling in processing power (per dollar spent) every eighteen to twenty-four months for the last half-century. This remarkable pattern -- which follows Moore's Law -- has continued in spite of periodic predictions that it would soon run its course. Though some experts believe that Moore's Law may now finally be expiring over the next decade, others believe that new advances such as quantum computing will lead to continued rapid increases in computing power.
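
A rough sense of the compounding involved (an illustrative back-of-envelope sketch, not a calculation taken from the book): doubling every twenty-four months for fifty years is twenty-five doublings, roughly a thirty-million-fold increase, while doubling every eighteen months is about thirty-three doublings, on the order of ten billion times the original processing power per dollar. The short Python snippet below simply performs that arithmetic; the function name is made up for illustration.

def cumulative_growth(years, doubling_period_months):
    # Total growth factor after repeated doubling at the given period.
    doublings = years * 12 / doubling_period_months
    return 2 ** doublings

for months in (18, 24):
    factor = cumulative_growth(50, months)
    print("Doubling every %d months for 50 years -> about a %.0f-fold increase"
          % (months, factor))
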



Our societies, culture, politics, commerce, educational systems, ways of relating to one another -- and our ways of thinking -- are all being profoundly reorganized with the emergence of the Global Mind and the growth of digital information at exponential rates. The annual production and storage of digital data by companies and individuals is 60,000 times more than the total amount of information contained in the Library of Congress. By 2011, the amount of information created and replicated had grown by a factor of nine in just five years. (The amount of digital storage capacity did not surpass analog storage until 2002, but within only five years the percentage of information stored digitally grew to 94 percent of all stored information.) Two years earlier, the volume of data transmitted from mobile devices had already exceeded the total volume of all voice data transmitted. Not coincidentally, from 2003 to 2010, the average telephone call grew shorter by almost half, from three minutes to one minute and forty-seven seconds.



The number of people worldwide connected to the Internet doubled between 2005 and 2010 and in 2012 reached 2.4 billion users globally. By 2015, there will be as many mobile devices as there are people in the world. The number of mobile-only Internet users is expected to increase 56-fold over the next five years. Aggregate information flow using smartphones is projected to increase 47-fold over the same period. Smartphones already have captured more than half of the mobile phone market in the United States and many other developed countries.



But this is not just a phenomenon in wealthy countries. Although computers and tablets are still more concentrated in advanced nations, the reduction in the cost of computing power and the proliferation of smaller, more mobile computing devices is spreading access to the Global Mind throughout the world. More than 5 billion of the 7 billion people in the world now have access to mobile phones. In 2012, there were 1.1 billion active smartphone users worldwide -- still under one fifth of the global market. While smartphones capable of connecting to the Internet are still priced beyond the reach of the majority of people in developing countries, the same relentless cost reductions that have characterized the digital age since its inception are now driving the migration of smart features and Internet connectivity into affordable versions of low-end smartphones that will soon be nearly ubiquitous.



Already, the perceived value of being able to connect to the Internet has led to the labeling of Internet access as a new "human right" in a United Nations report. Nicholas Negroponte has led one of two competing global initiatives to provide an inexpensive ($100 to $140) computer or tablet to every child in the world who does not have one. This effort to close the "information gap" also follows a pattern that began in wealthy countries. For example, the United States dealt with concerns in the 1990s about a gap between "information haves" and "information have-nots" by passing a new law that subsidized the connection of every school and library to the Internet.



The behavioral changes driven by the digital revolution in developed countries also have at least some predictive value for the changes now in store for the world as a whole. According to a survey by Ericsson, 40 percent of smartphone owners connect to the Internet immediately upon awakening -- even before they get out of bed. And that kick-starts a behavioral pattern that extends throughout their waking hours. While they are driving to work in the morning, for example, they encounter one of the new hazards to public health and safety: the use of mobile communications devices by people who email, text, play games, and talk on the phone while simultaneously trying to operate their cars and trucks.



In one extreme example of this phenomenon, a commercial airliner flew ninety minutes past its scheduled destination because both the pilot and copilot were absorbed with their personal laptops in the cockpit, oblivious as more than twelve air traffic controllers in three different cities tried to get their attention -- and as the Strategic Air Command readied fighter jets to intercept the plane -- before the distracted pilots finally disengaged from their computers.



The popularity of the iPhone and the amount of time people communicate over its videoconferencing feature, FaceTime, has caused a few to actually modify the appearance of their faces in order to adapt to the new technology. Plastic surgeon Robert K. Sigal reported that "patients come in with their iPhones and show me how they look on FaceTime. The angle at which the phone is held, with the caller looking downward into the camera, really captures any heaviness, fullness and sagging of the face and neck. People say, 'I never knew I looked like that! I need to do something!' I've started calling it the 'FaceTime Facelift' effect. And we've developed procedures to specifically address it."





--------------------------------------------------------------------------------



[1] There is considerable debate and controversy over when--and even whether--artificial intelligence will reach a stage of development at which its ability to truly "think" is comparable to that of the human brain. The analysis presented in this chapter is based on the assumption that such a development is still speculative and will probably not arrive for several decades at the earliest. The disagreement over whether it will arrive at all requires a level of understanding about the nature of consciousness that scientists have not yet reached. Supercomputers have already demonstrated some capabilities that are far superior to those of human beings and are effectively making some important decisions for us already--handling high-frequency algorithmic trading on financial exchanges, for example--and discerning previously hidden complex relationships within very large amounts of data.



[2] The memory bank of the Internet is deteriorating through a process that Vint Cerf, a close friend who is often described as a "father of the Internet" (along with Robert Kahn, with whom he co-developed the TCP/IP protocol that allows computers and devices on the Internet to link with one another), calls "bit rot"--information disappears either because newer software can't read older, complex file formats or because the URL that the information is linked to is not renewed. Cerf calls for a "digital vellum"--a reliable and survivable medium to preserve the Internet's memory.



From the book THE FUTURE, by Al Gore, to be published by Random House this month. Copyright © 2013 by Albert Gore, Jr. Reprinted by arrangement with Random House. All rights reserved.







Library Quotes

“I have always imagined that Paradise will be a kind of library.”


― Jorge Luis Borges

"I declare after all there is no enjoyment like reading! How much sooner one tires of any thing than of a book! -- When I have a house of my own, I shall be miserable if I have not an excellent library.”


― Jane Austen, Pride and Prejudice

“I couldn't live a week without a private library - indeed, I'd part with all my furniture and squat and sleep on the floor before I'd let go of the 1500 or so books I possess.”


― H.P. Lovecraft




“Never lend books, for no one ever returns them; the only books I have in my library are books that other folks have lent me.”

― Anatole France

“If your library is not "unsafe," it probably isn't doing its job.”

― John Berry

“I attempted briefly to consecrate myself in the public library, believing every crack in my soul could be chinked with a book.”

― Barbara Kingsolver, The Poisonwood Bible

“What a school thinks about its library is a measure of what it feels about education.”

― Harold Howe

“The only thing that you absolutely have to know, is the location of the library.”

― Albert Einstein

“The sea is nothing but a library of all the tears in history.”

― Lemony Snicket

“For him that stealeth, or borroweth and returneth not, this book from its owner, let it change into a serpent in his hand and rend him.

Let him be struck with palsy, and all his members blasted.

Let him languish in pain, crying aloud for mercy, and let there be no surcease to this agony till he sing in dissolution.

Let bookworms gnaw his entrails in token of the worm that dieth not, and when at last he goeth to his last punishment, let the flames of hell consume him for ever.



Curse on book thieves, from the monastery of San Pedro, Barcelona, Spain”

― Cornelia Funke, Inkheart

“When I open them, most of the books have the smell of an earlier time leaking out between the pages - a special odor of the knowledge and emotions that for ages have been calmly resting between the covers. Breathing it in, I glance through a few pages before returning each book to its shelf.”

― Haruki Murakami, Kafka on the Shore

“A library is like an island in the middle of a vast sea of ignorance, particularly if the library is very tall and the surrounding area has been flooded.”

― Daniel Handler

“What in the world would we do without our libraries?”

― Katharine Hepburn

“She'd absolutely adored the library - an entire building where anyone could take things they didn't own and feel no remorse about it.”

― Ally Carter, Heist Society

“She'd always been a little excitable, a little more passionate about books than your average person, but she was supposed to be -- she was a librarian, after all.”

― Sarah Beth Durst

“Come with me,' Mom says.

To the library.

Books and summertime

go together.”

― Lisa Schroeder, I Heart You, You Haunt Me

“In the library I felt better, words you could trust and look at till you understood them, they couldn't change half way through a sentence like people, so it was easier to spot a lie.”

― Jeanette Winterson, Oranges are Not the Only Fruit

“Few pleasures, for the true reader, rival the pleasure of browsing unhurriedly among books: old books, new books, library books, other people's books, one's own books - it does not matter whose or where. Simply to be among books, glancing at one here, reading a page from one over there, enjoying them all as objects to be touched, looked at, even smelt, is a deep satisfaction. And often, very often, while browsing haphazardly, looking for nothing in particular, you pick up a volume that suddenly excites you, and you know that this one of all the others you must read. Those are great moments - and the books we come across like that are often the most memorable.”

― Aidan Chambers




Krugman Spot On As Usual

Deficit Hawks Down
By PAUL KRUGMAN

Published: January 24, 2013
President Obama’s second Inaugural Address offered a lot for progressives to like. There was the spirited defense of gay rights; there was the equally spirited defense of the role of government, and, in particular, of the safety net provided by Medicare, Medicaid and Social Security. But arguably the most encouraging thing of all was what he didn’t say: He barely mentioned the budget deficit.

Mr. Obama’s clearly deliberate neglect of Washington’s favorite obsession was just the latest sign that the self-styled deficit hawks — better described as deficit scolds — are losing their hold over political discourse. And that’s a very good thing.



Why have the deficit scolds lost their grip? I’d suggest four interrelated reasons.



First, they have cried wolf too many times. They’ve spent three years warning of imminent crisis — if we don’t slash the deficit now now now, we’ll turn into Greece, Greece, I tell you. It is, for example, almost two years since Alan Simpson and Erskine Bowles declared that we should expect a fiscal crisis within, um, two years.



But that crisis keeps not happening. The still-depressed economy has kept interest rates at near-record lows despite large government borrowing, just as Keynesian economists predicted all along. So the credibility of the scolds has taken an understandable, and well-deserved, hit.



Second, both deficits and public spending as a share of G.D.P. have started to decline — again, just as those who never bought into the deficit hysteria predicted all along.



The truth is that the budget deficits of the past four years were mainly a temporary consequence of the financial crisis, which sent the economy into a tailspin — and which, therefore, led both to low tax receipts and to a rise in unemployment benefits and other government expenses. It should have been obvious that the deficit would come down as the economy recovered. But this point was hard to get across until deficit reduction started appearing in the data.



Now it has — and reasonable forecasts, like those of Jan Hatzius of Goldman Sachs, suggest that the federal deficit will be below 3 percent of G.D.P., a not very scary number, by 2015.



And it was, in fact, a good thing that the deficit was allowed to rise as the economy slumped. With private spending plunging as the housing bubble popped and cash-strapped families cut back, the willingness of the government to keep spending was one of the main reasons we didn’t experience a full replay of the Great Depression. Which brings me to the third reason the deficit scolds have lost influence: the contrary doctrine, the claim that we need to practice fiscal austerity even in a depressed economy, has failed decisively in practice.



Consider, in particular, the case of Britain. In 2010, when the new government of Prime Minister David Cameron turned to austerity policies, it received fulsome praise from many people on this side of the Atlantic. For example, the late David Broder urged President Obama to “do a Cameron”; he particularly commended Mr. Cameron for “brushing aside the warnings of economists that the sudden, severe medicine could cut short Britain’s economic recovery and throw the nation back into recession.”



Sure enough, the sudden, severe medicine cut short Britain’s economic recovery, and threw the nation back into recession.



At this point, then, it’s clear that the deficit-scold movement was based on bad economic analysis. But that’s not all: there was also clearly a lot of bad faith involved, as the scolds tried to exploit an economic (not fiscal) crisis on behalf of a political agenda that had nothing to do with deficits. And the growing transparency of that agenda is the fourth reason the deficit scolds have lost their clout.



What was it that finally pulled back the curtain here? Was it the way the election campaign revealed Representative Paul Ryan, who received a “fiscal responsibility” award from three leading deficit-scold organizations, as the con man he always was? Was it the decision of David Walker, alleged crusader for sound budgets, to endorse Mitt Romney and his budget-busting tax cuts for the rich? Or was it the brazenness of groups like Fix the Debt — basically corporate C.E.O.’s declaring that you should be forced to delay your retirement while they get to pay lower taxes?



The answer probably is, all of the above. In any case, an era has ended. Prominent deficit scolds can no longer count on being treated as if their wisdom, probity and public-spiritedness were beyond question. But what difference will that make?



Sad to say, G.O.P. control of the House means that we won’t do what we should be doing: spend more, not less, until the recovery is complete. But the fading of deficit hysteria means that the president can turn his focus to real problems. And that’s a move in the right direction.

The Mind-Boggling Republicans

The mind-boggling tomfoolery of today's Republican Party is easily taken apart for what it is: cut taxes on the rich while increasing taxes on the poor and middle class.  All for the rich!  What amazes me is the stupidity of the middle class who take people like Paul Ryan seriously. You people are simply stupid fools.

Monday, January 21, 2013

A Sustained Progressive Agenda

(IT SEEMS THAT LIBERALS DID NOT EXPECT SUCH A LIBERAL ADDRESS BY THE PRESIDENT)

Obama's Startling Second Inaugural

By James Fallows

This was the most sustainedly "progressive" statement Barack Obama has made in his decade on the national stage.

I was expecting an anodyne tone-poem about healing national wounds, surmounting partisanship, and so on. As has often been the case, Obama confounded expectations -- mine, at least. Four years ago, when people were expecting a barn-burner, the newly inaugurated president Obama gave a deliberately downbeat, sober-toned presentation about the long challenges ahead. Now -- well, it's almost as if he has won re-election and knows he will never have to run again and hears the clock ticking on his last chance to use the power of the presidency on the causes he cares about. If anyone were wondering whether Obama wanted to lower expectations for his second term ... no, he apparently does not.

Of course Obama established the second half of the speech, about voting rights and climate change and "not a nation of takers" and "Seneca Falls to Selma to Stonewall" [!] etc., with careful allusions through the first half of the speech to our founding faiths -- and why doing things "together," the dominant word of the speech, has always been the American way.



More detailed parsing later, but this speech made news and alters politics in a way I had not anticipated.

Let's Move this Country Forward!


Obama’s Progressive Second Inaugural Address
By Jonathan Chait

Barack Obama’s first inaugural address was a high-minded paean to a better politics — “an end to the petty grievances and false promises.” His second was given over almost entirely to ends rather than means. And the analysis it contained struck a distinctly more combative tone.



The president dwelled at length on the founding vision of the United States — an idea that has animated the opposition, from right-wing protestors in colonial garb to conservatives claiming the Constitution as theirs. Again and again, and in pointed contrast to the tea party interpretation, Obama painted the animating principles of the United States as not merely limited government but a balance between freedom from government and the need for an effective government. “The patriots of 1776,” he said, “did not fight to replace the tyranny of a king with the privileges of a few or the rule of a mob.” He asserted that “preserving our individual freedoms ultimately requires collective action.”



Obama’s embrace of the progressive vision moved from the philosophical to the specific. He praised the role of income security programs (and assailed the Randian vision of the Republican Party), arguing that such programs “do not make us a nation of takers; they free us to take the risks that make this country great.” He pledged to address climate change, and he included gay rights firmly in the civil rights pantheon that has been woven into the national historic fabric.



At the end of the speech, Obama attempted to reconcile the lofty principles of his rhetoric with the grimy realities of politics. He concluded the speech with a Lincoln-esque (the movie, not the president) paean to compromise: “We must act, we must act knowing that our work will be imperfect.”



The Obama who begins his second term is much more acutely aware that the opposing party rejects, at the most philosophical level, the definition of the good that he has put forward as the national creed. Four years ago he expressed a jaunty confidence that the differences must be bridged. Today he committed himself to the same goal, but with a wariness borne of harsh experience.



Second Time Around

President Obama takes the ceremonial oath of office for the second time today.  Officially he took the oath yesterday.  Let us hope that he stands up to the Republicans and gets some things done in this second term.  You don't compromise with Republicans: you have to run over them.

Sunday, January 20, 2013

Stan the Man

I am saddened to hear of the passing of Stan Musial.  I grew up listening to the St. Louis Cardinals on KMOX in St. Louis, a clear channel, and this was before the Braves so the Cardinals were the South's team.  I remember 1963, his last year, and especially do I remember opening that pack of 1959 baseball cards and pulling out Stan Musial.  Oh, how I wish I had that baseball card!

Saturday, January 19, 2013

At the All-Nite Diner

Life gets serious when a man reaches his 60's. Gotta start slowing it down a bit. Do not ask for whom the bell tolls; the bell tolls just for the heck of it. I should be sitting at that all-night diner with Elvis & Marilyn talking old times. We could sing together and share our vulnerabilities. Don't be cruel to a heart that's true. Diamonds are a girl's best friend. Me? I'm just an old hound dog, but I'm laughing, not crying.


Thursday, January 17, 2013

Join the Circus?

The Ringling Brothers Barnum & Bailey Circus (have I got it right?) is coming to town as they do every January. Should I run off & join the circus? This is my chance! But what would I do? I'm afraid of heights so I can't do the trapeze. Elephants scare me so I am not a candidate to ride one. So do tigers and lions and all big animals. I'm not funny so I can't be a clown. My voice is too weak to be an announcer. "LADIES AND GENTLEMEN. . . " I don't think so. Walk a tightrope? Oh please! I can barely keep my balance in normal walking. Oh, well. I guess I'll never be in the circus.


Endless Lincoln Comments (2)

Thursday, Jan 17, 2013 06:45 AM CST


Just how factual is “Lincoln”?

Historians question whether Spielberg has made the 16th president into an unrealistic hero

By Daniel D'Addario

The debate over the facts in “Zero Dark Thirty” rages on, as the sources behind Kathryn Bigelow’s self-proclaimed work of cinematic journalism remain obscure. Fortunately for Steven Spielberg, his film has not been the target of similar media scrutiny. But while “Lincoln” arguably leads the field for the best picture Academy Award and is a huge financial hit, there are historians who believe the film paints a simplistic view of the Great Emancipator and the process of passing the 13th Amendment.



“It coheres, in some ways, very well, and, in some ways, not so well,” says Bruce Levine, a historian from the University of Illinois who just published the well-received Civil War history “The Fall of the House of Dixie.” “I give them a mixed review.”



The film’s focus on a narrow period of history — from after the 1864 election to Abraham Lincoln’s assassination — necessarily overemphasizes Lincoln’s role in ending an institution that was nearing its death, said Levine. “There are fundamental gaps in ‘Lincoln.’ Watching the film, you don’t know that by the time of the events described, slavery is already badly undermined — slaves have been running away from their masters in border states and Confederate states even before fighting began.”



Indeed, Levine argues, the 13th Amendment fight that the film paints as a nail-biter was not really in doubt. “The 1864 election gave Republicans enough votes in Congress,” he said, “so that when the new Congress convened, the Republicans could have passed a constitutional amendment with their new Congress … historians are really loath to say anything is inevitable, but the passage of this amendment was almost a foregone conclusion.” Sure, Lincoln’s force of will in taking on slavery was admirable and instrumental in this instance — but slavery may indeed have been doomed to begin with, thanks to the abolitionist movement.



This rising tide — whereby the end was drawing near despite the film’s attempts to portray slavery as a thriving institution — bothered Columbia professor Eric Foner as well. Foner, who won a Pulitzer for his book “The Fiery Trial: Abraham Lincoln and American Slavery,” questioned the film’s inside-the-Beltway focus for restricting viewers’ understanding of just how quickly slavery was dying across the North and South. “You get no sense that the end of slavery is happening all over the place: In the South, for instance, with armies freeing slaves, and slaves rising up to take over plantations. It was a very chaotic moment and dramatic moment. This is the worst thing you can say about a Hollywood movie: It’s kind of boring compared to the drama in the nation.”



The film is based on a section of Doris Kearns Goodwin’s book “Team of Rivals” dealing with the end of slavery (albeit a short section: “She must have a very good lawyer to get her name on the screen,” cracked Foner). The Columbia professor believes “Lincoln” privileges psychological insight into Lincoln at the expense of a panoramic view of the national scene. “People read Doris’ book for the same reason they read Jane Austen — to get inside human character. They don’t read it for the full, big, chaotic end of slavery, with a lot of historical actors.”



Those actors include African-Americans, who are, as a New York Times Op-Ed argued, passive when not absent in “Lincoln”: There’s a brief opening scene in which black soldiers reverently recite the Gettysburg Address to the president and there’s Mary Todd Lincoln’s confidante Elizabeth Keckley, who is portrayed as a quietly strong seamstress, though in reality, she raised money for newly freed slaves. Said Foner: ”She was political, she was out there in the streets doing something. If you go to the other extreme, with ‘Django Unchained’: That’s a total fantasy, but at least it’s got black people as historical actors.”



“Lincoln” can best be understood, perhaps, as an example of Hollywood’s quest for truth, limiting its scope to known knowns — Lincoln did, in fact, preside over the rough-and-tumble passage of the 13th Amendment, and was indeed a forcefully charismatic and visionary individual — while eliding that which is more complicated. Peter S. Carmichael, the director of the Civil War Institute at Gettysburg College, praised the film for depicting the process of arm-twisting and vote-buying that went into passing the law. “Today, cynicism prevails because there’s no sense of history and how our system functions. When people confront corruption today, they think we somehow lost the purity of our government. ‘Lincoln’ should remind people that democracy is impure, hopelessly corrupt, but out of that can come powerful and inspiring change.”



And historians acknowledge that to vociferously fight Hollywood history would be an uphill battle, and not a very enjoyable one. “The historian is known as a killjoy,” said Foner. “People go to the movies and enjoy themselves and the historian comes along. One of the very good things about the movie is that it does put slavery right at the center. Nobody’s going to go away thinking the Civil War was about the tariff, or whatever.”



The context for the action of “Lincoln” — the cultural climate that made freeing all slaves possible and necessary — will remain, for now, the stuff of history books. “The existence of the movie is definitely for the good,” said Levine, “precisely because it has stirred up so much interest in this very important subject. I am happy for its existence, and I wish it had portrayed history more accurately.”

Tuesday, January 15, 2013

Bruce Levine - The Fall of the House of Dixie

This book chronicles the decline and fall of the South during the Civil War.  It is a sad story because the South brought destruction upon itself by seceding from the Union.  I will be adding to this post.

One thing I will say unequivocally: Southern slaveholders before the war remind me of today's Republicans: dumb and stupid.

Friday, January 11, 2013

Endless Lincoln Comments

And I've only included a few of the more interesting comments on the Lincoln movie. FLH


Tony Kushner's Real Source For "Lincoln"?



Tony Kushner today received an Oscar nomination in the category of Best Adapted Screenplay for Lincoln. Lincoln is a superb film, and Kushner's script is (along with Daniel Day-Lewis's performance in the title role) the very best thing about it. He richly deserves the Oscar he will almost certainly win. But the nomination for "Best Adapted Screenplay" raises the question, "adapted from what"?



As has been widely noted, Lincoln isn't adapted in any meaningful way from its nominal source, Doris Kearns Goodwin's book, Team of Rivals, which despite its many virtues dedicates only a few pages to the film's central narrative--the passage of the 13th amendment to the Constitution. The claim that the film is based ("in part") on Team of Rivals mainly attests to the fact that Steven Spielberg purchased the rights to Goodwin's book before it was even, um, written. (This was in 1999. Goodwin was at the time a historical consultant to a multimedia event called The Unfinished Journey that Spielberg was preparing as part of Washington's millennium celebration. The deal was finally "inked," as they say in Hollywood, in 2001, four years before Team of Rivals was published.)



A good case can be made that Kushner's screenplay shouldn't be eligible for the "Best Adapted" category because it wasn't really "adapted" from anything. Kushner has made clear in interviews that he did quite a bit of independent research in preparing his script--enough so that the logical category to nominate it for would be "Best Original Screenplay." Goodwin probably helped him do the research; apparently she was involved in the project after the rights were purchased. So perhaps the screenplay credit should be "based in part on help from Doris Kearns Goodwin."



But that begs the question of what, if not Team of Rivals, is Lincoln's principal source. The answer is almost certainly Michael Vorenberg's Final Freedom: The Civil War, The Abolition of Slavery, and the Thirteenth Amendment, first published by Cambridge University Press in 2001. Vorenberg's book (which I made my TNR "best book of 2012" pick) is widely recognized as the definitive history of the political machinations involved in passing the 13th Amendment, a subject that received scant attention prior to publication of Final Freedom because most historians have tended to focus instead on the Emancipation Proclamation. Lincoln is known, after all, as the Great Emancipator, not the Great Manipulator of Congress Into Codifying the Executive-Branch Freeing of Slaves, Which Only Ever Applied To Confederate States, In A Constitutional Amendment That, After Lincoln's Death, Still Required Passage In State Legislatures. Vorenberg (who, incidentally, is an associate professor of history at Brown) writes in his introduction:



Historians have written much about the fate of African Americans after the Emancipation Proclamation, but they have not been so attentive to the process by which emancipation was written into law. In part, the inattention is a natural consequence of the compartmentalization of history. Because emancipation proved to be but one stage in the process by which enslaved African Americans became legal citizens, historians have been prone to move directly from the Emancipation Proclamation to the issue of legalized racial equality. In other words, historians have skipped quickly from the proclamation to the Fourteenth Amendment, ratified in 1868, which granted "due process of law" and "equal protection of the laws" to every American. Within this seamless narrative, the Thirteenth Amendment appears merely as a predictable epilogue to the Emancipation Proclamation or as an obligatory prologue to the Fourteenth Amendment.

We all know different now, thanks to Lincoln. But before Lincoln there was Final Freedom, a book rich in narrative detail that Kushner surely feasted on.



After I touted Vorenberg's book in TNR's year-end books roundup, he sent me a lovely letter. "If my book helped add accuracy to the film," Vorenberg wrote,



I can take some pleasure in that. Certainly there was plenty in the film that could only have come from my book. I wasn't sure whether to be flattered or shocked. I guess I'm a bit of both.

Also, your suspicion that my contribution to the film, whatever it may have been, was 'uncompensated' is accurate....

Vorenberg emphasized in a later e-mail that he feels in no way resentful about any use of his book for the film, and that by "shocked" he means "pleasantly surprised, not horrified." There was, he points out,



No reason for me to expect compensation or credit from the film makers. Films don't have to have footnotes, and it's hard to imagine how film makers could pay everyone who happens to have contributed to knowledge about a particular subject.

I asked Vorenberg for details in the film that "could only have come from my book." His reply:



I'll just give a few examples. With each one, by the way, the film makers took some understandable liberties with the facts. One episode involves the part of the film in which the Democrats try to bait Thaddeus Stevens into saying that the amendment grants "negro equality" in all respects, but Stevens sees through the gambit and responds that it grants only "equality before the law." The film does a nice job dramatizing that part of my book, though it leaves out Samuel "Sunset" Cox as Stevens's primary antagonist in that part of the debate. The omission of Cox is understandable (a film can do only so much) but too bad, as Cox was a pretty interesting character, and he played an important role throughout the amendment debate.

The film also uses my book for the specifics on how Alexander Coffroth's vote for the amendment was most likely secured--the promise of a resolution in his favor in an election controversy. Again, the film takes some liberties--there's no evidence that Lincoln or Stevens was involved in the discussions with Coffroth--but again, such liberties probably made for better film making.

The presence of African Americans in the congressional galleries at the final vote is a fact that most likely though not necessarily comes from my book....

Also, the film probably relied on my book to understand the way that the debate on the amendment was intertwined with the trip by Francis Blair, Sr. to Richmond to speak with Jefferson Davis and the ensuing trip north by the three Confederate envoys led by Alexander Stephens. The relationship between the so-called peace efforts and the amendment was complicated (I say "so-called" peace efforts because, despite the machinations and protestations on all sides, neither Abraham Lincoln nor Jefferson Davis thought that a peace could be negotiated at this point), probably more complicated than any film could depict. The film oversimplifies the connection between peace proposals and the amendment, but it does get at the important fact that these things were indeed connected.

Clearly Kushner is well within his rights to help himself to narrative details that he found in Final Freedom and elsewhere. Nobody owns history. But even if Vorenberg isn't troubled, I find it (on his behalf) a bit disappointing that neither Kushner nor Spielberg has acknowledged what a valuable resource they had in Final Freedom. (I've sent an e-mail to Kushner and will add an addendum if he replies.) Perhaps the lawyers told them they'd better not. And I find it downright galling that when you go to Lincoln's Web site and click on "publishing" you'll find links to Team of Rivals; to a movie tie-in essay collection called Lincoln: A President For The Ages to which Vorenberg did not contribute; to a couple of movie tie-in books about the making of the film; and to Lincoln: How Abraham Lincoln Ended Slavery In America, a book that Lincoln historian Harold Holzer published a couple of months ago as yet another tie-in to the film. You'll find all these books, and that's terrific, let's hope lots of people read them. But you won't find a link to Final Freedom.



Vorenberg, incidentally, is (like me) a Lincoln fan. "I'm glad that it got so many Oscar nominations," he e-mailed me, "and I suspect it will have many Oscar victories. The renewed press may help get even more people to go see it. I'm not making any plans to retire on my book royalties, but I'm happy that an interesting and important chapter of history is getting out to a broader public."



Update, 3:15 p.m.: Slate just posted a good piece on all this by Aisha Harris. Here's what Harris got out of Kushner:



I emailed Kushner about his research, and he informed me that in the future he plans to create a comprehensive list of the many books and other documents he consulted during his six years of preparation. So as not to leave any names off the list, he declined to provide more details, but he did point out that his historical advisors Harold Holzer and James McPherson are also credited, and their respective books were extremely helpful to him.

What about Final Freedom? Kushner did not mention it specifically....