Saturday, May 29, 2010

Michael Shelden - Mark Twain: Man in White (2)

This is the most enjoyable book on Mark Twain that I've read. It covers the last few years of his tumultuous life, which were largely sad and stressful yet full of activity. The biography reads like a novel.

It was during this late period of his life that he started wearing his now famous white suit. His signature uniform was a late creation to draw attention to himself. Twain wanted to be remembered, and this was one of his tools to fashion his imaage for posterity. It worked.

The book details his work toward revising US copyright law (which worked, extending a writers control over his work for 56 years) to preserve his royalitites for his family, the death of his close friend Henry Rogers, and the tragic death of his daughter Jean in December of 1909. He created controversies that continue to this day: firing his associates Isabel Lyon and Ralph Ashcroft, creating writings that he dared not publish in his lifetime, and dictating long autobiographical writings that will finally be published later this year in the complete autobiography of Mark Twain.

I was sad when I finished the book. I wish it had gone on for another 400 pages. This Michael Shelden, who teaches at Indiana State, is a masterful writer.

I am slowly coming to an appreciation and understanding of Mark Twain.

Thursday, May 27, 2010

A Personal E-Book Library?

The Chapter And Verse on E-Bookstores

By GEOFFREY A. FOWLER

As books go digital, much of the focus has been on which gadgets offer the best approximation of old-fashioned paper and ink on a screen. But there's another choice that's just as important for readers to weigh before they make the leap to e-books: which e-bookstore to frequent.

Kindle device showing the Amazon e-book store
Reading devices like the iPad, Kindle and Nook will come and go, but you'll likely want your e-book collection to stick around. Yet unlike music, commercial e-books from the leading online stores come with restrictions that complicate your ability to move your collection from one device to the next. It's as if old-fashioned books were designed to fit on one particular style of bookshelves. What happens when you remodel?

Much of this problem stems from the publishing industry, which has demanded that e-bookstores embed digital rights management software in most best sellers to keep them from being stolen and swapped, free, online. The music labels once asked the same from digital-music retailers, but eventually agreed to open up.

The e-bookstores share in the blame. Amazon.com Inc., Apple Inc., Barnes & Noble Inc. and Sony Corp. all want you to buy their own gadgets and to continue buying e-books from their stores. For example, purchases from Apple's new iBooks store can be read only on Apple's own iPad (and soon the iPhone). Even though Apple said it would support an industry standard format called ePub for iBooks, in practice your iBooks purchases remain locked on Apple's virtual bookshelf. (So I hope iBooks customers like Apple's light-brown wood paneling.)


WSJ's Geoffrey Fowler finds it's not easy to read material purchased on one e-book on a rival's device. He tells Julia Angwin there's a solution, but there's also a catch.
Many of the biggest e-book providers fall short of putting readers fully in charge of their own digital-book collections, but they have begun to unveil their own solutions for moving your e-books around.

Amazon, which jump-started the shift to e-books with its Kindle, lets customers read its e-books through apps on at least six kinds of devices. Amazon custom-built the free apps for gadgets that include the iPhone, iPad, BlackBerry, PC, Mac and (later this summer) devices running Google's Android software. If a device has an Internet connection, the apps automatically load Amazon e-book purchases from the company's website, saving you the fuss of keeping track of files and transferring them between gadgets with cables. In many ways, this is more convenient than the way we manage our digital-music collections by manually adding and deleting files from iPods through a computer.

Amazon's apps are slick and work on many of the most popular devices today, but Amazon buyers should know that they're likely stuck using the retailer's software forever. While Amazon says it plans to keep making apps for more devices, the list of potential devices for reading grows longer every day. Moreover, Amazon sells its e-books in a proprietary format, so there's no way to open those files on another device without an Amazon app or without resorting to cumbersome (and potentially illegal) third-party conversion software.

Barnes & Noble, too, adopted an Internet-connected app approach, providing a seamless way to shift its e-books between the Nook, PC, Mac, BlackBerry, iPhone, WindowsMobile for the HTC HD2 and soon iPad. Barnes & Noble has been integrating its e-bookstore into niche e-reading devices, like those by Plastic Logic, Irex and Pandigital. It also, uniquely, offers you the chance to "loan" some e-book purchases to a friend for 14 days. But its bookstore requires a somewhat annoying step: Each time you download a book to a new device, you must enter your name and the credit-card number that was used to buy the book in order to unfasten the digital lock on the book.

Beyond the apps, Sony, Barnes & Noble and Apple and a few smaller e-bookstores all promised they'd put their weight behind the industry standard format ePub, which is the e-book version of music's Mp3 and can be read by almost every reading device (except the Kindle). That sounds great in theory, but in practice, the ePub files either can't be transferred or doing so is cumbersome.

The problem is each company adds digital rights management software to an ePub book. A copy of "Moby Dick" I bought from iBooks delivered just blank pages when I opened it on the Nook. A Barnes & Noble e-book produced an error message in Sony's PC ePub reading software. Barnes & Noble says its books will be compatible with devices like the Sony Reader after a software upgrade.

There were two notable exceptions: Purchases from Sony's e-bookstore and a Borders Group-backed store called Kobo could open on the Nook and other ePub-reading devices if I used a free program from Adobe called Digital Editions to transfer it. That's a nice insurance policy but the process is far more complicated than it should be.

For now, the e-bookstore choice comes down to which compromises readers are willing to accept. Anybody who just wants a simple way to carry digital books around might be happy with an app-based approach. But readers intent on building an e-library may want to either invest in an ePub-based collection, or hold off until the industry figures out a better solution.

Books Matter

Books in the home as important as parents’ education level.

Whether rich or poor, illiterate or college graduates, parents who have books in the home increase the level of education their children will attain, according to a 20-year study led by Associate Professor Mariah Evans.
Monday, May 24, 2010

By Claudene Wharton

Whether rich or poor, residents of the United States or China, illiterate or college graduates, parents who have books in the home increase the level of education their children will attain, according to a 20-year study led by Mariah Evans, University of Nevada, Reno associate professor of sociology and resource economics.

For years, educators have thought the strongest predictor of attaining high levels of education was having parents who were highly educated. But, strikingly, this massive study showed that the difference between being raised in a bookless home compared to being raised in a home with a 500-book library has as great an effect on the level of education a child will attain as having parents who are barely literate (3 years of education) compared to having parents who have a university education (15 or 16 years of education). Both factors, having a 500-book library or having university-educated parents, propel a child 3.2 years further in education, on average.

Being a sociologist, Evans was particularly interested to find that children of lesser-educated parents benefit the most from having books in the home. She has been looking for ways to help Nevada’s rural communities, in terms of economic development and education.

“What kinds of investments should we be making to help these kids get ahead?” she asked. “The results of this study indicate that getting some books into their homes is an inexpensive way that we can help these children succeed.”

Evans said, “Even a little bit goes a long way,” in terms of the number of books in a home. Having as few as 20 books in the home still has a significant impact on propelling a child to a higher level of education, and the more books you add, the greater the benefit.

“You get a lot of ‘bang for your book’,” she said. “It’s quite a good return-on-investment in a time of scarce resources.”

In some countries, such as China, having 500 or more books in the home propels children 6.6 years further in their education. In the United States, the effect is less, 2.4 years, than the 3.2-year average advantage experienced across all 27 countries in the study. But, Evans points out that 2.4 years is still a significant advantage in terms of educational attainment.

For example, according to the U.S. Census Bureau’s American Community Survey, Americans who have some college or an associate’s degree, but not a bachelor’s degree, earn an average of $7,213 more annually than those with just a high school education. Those who attain a bachelor’s degree earn $21,185 more each year, on average, than those with just high school diplomas.

The study by Evans and her colleagues at Nevada, UCLA and Australian National University is one of the largest and most comprehensive studies ever conducted on what influences the level of education a child will attain.

The researchers were struck by the strong effect having books in the home had on children’s educational attainment even above and beyond such factors as education level of the parents, the country’s GDP, the father’s occupation or the political system of the country.

Having books in the home is twice as important as the father’s education level, and more important than whether a child was reared in China or the United States. Surprisingly, the difference in educational attainment for children born in the United States and children born in China was just 2 years, less than two-thirds the effect that having 500 or more books in the home had on children (3.2 years).

The study, “Family scholarly culture and educational success: Books and schooling in 27 nations,” was published in the journal, Research in Social Stratification and Mobility (online at www.sciencedirect.com).

Monday, May 24, 2010

Twain's Autobiography

After keeping us waiting for a century, Mark Twain will finally reveal all.

The great American writer left instructions not to publish his autobiography until 100 years after his death, which is now.

By Guy Adams in Los Angeles


Sunday, 23 May 2010

Exactly a century after rumours of his death turned out to be entirely accurate, one of Mark Twain's dying wishes is at last coming true: an extensive, outspoken and revelatory autobiography which he devoted the last decade of his life to writing is finally going to be published.


The creator of Tom Sawyer, Huckleberry Finn and some of the most frequently misquoted catchphrases in the English language left behind 5,000 unedited pages of memoirs when he died in 1910, together with handwritten notes saying that he did not want them to hit bookshops for at least a century.

That milestone has now been reached, and in November the University of California, Berkeley, where the manuscript is in a vault, will release the first volume of Mark Twain's autobiography. The eventual trilogy will run to half a million words, and shed new light on the quintessentially American novelist.

Related articles
Nimrod's Shadow, By Chris Paling
More Arts Articles
Search the news archive for more stories
Scholars are divided as to why Twain wanted the first-hand account of his life kept under wraps for so long. Some believe it was because he wanted to talk freely about issues such as religion and politics. Others argue that the time lag prevented him from having to worry about offending friends.

One thing's for sure: by delaying publication, the author, who was fond of his celebrity status, has ensured that he'll be gossiped about during the 21st century. A section of the memoir will detail his little-known but scandalous relationship with Isabel Van Kleek Lyon, who became his secretary after the death of his wife Olivia in 1904. Twain was so close to Lyon that she once bought him an electric vibrating sex toy. But she was abruptly sacked in 1909, after the author claimed she had "hypnotised" him into giving her power of attorney over his estate.

Their ill-fated relationship will be recounted in full in a 400-page addendum, which Twain wrote during the last year of his life. It provides a remarkable account of how the dying novelist's final months were overshadowed by personal upheavals.

"Most people think Mark Twain was a sort of genteel Victorian. Well, in this document he calls her a slut and says she tried to seduce him. It's completely at odds with the impression most people have of him," says the historian Laura Trombley, who this year published a book about Lyon called Mark Twain's Other Woman.

"There is a perception that Twain spent his final years basking in the adoration of fans. The autobiography will perhaps show that it wasn't such a happy time. He spent six months of the last year of his life writing a manuscript full of vitriol, saying things that he'd never said about anyone in print before. It really is 400 pages of bile."

Twain, who was born Samuel Langhorne Clemens, had made several attempts to start work on autobiography, beginning in 1870, but only really hit his stride with the work in 1906, when he appointed a stenographer to transcribe his dictated reminiscences.

Another potential motivation for leaving the book to be posthumously published concerns Twain's legacy as a Great American. Michael Shelden, who this year published Man in White, an account of Twain's final years, says that some of his privately held views could have hurt his public image.

"He had doubts about God, and in the autobiography, he questions the imperial mission of the US in Cuba, Puerto Rico and the Philippines. He's also critical of [Theodore] Roosevelt, and takes the view that patriotism was the last refuge of the scoundrel. Twain also disliked sending Christian missionaries to Africa. He said they had enough business to be getting on with at home: with lynching going on in the South, he thought they should try to convert the heathens down there."

In other sections of the autobiography, Twain makes cruel observations about his supposed friends, acquaintances and one of his landladies.

Parts of the book have already seen the light of day in other publications. Small excerpts were run by US magazines before Twain's death (since he needed the money). His estate has allowed parts of it to be adapted for publication in three previous books described as "autobiographies".

However, Robert Hirst, who is leading the team at Berkeley editing the complete text, says that more than half of it has still never appeared in print. Only academics, biographers, and members of the public prepared to travel to the university's Bancroft research library have previously been able to read it in full. "When people ask me 'did Mark Twain really mean it to take 100 years for this to come out', I say 'he was certainly a man who knew how to make people want to buy a book'," Dr Hirst said.

November's publication is authorised by his estate, which in the absence of surviving descendants (a daughter, Clara, died in 1962, and a granddaughter Nina committed suicide in 1966) funds museums and libraries that preserve his legacy.

"There are so many biographies of Twain, and many of them have used bits and pieces of the autobiography," Dr Hirst said. "But biographers pick and choose what bits to quote. By publishing Twain's book in full, we hope that people will be able to come to their own complete conclusions about what sort of a man he was."

Birthday

Bob Dylan turns 69 today. Happy Birthday, Bob!

Friday, May 21, 2010

The War Is Making You Poor Act

BY Alan Grayson
21 May 2010

Next week, there is going to be a "debate" in Congress on yet another war funding bill. The bill is supposed to pass without debate, so no one will notice.

What George Orwell wrote about in "1984" has come true. What Eisenhower warned us about concerning the "military-industrial complex" has come true. War is a permanent feature of our societal landscape, so much so that no one notices it anymore.

But we’re going to change this. Today, we’re introducing a bill called ‘The War Is Making You Poor Act’. The purpose of this bill is to connect the dots, and to show people in a real and concrete way the cost of these endless wars.



Next year’s budget allocates $159,000,000,000 to perpetuate the occupations of Afghanistan and Iraq. That’s enough money to eliminate federal income taxes for the first $35,000 of every American’s income. Beyond that, leaves over $15 billion to cut the deficit.

And that’s what this bill does. It eliminates separate funding for the occupation of Iraq and Afghanistan, and eliminates federal income taxes for everyone’s first $35,000 of income ($70,000 for couples). Plus it pays down the national debt.

The costs of the war have been rendered invisible. There's no draft. Instead, we take the most vulnerable elements of our population, and give them a choice between unemployment and missile fodder. Government deficits conceal the need to pay in cash for the war.

We put the cost of both guns and butter on our Chinese credit card. In fact, we don't even put these wars on budget; they are still passed using 'emergency supplemental'. A nine-year ‘emergency’.

Let's show Congress the cost of these wars is too much for us.

Tell Congress that you like 'The War Is Making You Poor Act'. No, tell Congress you love it. Act now.

http://www.TheWarIsMakingYouPoor.com

All we are saying is "give peace a chance." We will end these wars.

Together.

Recent Historical Truth

by Paul Krugman


May 21, 2010, 4:23 pm
Parties Change
You know, if Rand Paul loses his Senate race, in a way I’ll be sorry. He’s been so much fun in such a short period of time!

Anyway, given the flap over his assertion that he wouldn’t support the Civil Rights Act of 1964, some Republicans are making the argument that they were the party of civil rights, while Democrats were the enemies. And there’s some truth to that: in the 1950s and early 1960s, the opponents of civil right were largely Southern Democrats.

But what happened to those Southern Democrats? They became Republicans. And I’m not just speaking metaphorically: many Republican members of Congress during the era of GOP dominance were, literally, former Democrats who switched parties.

The point is that today’s Democratic party is, effectively, the party of Lyndon Johnson, whose decision to push forward on civil rights cost the party the South, as he knew it would. Meanwhile, today’s Republican party is the party of Richard Nixon, who cynically exploited the backlash against civil rights to build a new majority.

So yes, let’s honor the great Republicans of yore; I’m a Lincoln man, myself. But let me tell you, George W. Bush was no Abraham Lincoln.

About Compromise

A Book Review

History
Compromised
by Michael Grunwald

At the Edge of the Precipice: Henry Clay and the Compromise that Saved the Union
by Robert V. Remini
Basic Books, 184 pp., $24

When you get caught in a “compromising position,” it’s a bad thing. When the independence of the judiciary or the security of your data gets “compromised,” that’s a bad thing, too. But compromise itself, especially political compromise, is widely considered a good thing. And the indefatigable veteran historian Robert V. Remini, author of an admirable biography of Henry Clay and now a more modest exploration of Clay’s Compromise of 1850, clearly considers it a great thing. “1850 proved that compromise is the best solution to difficult problems,” Remini explains in promotional material for his new book about Henry Clay and his famous stratagem. “That principle holds as true now as it did 160 years ago—something we’d do well to remember during the current rancorous debate over health care.” Remini approvingly quotes Clay during the nullification controversy of 1833, defending “that great principle of compromise and concession which lies at the bottom of our institutions.”

But compromise is not a principle. Compromise is a tactic—sometimes an honorable tactic, sometimes less so. It all depends on the terms of, and the reasons for, the deal. Doing a deal can be righteous, sometimes even more righteous than standing on principle, but the very fact of the deal getting done is not proof of its righteousness. This is also something we would do well to remember today.

The modern fetish for “bipartisanship” and “consensus” as principled ends in themselves reflects what might be called the Clay Fallacy, which is the preference for process over substance, the assumption that there always must be a right answer in the middle ground between the two sides. Actually, sometimes the right answer is entirely on one side. Sometimes it is somewhere else entirely. Sometimes no answer is the right answer. William Seward was silly to declare in 1850 that “I think all legislative compromises radically wrong and essentially vicious,” but it is just as silly to assume that any compromise that can command a veto-proof majority is a deal worth doing. And as the rancorous debate over health care reminded us, the belief that the reasonable must always find common ground tends to empower the intransigent and the unprincipled.

The Compromise of 1850 is an especially tricky case, because it was a compromise about slavery, forged without the participation or representation of America’s three million slaves. Clay’s deal abolished the slave trade in the District of Columbia and set the stage for California, New Mexico, and Utah to become free states, but it also protected slavery in the District and the slave trade throughout the South while strengthening fugitive slave laws nationwide. No wonder it made Seward uneasy. It must be said that Remini, a distinguished and scrupulously honest analyst of antebellum America, tends in this account to treat slavery as an abstraction, as if the whippings, beatings, and other cruelties of human bondage were just another divisive political issue to be debated and logrolled in Washington. Sure, it’s important to analyze history in context, and Clay did indeed hold unusually progressive views for a slaveholding politician of his era; but it is just as important to keep in the forefront of our minds that slavery was evil, and that there was nothing principled or statesmanlike about profiting from forced labor, and that the intransigent abolitionists who come off in most histories as nineteenth-century Code Pinkers were ultimately correct.

Still, Remini makes a persuasive case that the 1850 compromise made sense, if only as a stalling tactic that delayed secession long enough for the North to build its industrial base and elect Abraham Lincoln to the White House, so that it could win the Civil War. Millard Fillmore and James Buchanan were not going to get the job done. The long prelude to disunion does have the tragic feel of inevitability: Clay might have been able to drive a harder bargain with the slavocracy in the Missouri Compromise of 1820, when even the South still considered slavery temporary and unfortunate, but back then Clay and just about everyone else still believed the “peculiar institution” would melt away of its own accord.

By 1850, Southern leaders militantly defended slavery as not only necessary but good, and vigorously resisted any limits on its expansion. Clay was determined to preserve the Union at any cost, and along with Stephen Douglas, who shepherded his deal into law, he gets credit for keeping the cost lower than it could have been. So the dean of antebellum historians deserves the benefit of doubt about the great compromise.

Remini’s historian-crush on the Great Compromiser, and his unabashed enthusiasm for compromise itself, is a bit easier to question. For Remini, the most salient fact about Clay was his statesmanship: he was a legislative giant who set aside his beliefs to bridge deep national divisions. But that is a nicer way of saying that Clay’s beliefs were generally conditional; he flip-flopped on the national bank, finessed his views about expansion, and never let his anti-slavery ideals trump his passion for deals. Remini writes with such nostalgia about The Great Triumvirate of Clay, Calhoun and Webster that it becomes easy to forget that Calhoun was a two-faced treasonous creep, and that Webster was secretly on the U.S. Bank’s payroll, and that the entire triumvirate routinely placed presidential ambitions above principle. It is true that Clay and Webster were genuinely devoted to the Union—Calhoun was devoted to racism, states rights, and Calhoun—and that all three were dealmakers. But while Lincoln revered Clay and shared his patriotism, he also knew when to say, No deal. He recognized that the right answer did not always lie somewhere in the middle.

Which brings us back to that rancorous debate over health care. On one side was President Obama, whose plan extended private insurance to the uninsured while trying to rein in the cost of care. On the other side were the Republicans, who accused Obama of socializing medicine and cutting Medicare at the same time, raving about death panels and government takeovers and any other bogeymen they thought they could pin to the Democratic donkey. In the middle were opportunists who recognized that Republican intransigence gave them carte blanche to demand Cornhusker Kickbacks or executive orders about abortion for their votes. The whole debate was silly, featuring months of obviously fruitless negotiations. Republicans would vow to oppose any bill with a public option. So the public option would vanish. But the Republicans would still oppose the bill. And so it went.

This is what happens when the political culture decides that compromise is a worthy goal in its own right. Lefty blogs deride this phenomenon as “Broderism,” a snarky reference to Washington Post columnist-in-perpetuity David Broder’s predictable rapture over anything bipartisan, and they have a point. It is a recipe for obstructionism, because the minority party knows that any time it can unite in opposition the majority party has by definition failed. It leads to legislative discussions that are basically hostage negotiations rather than genuine searches for common ground. It promotes the triumph of process over substance. It is no accident that the most transparently ludicrous boondoggles in American government—farm subsidies, makework Army Corps of Engineers water projects, pork-stuffed highway bills, the mortgage interest deduction—are logrolled bipartisan boondoggles.

Washington lionizes dealmakers in much the same way Wall Street lionizes dealmakers, but the recent financial meltdown was a powerful reminder that the result of a deal is much more important than the fact of a deal. “It has been proven time and time again”, Remini observes, “that little of lasting importance can be accomplished without a willingness on the part of all involved to seek to accommodate one another’s needs and demands.” Well, mutual accommodation can be nice when it works. But it’s as true in the twenty-first century as it was in the nineteenth: there is a time to compromise and to cave, and a time to fight and to win.

Mark Twain Wisdom

"If books are not good company, where will I find it?"

-Mark Twain

The Souls of White Folks

Eddie Glaude, Jr., Ph.D.Professor of Religion and Chair of the Center for African American Studies at Princeton University
Posted: May 21, 2010 01:23 AM BIO Become a Fan Get Email Alerts Bloggers' Index



Rand Paul and the Souls of Some White Folks

I can only imagine that someone with no intimate knowledge of the humiliation of Jim Crow -- of having to go to the back door of a restaurant or simply being refused service because of the color of one's skin -- would find the recent comments of Rand Paul compelling.

Some will argue, as many have, that Paul's comments about Title II of the 1964 Civil Rights Act were consistent with his libertarian principles. His idea of freedom requires that he reject any governmental intrusion on the private lives of American citizens. So he, like others, finds racism repulsive, would march, if given the chance, beside Martin Luther King, Jr. against state-sanctioned segregation, but vehemently opposes any governmental effort to restrict the bigoted ugliness of those so thoroughly committed to white supremacy: it is their first amendment right, after all. Paul is content to protest government-sanctioned racism, but he fails to see government's role in ridding the nation of racism.

These sorts of white folk unsettle me. They seem to be blind to the suffering of others. They seem, at least to me, to be terribly selfish -- and dare to call that selfishness freedom, or to justify their own ugliness by an appeal to some abstract principle of states' rights. In the interim, those who are not considered "one of us" are left to suffer the ire and violence of bigots.

In short, Paul's principles offer little comfort to those bearing the brunt of this nation's racist past and present. In fact, they do just the opposite. They alert us, or at least me, to be ever mindful of the ugliness that always seems to linger beneath the surfaces of our democratic form of life -- an ugliness based in a troublesome conception of whiteness.

Some white folk are not too happy about the current direction of our nation. They want to take back "their" government. They don guns in public. They hurl invective at their opponents. They pass draconian immigration legislation. They ban ethnic studies in school districts. They insist on a view of the United States that mirrors their own self-conception: white and deeply conservative.

What is required of us when confronting such voices is a loud renunciation: we must reject the view of whiteness this approach to politics presupposes. And we do so in the name of democratic principles that are consistent with our commitment to justice.

Freedom-talk without justice-talk is empty and, potentially, dangerous. Paul and those like him would do well to remember this. Too many Americans, of all colors, have engaged in struggles to achieve our country in light of their view of "justice as a larger loyalty." That commitment has led many Americans to risk their lives to rid us of this insidious notion that ours is a "white" nation.

When I was relatively young, my parents moved us to a different neighborhood in our small town in Mississippi. I was playing with my new friend. He was white. Our Tonka trucks were yellow. His dad yelled, "Leave that nigger alone and get in this house." He abruptly stopped, picked up his truck, whispered goodbye, and left. I cried.

My hope and prayer is that the legacy of the Civil Rights Act of 1964, of government in the service of good, has allowed me to flourish and has also given room to that gentle whisper -- to that hushed act of solidarity -- to blossom as a profound commitment to justice and freedom for all.

A True Take on Rand Paul and Libertarianism

Libertarians like Rand Paul are political juveniles. They live in a science fiction world that is punctured when they periodically say stupid things and are called to account. It's time to stop taking these idiots seriously. One of their godfathers is Ayn Rand. (It just occured to me. I wonder if Rand Paul is named after Ayn Rand)


Libertarianism Friday, May 21, 2010 08:30 ET
War Room The lesson of Rand Paul: libertarianism is juvenile
By Gabriel Winant

APRand Paul blew up the Internet. Did you notice? Here's how it went down: first, he suggested unmistakably that he opposed the Civil Rights Act. Then he tried unsuccessfully to weasel his way out, under near-implacable questioning. This was when people got really worked up. So Paul put out a press release, the strategy of which was more or less to deny that the previous 24 hours had happened. In the meantime, those of us who hail from the Internet have lost the ability to talk about anything else.

Mainly, of course, we've been condemning and mocking Paul. But there's a group that’s lined up to defend him as well. The basic claim is that, while Paul was of course wrong to oppose civil rights legislation, it was an honest and respectable mistake. As Dave Weigel put it on Twitter (hence the weird sentence), "Rand doesn't mean harm, is suffering as old libertarian debate moves into prime time." (Weigel wrote a longer defense of Paul yesterday as well. For an excellent response, see this post from Salon editor Joan Walsh.)

Various figures who stand a few notches in toward the mainstream from Paul have made arguments similar to Weigel's. It was a mere theoretical fancy, they say, nothing should be made of it. A staffer for Sen. Jim DeMint, R-S.C., calls the whole thing "a non-issue." Thanks, white people, for clearing up that whole civil rights thing for everybody else. Not important!

Continue reading
But, lest Paul be allowed to escape, those of us who do want to make something of this need to broaden our argument. It's not just that he screwed up and said something stupid because he's so committed to a purist fancy. No, it's worse than that. Libertarianism itself is what's stupid here, not just Paul. We should stop tip-toeing around this belief system like its adherents are the noble last remnants of a dying breed, still clinging to their ancient, proud ways.

Now, to be clear, before continuing: there are legions of brilliant individual libertarians. Weigel himself, for example, is a great writer and reporter, and a true master of Twitter. We've never met, but by all accounts, he's also very much a stand-up fellow. But brilliant, decent people can think silly things. And that's what's going on here. It's time to stop taking libertarianism seriously.

Ironically, the best way into this point comes from another brilliant libertarian, legal scholar Richard Epstein. Says Epstein, "To be against Title II in 1964 would be to be brain-dead to the underlying realities of how this world works."

There’s the key -- "the underlying realities of how the world works." Because never, and I mean never, has there been capitalist enterprise that wasn't ultimately underwritten by the state. This is true at an obvious level that even most libertarians would concede (though maybe not some of the Austrian economists whom Rand Paul adores): for the system to work, you need some kind of bare bones apparatus for enforcing contracts and protecting property. But it's also true in a more profound, historical sense. To summarize very briefly a long and complicated process, we got capitalism in the first place through a long process of flirtation between governments on the one hand, and bankers and merchants on the other, culminating in the Industrial Revolution. What libertarians revere as an eternal, holy truth is in fact, in the grand scheme of human history, quite young. And if they'd just stop worshiping for a minute, they'd notice the parents hovering in the background.

Libertarians like Paul are walking around with the idea that the world could just snap back to a naturally-occurring benign order if the government stopped interfering. As Paul implied, good people wouldn't shop at the racist stores, so there wouldn't be any.

This is the belief system of people who have been the unwitting recipients of massive government backing for their entire lives. To borrow a phrase, they were born on third base, and think they hit a triple. We could fill a library with the details of the state underwriting enjoyed by American business -- hell, we could fill a fair chunk of the Internet, if we weren't using it all on Rand Paul already. And I don't just mean modern corporate welfare, or centuries-ago agricultural changes. Most left-of-center policymaking can fit into this category in one way or another.

Think about the New Deal. Although libertarian ingrates will never admit it, without the reforms of the 1930s, there might not be private property left for them to complain about the government infringing on. Not many capitalist democracies could survive 25 percent unemployment, and it doesn't just happen by good luck. Or, take a couple more recent examples: savvy health insurance executives were quite aware during this past year that, if reform failed again, skyrocketing prices were likely to doom the whole scheme of private insurance (itself a freak accident of federal policy) and bring on single-payer. Here's a fun sci-fi one: Imagine the moment in, say, twenty years, when the evidence of climate change has become undeniable, and there’s an urgent crackdown on carbon-intensive industries. Then coal companies and agribusiness will be wishing they’d gotten on board with the mild, slow-moving reform that is cap-and-trade.

Get it? The government didn't just help make the "free market" in the first place -- although it did do that. It's also constantly busy trimming around the edges, maintaining the thing, keeping it healthy. The state can think ahead and balance competing interests in a way that no single company can.

The libertarian who insists that the state has no place beyond basic night-watchman duties is like a teenager who, having been given a car, promptly starts demanding the right to stay out all night. Sometimes, someone else really is looking out for your best interests by saying no. (This isn't to say the state is looking out for the best interests of everybody, or even most people. The point is just that, however Glenn Beck might hyperventilate, the government doesn't want to destroy the market. It wants to preserve it, and it does this job better than the market can on its own.)

And that's why the best rap on libertarians isn't that they're racist, or selfish. (Though some of them are those things, and their beliefs encourage both bad behaviors, even if accidentally.) It's that they're thoroughly out of touch with reality. It's a worldview that prospers only so long as nobody tries it, and is too unreflective and self-absorbed to realize this. In other words, it's bratty. And that's bad enough.

Thursday, May 20, 2010

Reading Fail

For a book club I had planned to read Atlas Shrugged by Ayn Rand. I could not, however, muster the courage for all 1,100+ pages. How shameful.

Tuesday, May 18, 2010

The Republican Party Has Not Changed

I agree with Paul Krugman as I've tried to say in this blog many times before. The GOP hasn't changed at least since FDR and the New Deal. After all, Republicans hated FDR and his efforts to save capitalism by proper regulation. AFter all, Barry Goldwater received 39% of the popular vote in 1964. The Republican crazies have always been there. It took a bad economy, the first black President (the Republican Party is the party of the white Confederate South), the growing Hispanic population (the "other"), and the increasing influence of modern media (the internet and right wing talk radio) to bring the crazies totally out of the closet.


Going to Extreme
By PAUL KRUGMAN
Published: May 16, 2010
Utah Republicans have denied Robert Bennett, a very conservative three-term senator, a place on the ballot, because he’s not conservative enough. In Maine, party activists have pushed through a platform calling for, among other things, abolishing both the Federal Reserve and the Department of Education. And it’s becoming ever more apparent that real power within the G.O.P. rests with the ranting talk-show hosts.

News organizations have taken notice: suddenly, the takeover of the Republican Party by right-wing extremists has become a story (although many reporters seem determined to pretend that something equivalent is happening to the Democrats. It isn’t.) But why is this happening? And in particular, why is it happening now?

The right’s answer, of course, is that it’s about outrage over President Obama’s “socialist” policies — like his health care plan, which is, um, more or less identical to the plan Mitt Romney enacted in Massachusetts. Many on the left argue, instead, that it’s about race, the shock of having a black man in the White House — and there’s surely something to that.

But I’d like to offer two alternative hypotheses: First, Republican extremism was there all along — what’s changed is the willingness of the news media to acknowledge it. Second, to the extent that the power of the party’s extremists really is on the rise, it’s the economy, stupid.

On the first point: when I read reports by journalists who are shocked, shocked at the craziness of Maine’s Republicans, I wonder where they’ve been these past eight or so electoral cycles. For the truth is that the hard right has dominated the G.O.P. for many years. Indeed, the new Maine platform is if anything a bit milder than the Texas Republican platform of 2000, which called not just for eliminating the Federal Reserve but also for returning to the gold standard, for killing not just the Department of Education but also the Environmental Protection Agency, and more.

Somehow, though, the radicalism of Texas Republicans wasn’t a story in 2000, an election year in which George W. Bush of Texas, soon to become president, was widely portrayed as a moderate.

Or consider those talk-show hosts. Rush Limbaugh hasn’t changed: his recent suggestion that environmentalist terrorists might have caused the ecological disaster in the gulf is no worse than his repeated insinuations that Hillary Clinton might have been a party to murder. What’s changed is his respectability: news organizations are no longer as eager to downplay Mr. Limbaugh’s extremism as they were in 2002, when The Washington Post’s media critic insisted that the radio host’s critics were the ones who had “lost a couple of screws,” that he was a sensible “mainstream conservative” who talks “mainly about policy.”

So why has the reporting shifted? Maybe it was just deference to power: as long as America was widely perceived as being on the way to a permanent Republican majority, few were willing to call right-wing extremism by its proper name. Maybe it took a Democrat in the White House to give some observers the courage to say the obvious.

To be fair, however, it’s not all a matter of perception. Right-wing extremism may be the same as it ever was, but it clearly has more adherents now than it did a couple of years ago. Why? It may have a lot to do with a troubled economy.

True, that’s not how it was supposed to work. When the economy plunged into crisis, many observers — myself included — expected a political shift to the left. After all, the crisis made nonsense of the right’s markets-know-best, regulation-is-always-bad dogma. In retrospect, however, this was naïve: voters tend to react with their guts, not in response to analytical arguments — and in bad times, the gut reaction of many voters is to move right.

That’s the message of a recent paper by the economists Markus Brückner and Hans Peter Grüner, who find a striking correlation between economic performance and political extremism in advanced nations: in both America and Europe, periods of low economic growth tend to be associated with a rising vote for right-wing and nationalist political parties. The rise of the Tea Party, in other words, was exactly what we should have expected in the wake of the economic crisis.

So where does our political system go from here? Over the near term, a lot will depend on economic recovery. If the economy continues to add jobs, we can expect some of the air to go out of the Tea Party movement.

But don’t expect extremists to lose their grip on the G.O.P. anytime soon. What we’re seeing in places like Utah and Maine isn’t really a change in the party’s character: it has been dominated by extremists for a long time. The only thing that’s different now is that the rest of the country has finally noticed.

Michael Shelden - Mark Twain: Man in White

I move on to this biography of Mark Twain's final years.

Scott Turow - Innocent (2)

This sequel to PRESUMED INNOCENT, published in 1987, is not as good as the original, but so what. The novel PRESUMED INNOCENT is one of the great reading entertainments of my reading life. But this one is awfully entertaining as well.

Novels like this I call entertainments. They are not great literature and will not likely stand the test of time, but they are great reading treats, like summer movies, entertaining, but not something you'll be thinking about 20 years from now.

I don't read many legal thrillers. If I do, I check first with Scott Turow, a practicing attorney, who knows of what he writes in the legal world.

His books always have accurate and cutting-edge legalities. This latest is no exception.

These two books read back-to-back would be great fun if you haven't yet read them.

Sunday, May 16, 2010

Reading in a Digital Age

Reading in a Digital Age
Notes on why the novel and the Internet are opposites, and why the latter both undermines the former and makes it more necessary



By Sven Birkerts


The nature of transition, how change works its way through a system, how people acclimate to the new—all these questions. So much of the change is driven by technologies that are elusive if not altogether invisible in their operation. Signals, data, networks. New habits and reflexes. Watch older people as they try to retool; watch the ease with which kids who have nothing to unlearn go swimming forward. Study their movements, their aptitudes, their weaknesses. I wonder if any population in history has had a bigger gulf between its youngest and oldest members.


I ask my students about their reading habits, and though I’m not surprised to find that few read newspapers or print magazines, many check in with online news sources, aggregate sites, incessantly. They are seldom away from their screens for long, but that’s true of us, their parents, as well.

But how do we start to measure effects—of this and everything else? The outer look of things stays much the same, which is to say that the outer look of things has not caught up with the often intangible transformations. Newspapers are still sold and delivered; bookstores still pile their sale tables high. It is easy for the critic to be accused of alarmism. And yet . . .

Information comes to seem like an environment. If anything “important” happens anywhere, we will be informed. The effect of this is to pull the world in close. Nothing penetrates, or punctures. The real, which used to be defined by sensory immediacy, is redefined.

Saturday, May 15, 2010

Clay Shirky vs. Nicholas Carr

by Clay Shirky via RSS


I think Carr’s premises are correct: the mechanisms of media affect the nature of thought. The web presents us with unprecedented abundance. This can lead to interrupt-driven info-snacking, which robs people of the ability to find time to think about just one thing persistently. I also think that these changes are significant enough to motivate us to do something about it. I disagree, however, about what it is we should actually be doing.

Carr quotes Maryanne Wolf’s assertion that deep reading is indistinguishable from deep thinking. It’s hard to know what to make of this claim; there are a host of people, from mathematicians to jazz musicians, who practice kinds of deep thought that are perfectly distinguishable from deep reading. Similarly, there are many kinds of reading for which the internet has been a boon; it would be hard to argue that the last ten years have seen a decrease in either the availability or comprehension of material on scientific or technical subjects, for example.

But the anxiety at the heart of “Is Google Making Us Stupid?” doesn’t actually seem to be about thinking, or even reading, but culture.

Despite the sweep of the title, it’s focused on a very particular kind of reading, literary reading, as a metonym for a whole way of life. You can see this in Carr’s polling of “literary types,” in his quoting of Wolf and the playwright Richard Foreman, and in the reference to War and Peace, the only work mentioned by name. Now War and Peace isn’t just any piece of writing, of course; it is one of the longest novels in the canon, and symbolizes the height of literary ambition and of readerly devotion.

But here’s the thing: it’s not just Carr’s friend, and it’s not just because of the web—no one reads War and Peace. It’s too long, and not so interesting.

This observation is no less sacrilegious for being true. The reading public has increasingly decided that Tolstoy’s sacred work isn’t actually worth the time it takes to read it, but that process started long before the internet became mainstream. Much of the current concern about the internet, in fact, is a misdirected complaint about television, which displaced books as the essential medium by the 1970s.

As a consolation prize, though, litterateurs were allowed to retain their cultural status. Even as television came to dominate culture, we continued to reassure one another that War and Peace or À La Recherche du Temps Perdu were Very Important in some vague way. (This tension has produced an entire literature about the value of reading Proust that is now more widely read than Proust’s actual oeuvre.)

And now the internet has brought reading back as an activity. As Carr notes, “we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice.” Well, yes. But because the return of reading has not brought about the return of the cultural icons we’d been emptily praising all these years, the enormity of the historical shift away from literary culture is now becoming clear.

And this, I think, is the real anxiety behind the essay: having lost its actual centrality some time ago, the literary world is now losing its normative hold on culture as well. The threat isn’t that people will stop reading War and Peace. That day is long since past. The threat is that people will stop genuflecting to the idea of reading War and Peace.

Carr quotes Richard Foreman, who rightly observes that the ‘complex, dense and “cathedral-like” structure of the highly educated and articulate personality’ is at risk. But I worked with Foreman in the early 90’s, when I was at another theater company down the block from his, and heard him make another relevant observation, in response to a question about why his plays weren’t “realistic.” The implication was that if his plays were wordy, abstract, and dense, it was because he was being intentionally difficult; his reply was that different themes require different forms and vice-versa, and that he didn’t write like Eugene O’Neill because he was working on different themes than O’Neill.

This link between form and theme is true of any medium. Making the net’s intellectual ethic as valuable as it can be will mean, among other things, securing for ourselves an ability to concentrate amidst our garden of ethereal delights. No matter how we solve that problem, though, it won’t bring back the cathedral-like model. On the network we have, the bazaar often works better than the cathedral, from the individual mind to the overall culture. Getting networked society right will mean producing the work whose themes best resonate on the net, just as getting the printing press right meant perfecting printed forms.

Carr is correct that there is cultural sacrifice in the transformation of the media landscape, but this is hardly the first time that has happened. The printing press sacrificed the monolithic, historic, and elite culture of Europe by promoting a diverse, contemporary, and vulgar one. That upstart literature has become the new high culture, and the challenge today comes, yet again, from the broadening of participation in both consumption and production of media.

Given this change, the question we need to be asking isn’t whether there is sacrifice; sacrifice is inevitable with serious change. The question we need to be asking is whether the sacrifice is worth it or, more importantly, what we can do to help make the sacrifice worth it. And the one strategy pretty much guaranteed not to improve anything is hoping that we’ll somehow turn the clock back. This will fail, while neither resuscitating the past nor improving the future.

This is what I find so puzzling about Carr. Unlike know-nothing critics of the medium, like Michael Gorman, Sven Birkerts, or Andrew Keen, Carr understands the net as well as anyone writing today. Yet his contrarian stance is slowly forcing him into a caricature of Luddism, increasingly unable to offer much of a suggestion for what to do next. A few years ago he could write, of Wikipedia, “Certainly, it’s useful—I regularly consult it to get a quick gloss on a subject.” Fast forward to the middle of 2008, and he is decrying not just Wikipedia, but Google, the Industrial Revolution, and even the invention of clocks. I doubt Carr thinks European society was actually better before widespread time-keeping (and therefore before the printing press), but even pseudo-Luddism is a waste of his intellect.

William Sayoran once remarked, “Everybody has got to die … but I have always believed an exception would be made in my case.” Luddism is a social version of that, where people are encouraged to believe that change is inevitable, except, perhaps, this time. This wish for stasis is bad for society, though not because it succeeds. The essential fact of Luddite complaint is that it only begins after a change has already taken place, so Luddites are mainly harmless whiners (except, of course, for the original Luddites, who were murderous thugs.) The real problem is elsewhere; Luddism is bad for society because it misdirects people’s energy and wastes their time.

The change we are in the middle of isn’t minor and it isn’t optional, but nor are its contours set in stone. We are a long way from discovering and perfecting the net’s native forms, what Barthes called the ‘genius’ particular to a medium. To get there, we must find ways to focus amid new intellectual abundance, but this is not a new challenge. Once the printing press meant that there were more books than a person could read in a lifetime, scholars had to sharpen disciplines and publishers define genres, as a bulwark against the information overload of the 16th century. Society was better after that transition than before, even though it took two hundred years to get there.

And now we’re facing a similar challenge, caused again by abundance, and taking it on will again mean altering our historic models for the summa bonum of educated life. It will be hard and complicated; abundance precipitates greater social change than scarcity. But our older habits of consumption weren’t virtuous, they were just a side-effect of living in an environment of impoverished access. Nostalgia for the accidental scarcity we’ve just emerged from is just a sideshow; the main event is trying to shape the greatest expansion of expressive capability the world has ever known.

* * *

Is the Internet Rotting our Brain?

Nicholas Carr has what sounds like a provacative book coming out in June on the effects of the internet. Here Laura Miller talks about the arguments in the book.

What to Read Sunday, May 9, 2010 19:01 ET
Laura Miller Yes, the Internet is rotting your brain
And Nicholas Carr's "The Shallows" has the evidence to prove it
By Laura Miller

Two years ago, Nicholas Carr, a technology writer, published an essay titled "Is Google Making Us Stupid?" in the Atlantic Monthly magazine. Despite being saddled with a grabby but not very accurate headline (the defendant was the Internet itself, not just its most popular search engine), the piece proved to be one of those rare texts that condense and articulate a fog of seemingly idiosyncratic worries into an urgently discussed issue in contemporary life.

It turned out that a whole lot of people were just then realizing that, like Carr, they had lost their ability to fully concentrate on long, thoughtful written works. "I get fidgety, lose the thread, begin looking for something else to do," Carr wrote. "I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle." At first assuming that his fractured mental state was the result of "middle-age mind rot," Carr eventually concluded that his heavy Internet usage was to blame. His article about this realization instantly rose to the top of the "most-read" list on the Atlantic's website and stayed there for months.

"The Shallows: What the Internet Is Doing to Our Brains" is Carr's new, book-length version of the Atlantic piece. It expands on the points he made in 2008, but it addresses some of the responses he got, as well. In addition to the usual moronic japes ("This article is too long!" -- can anyone really be witless enough to believe that joke is clever?), commenters, bloggers and pundits asked if Carr wasn't confusing the medium with how people choose to use it. Still others dared to argue that the value of what Carr calls "literary reading" has been inflated.

While "The Shallows" does contain significant chunks of the Atlantic essay, this isn't one of those all-too-familiar annoyances, the book that should have remained an article. In the brief period between the writing of the original piece and the publication of "The Shallows," neuroscientists have performed and reviewed important studies on the effects of multitasking, hyperlinks, multimedia and other information-age innovations on human brain function, all of which add empirical heft to Carr's arguments.

The results are not cheering, and the two chapters in which Carr details them are, to my mind, the book's payload. This evidence -- that even the microseconds of decision-making attention demanded by hyperlinks sap cognitive power from the reading process, that multiple sensory inputs severely degrade memory retention, that overloading the limited capacity of our short-term memory hampers our ability to lay down long-term memories -- is enough to make you want to run right out and buy Internet-blocking software.

Above all, Carr points to the past 20-some years of neurological research indicating that the human brain is, in the words of one scientific pioneer, "massively plastic" -- that is, much like our muscles, it can be substantially changed and developed by what we do with it. In a study that is quickly becoming as popular a touchstone as the Milgram experiment, the brains of London cab drivers were discovered to be much larger in the posterior hippocampus (the part of the brain devoted to spatial representations of one's surroundings) than was the case with a control group. These masses of neurons are the physiological manifestation of "the Knowledge," the cabbies' legendary mastery of the city's geography. The drivers' anterior hippocampus, which manages certain memory tasks, is correspondingly smaller. There's only so much space inside a skull, after all.

References to the cabbie study don't often mention this evidence that cognitive development may be a zero-sum game. The more of your brain you allocate to browsing, skimming, surfing and the incessant, low-grade decision-making characteristic of using the Web, the more puny and flaccid become the sectors devoted to "deep" thought. Furthermore, as Carr recently explained in a talk at the Los Angeles Times Festival of Books, distractibility is part of our genetic inheritance, a survival trait in the wild: "It's hard for us to pay attention," he said. "It goes against the native orientation of our minds."

Concentrated, linear thought doesn't come naturally to us, and the Web, with its countless spinning, dancing, blinking, multicolored and goodie-filled margins, tempts us away from it. (E-mail, that constant influx of the social acknowledgment craved by our monkey brains, may pose an even more potent diversion.) "It's possible to think deeply while surfing the Net," Carr writes, "but that's not the type of thinking the technology encourages or rewards." Instead, it tends to transform us into "lab rats constantly pressing levers to get tiny pellets of social or intellectual nourishment."

A good portion of "The Shallows" is devoted to persuading readers of the truth in Marshall McLuhan's famous pronouncement, "The medium is the message." This includes potted histories of such mind-altering "intellectual technologies" as the map, the clock and the printed book. To anyone moderately versed in this history, it may feel unnecessary, but the response to Carr's original article suggests that many people remain perilously sanguine about our ability to control the technology in our lives.

"The Shallows" certainly isn't the first examination of this subject, but it's more lucid, concise and pertinent than similar works by Winifred Gallagher and Sven Birkerts. Carr presents far more scientific material than those writers do, and avoids both the misty Spenglerian melancholia of Birkerts and Gallagher's muddled efforts to inject Buddhist spirituality into the debate.

What the book doesn't do, unfortunately, is offer a sufficient rejoinder to Carr's most puckish critics, people like Clay Shirky, who responded to one Web addict's complaint that he "can't read 'War and Peace' anymore" by proclaiming Tolstoy's epic novel to be "too long and not so interesting." While Shirky was no doubt playing the provocateur, he speaks for a very real anti-authoritarian cultural impulse to dismiss the judgments of experts, of history, even of a majority of other readers when they clash with the (often half-baked) evaluations of the individual. Shirky effectively asserted that, as far as Tolstoy is concerned, the emperor has no clothes -- at least not by the standards of today's multitasking digital natives. And why shouldn't their opinions be just as valid as anyone else's?

Carr sensibly replies that anyone who lacks the time or the cognitive "facility" to read a long novel like "War and Peace" will naturally find it too long and not so interesting. But in that case, how would we persuade such a person that it's worth learning how? For someone like Carr, the value of the intimate, intellectually nourishing practice of "literary reading" (and by extension, literary thinking) may be self-evident. Yet he's able to quote apparently intelligent and well-educated sources (including a Rhodes scholar who claims to never read books) who simply don't agree.

While "The Shallows" does present a good case for the richness of organic, biological memory over the crude information storage of digital media, I would have appreciated a more concerted effort to show the advantages of linear thinking over the scattered, skittering, browsing mind-set fostered by the Internet. What will we lose if (when?) this mode of thought passes into obscurity?

Carr and I (and perhaps you) may know that reading "War and Peace" can be a far more profound experience than navigating through a galaxy of up-to-date blog postings, but to someone who can't or won't believe this, what else can we point to as a consequence of the withering of such a skill? What will we lose socially, politically, civilly, scientifically, psychologically, if a majority decides that the intellectual "shallows" are the proper habitat for the 21st-century mind? This needs to be spelled out because, as Carr's critics have demonstrated, fewer and fewer people take it for granted. But with that caveat, "The Shallows" remains an essential, accessible dispatch about how we think now.

Wednesday, May 12, 2010

Scott Turow - Innocent

Scott Turow did not invent the legal thriller when he published Presumed Innocent in 1987. However, that book is widely considered the first in the long line of legal thrillers (a genre made famous mostly by John Grisham) published since then. I remember fondly reading Presumed Innocent in 1989. Now, at long last, here is the sequel. I am enjoying it so far.

Sunday, May 9, 2010

David Remnick - The Bridge

David Remnick, editor of The New Yorker, has given us the definitive biography of Barack Obama to this point in time. This one book will tell you everything you need to know.

Barack Obama is truly a man who had to define himself. The son of a white mother from the Midwest and a Kenyan father who abandoned the family, raised largely by white grandparents, Obama had to study and research his family in order to figure out who he was. In this sense, I consider him something of an existentialist: as much as anybody I've read about, he was able to define himself.

Remnick gives us in-depth perspective on every aspect of Obama's life. We read about Kenyan history and politics to understand Barack Obama, Sr. We learn some anthropology in order to understand his mother, who did graduate work in anthropology. We learn about Indonesia because the future President spent time in that country living with his mother and her second husband. I could go on and on. The point is that perspective is everything for Remnick. To understand is to see the full picture.

This is what makes this book such a joy. You feel like you truly have some understanding of the man when you're done.

The central thread of this book is race. We all know that Obama is our first African American President. This book lays out the full historical significance of this fact.

For example, Remnick pays tribute to Jesse Jackson, who paved the way for Obama by running for President in 1984 and 1988. Had there been no Jackson, perhaps there would have been no Obama. But before Jackson, there was Shirley Chisholm in 1972. All must be given their due.

Remnick points out that slave labor largely built the White House and the Capitol, and that 12 American Presidents owned slaves. This is a sobering reminder of the racial fault line that runs through all of American history.

To fully understand Barack Obama, you have to understand where he came from and how he defined himself. Here is the book that tells you these things.

Obama Criticizes the iPad, Xbox, etc.

HAMPTON, Va. -- President Barack Obama, addressing graduates at historically black Hampton University on Sunday, said that it is the responsibility of all Americans to offer every child the type of education that will make them competitive in an economy in which just a high school diploma is no longer enough.

Moreover, Obama said, the era of iPads and Xboxes had turned information into a diversion that was imposing new strains on democracy.

"You're coming of age in a 24/7 media environment that bombards us with all kinds of content and exposes us to all kinds of arguments, some of which don't always rank that high on the truth meter," he told the students. "And with iPods and iPads, and Xboxes and PlayStations -- none of which I know how to work -- information becomes a distraction, a diversion, a form of entertainment, rather than a tool of empowerment, rather than the means of emancipation. So all of this is not only putting pressure on you; it's putting new pressure on our country and on our democracy."

Friday, May 7, 2010

Trying to Keep Up

I try to keep up with what's going on in the world, but it's difficult. This country I can more or less follow, but the rest of the world is tough. I feel so parochial.

The election in Great Britain? A coalition government? Beats me what the ramifications could be.

The fiscal crisis in Greece? I don't understand the ramifications here either.

And Germany? I haven't understood Germany since Hitler.