Thursday, March 31, 2011

Reading So Far in 2011

My favorite book so far this year is the Mickey Mantle biography, Jane Leavy's The Last Boy. His was such a tragic life. He could have accomplished so much good had he led a better life.

I have discovered the popular crime novelist Michael Connelly. I am enjoying the Mickey Haller character. "The Lincoln Lawyer" is the best one.

I am reading through the novels of Charles Portis. His best is "Norwood" followed closely by "True Grit."

Edmund Morris completes his Teddy Roosevelt trilogy with "Colonel Roosevelt." Everyone seems to like Theodore Roosevelt, certainly our most intellectual President.

Wednesday, March 30, 2011

The New American Dream (for the rich)

The New American Dream
Tuesday 29 March 2011

by: William Rivers Pitt

If you are wealthy, you are living in the Golden Age of your American Dream, and it's a damned fine time to be alive. The two major political parties are working hammer and tong to bless you and keep you. The laws are being re-written - often by fiat, and in defiance of court orders - to strengthen the walls separating you and your wealth from the motley masses. Your stock portfolio, mostly made by and for oil and war, continues to swell. Your banks and Wall Street shops destroyed the economy for everyone except you, and not only did they get away with it, they were handed a vast dollop of taxpayer cash as a bonus prize.

The little people probably crack you up when you bother to think about them. Their version of the American Dream is a ragged blanket too short to cover them, but they still buy into it, and that's the secret of your strength in the end. So many of them walk into the voting booths and solemnly vote against their own best interests, and for yours, because the American Dream makes them think they, too, will be rich someday. They won't - you've made sure of that - but so long as they keep believing it, your money will continue to roll in.

The Citizens United Supreme Court decision swept away the last tattered shreds of the façade of fairness in politics and electioneering, and now you own the whole store. You can use your vast financial resources to lie on a national level now, lie with your bare face hanging out, because it works. You're not the bad guy in America. Teachers, cops, firefighters, union members and public-sector employees are the bad guys, the reason for all our economic woes. NPR and Planned Parenthood are the bad guys. You did that, and when governors like Scott Walker rampage through workers' rights on your dime, you chuckle into your sleeve and enjoy your interest rate.

We're firing teachers and missiles simultaneously, to poach a line from Jon Stewart, and the inherent disconnect fails to sink in among those serving as dray horses for your greed and ambition. They're in the traces, bellowing about what you want them to focus on thanks to your total control of the "mainstream" news media, and they plow your fields with the power of their incoherent, misdirected rage.

They pay their taxes. Isn't that a hoot? They pay their taxes dutifully and annually, and that money gets shunted right to you and your friends, thanks to the politicians who love you and the laws that favor you, not to mention the wars that sustain you. They pay their taxes when they should just pay you, right? Talk about getting rid of government waste. They should just pay you directly and cut out the middle man, because it all goes to the same place in the end. You.

You are General Electric, and you paid no taxes in 2010. You made $14.2 billion in worldwide profits, $5.1 billion of which was made in America, and your tax burden amounted to a big fat zero. In fact, you claimed a tax benefit of $3.2 billion, thanks to your anti-tax lobbying efforts in Washington and your use of offshore tax havens that protect and defend your profit margin.

You are ExxonMobil, and you paid no taxes in 2009. In fact, you got a $156 million return.


You are Bank of America, and despite receiving a massive chunk of the taxpayer-funded bailout, despite recording a profit of $4.4 billion, you paid no taxes and received a $1.9 billion rebate.

You are Chevron, and you made $10 billion in 2009. You paid no taxes, and got a $19 million refund.

You are Citigroup, and you paid no taxes despite earning more than $4 billion, and despite getting a sizeable chunk of the taxpayer-funded bailout.

Your favorite part of it all?

The part that makes you laugh out loud?

It's when you hear the politicians you own talk about "shared sacrifices" and "fiscal responsibility." Man, that's a hoot. You watch them rave and froth on Capitol Hill about shutting down the government because the country doesn't have enough money to fund "entitlement programs" the little people have been paying into for decades. The very term - "entitlement" - cracks you up; how is it an entitlement if people paid for it? Nobody asks that question, of course. Nobody asks about cutting the bloated defense budget. Nobody asks where the billions diverted to Iraq and Afghanistan actually went, or where the money for Libya is going. For damned sure, nobody demands that you pony up and pay your fair share. You made sure of that, and the show goes on.

The United States of America has undergone a powerful transformation over the course of a single generation, and you are right up there in the catbird seat, watching it all unfold. For you, the New American Dream is "I got mine, kiss my ass, work and die (if you can find work, sucker), and pay me." For everyone else, the New American Dream is about simple survival, about running as fast as they can while going inexorably backwards.

Maybe you can even see the cancer eating away at the country that has treated you so royally, but you don't really care. You are safe and comfortable behind your gilded walls.

Saturday, March 26, 2011

Was the Civil War Necessary?

This is one of the great questions of American history, if not THE great question. There was a generation of American historians who said no, it was not necessary; it could have been avoided. They called the politicians of the Civil War era bumblers and considered them inept. This author is evidently a descendant of that generation of American historians: he studied under Avery Craven, one of the leaders of the group who said the War was unnecessary. I disagree. Unfortunately, this horrible war WAS necessary to achieve the equality of people of color in this country, a process still underway today.


By ANDREW DELBANCO
Published: March 25, 2011



AMERICA AFLAME

How the Civil War Created a Nation

By David Goldfield

“America Aflame,” David Goldfield’s account of the coming, conduct and consequences of the Civil War, is not a book about things that never happened. It is a riveting, often heartbreaking, narrative of things that did. Yet it also compels us to ponder choices not made, roads not taken — always with the implicit question in mind of whether the nation might somehow have spared itself the carnage of the war and, if so, what kind of nation it would have become.

At the outset of his masterly synthesis of political, social, economic and religious history, Goldfield tells us that he “is antiwar, particularly the Civil War.” Then he shows, in painfully vivid prose, young men marching into fields “fat with corn and deep green clover” only to be burned alive or torn by shrapnel, survivors left to breathe “in spurts, a frothy saliva dripping creamily from their mouths down to their ears, strings of matter from their brains swaying in the breeze,” or to die in their own blood and excrement or, if sufficiently alive to be carried off the field, to be treated by surgeons who, without knowledge of anesthesia or antisepsis, slice off mangled limbs with knives sharpened on “the soles of their boots.”

Many other books (one thinks of Charles Royster’s “Destructive War” and, more recently, of Drew Gilpin Faust’s “This Republic of Suffering”) have sought to convey, without glorifying or glossing it over, the battlefield truth of America’s four-year descent into organized savagery. What is distinctive about Goldfield’s book is that he believes the 600,000 deaths and countless mutilations could have been avoided. A war fought over the future of slavery did not have to happen because “the political system established by the founders would have been resilient and resourceful enough to accommodate our great diversity sooner without the tragedy of a civil war.” In advancing this thesis, Goldfield is returning to a view once held by eminent historians, including his teacher Avery Craven, that the war was an avertable catastrophe rather than, as Senator William Henry Seward of New York called it in advance, an “irrepressible conflict.”

In Goldfield’s telling, the force that drove the nation toward apocalypse was evangelical fervor of one form or another — in the North, faith in the righteousness of the abolitionist cause, in the South, faith in slavery as a guarantor of a threatened way of life. “Faith reinforced the romance of war” until “war had become a magic elixir to speed America’s millennial march” toward Armageddon.

But Goldfield’s belief that the “political system” could have solved the problem of slavery is a leap of faith of his own. Secessionists, after all, left the Union precisely because they rejected a constitutionally valid election that placed slavery, as Lincoln put it, “in the path of ultimate extinction.” In his first Inaugural Address, which Goldfield aptly calls “a walking-on-eggshells speech,” Lincoln tried to reassure slaveowners that he would not interfere with their peculiar institution where it already existed, but would only limit its expansion into territories over which the federal government held authority. But slaveowners did not concede the constitutional legitimacy of that authority — and the United States Supreme Court, in its notorious Dred Scott decision, had agreed with them.

Goldfield’s heroes are those who, in the face of this impasse, sought a solution short of secession — men like Alexander Stephens, a congressman from Georgia, later a reluctant vice president of the Confederacy, who was, in his words, “utterly opposed to mingling religion with politics,” and Stephen Douglas, a figure “of selfless patriotism and personal courage” who, recognizing his impending defeat in the election of 1860, campaigned through the South in an effort to save the Union and, after the attack on Fort Sumter, threw his support to Lincoln.

Andrew Delbanco, the editor of “The Portable Abraham Lincoln,” is the Levi professor in the humanities and the director of American studies at Columbia.


In the end, the war did put an end to legal human bondage in America. But emancipation came slowly — first as a military measure to deny the Confederacy the coerced manpower of its slaves, only later as a war against the institution itself once the valiant service of black soldiers had made the thought of restoring slavery after the fighting was over unthinkable.


According to Goldfield, the war reduced the North to a sort of postorgiastic exhaustion, leaving former slaves at the mercy of terrorist organizations like the Ku Klux Klan in a South determined to return them to subjugation. After a failed experiment in reconstruction on the basis of racial equality, some of the hottest antebellum abolitionists became apostates to their once-professed faith. Harriet Beecher Stowe’s “passion for the plight of the slave” gave way to a preoccupation with decorating houses. Horace Greeley, who had once goaded Lincoln to act more decisively against slavery, wondered if his own enmity to slavery “might have been a mistake.”

Lamenting the horrors of the war, Goldfield computes its total monetary cost at around $6.7 billion in 1860s currency, and asserts that if “the government had purchased the freedom of four million slaves and granted a 40-acre farm to each slave family, the total cost would have been $3.1 billion, leaving $3.6 billion for reparations to make up for a century of lost wages. And not a single life would have been lost.” But this computation proceeds from some dubious assumptions. Such a transaction can be made only if there is a willing seller as well as a willing buyer — and, as Goldfield himself notes, all attempts at compensated emancipation, even in the small border state of Delaware, where slaves were a minor part of the local economy, failed because slaveowners had no interest in such a deal. And even if they had, just where would the 40-acre farms be located? In the South? Or in the western territories, where abolitionist sentiment was often mixed with racist animus — a sentiment, that is, in favor of excluding black people, whether slave or free?

Throughout Goldfield’s book, one sees the present peeping through the past. In his allergy to the infusion of religion into politics, and his regret over the failure of government to achieve compromise, he sometimes seems to be writing as much about our own time as about time past. Yet even looking through his eyes, one finds it hard to imagine that the post-Civil War constitutional amendments by which black citizenship rights were advanced could ever have been ratified if the slave states had remained in the Union. The “secession war,” as Walt Whitman called it, would seem to have been a necessary prelude to the process of securing black equality — a process still unfinished today.

Despite its implausibilities, Goldfield’s thought experiment in alternative history is provocative in the best sense. Most history books try to explain the past. The exceptional ones, of which “America Aflame” is a distinguished example, remind us that the past is ultimately as inscrutable as the future.

Friday, March 25, 2011

3-D Mythology

by Alva Noe from NPR Facebook

March 25, 2011


A few months ago I wrote an essay on this blog about why 3-D movies were so bad. Some readers doubted my premise — that 3-D movies are so bad — and one reader wrote to me that he thought I was a crotchety old relic (which I'm not).

But not everyone was negative. One man, a top executive at one of the leading 3-D conversion companies in Hollywood, sent me a note expressing his complete endorsement of everything I had written.

"The bottom line," he wrote, "is those naysayers that have posted comments and accused you of being too old, or ignoring progress, are themselves cloaked in ignorance of how '3-D' is not, and never will be a representation of how we see things in real life. It is in fact a gimmick, a visual slight of hand that tends to distract from the story."

For obvious reasons, he asked me to keep his identity a secret.

The introduction of 3-D technology can't be compared to that of sound, or color, or even stereo, as people like to do. And for a simple reason. We use these technologies to show more, to extend what can be depicted. These technologies enable us to increase the amount of information we can represent or put to work in film. And this is the stuff of story-telling.


Recall Marlon Brando's famous line, as Terry Malloy, in On the Waterfront, "I could have been a contendah!" You recall his facial expression, posture and movements, the line itself, the feeling with which it is delivered, but you also recall Brando's voice. You need sound to display the voice; you need sound for voice to be one of the elements in the composition making up the whole. Color similarly extends the working palette of the director and so extends what can be presented to an audience.

We do not similarly extend the informational content of a movie when we add 3-D spatial effects. And for the simple reason that regular film already allows us to see the spatial relations between the actors and objects that make up the scene; 3-D doesn't change the palette.

Consider: Right now I can see that my coffee table is nearer to me than the dining room table. And I can see that the window is off to the right. I can also see that the window is smaller than the doorway beside it. That is, I can visually experience the three-dimensional spatial relations among the things around me.

I can also put these spatial relations in words. That's what I did in the previous paragraph. I used words to capture spatial relations such as near/far, left/right, above/below, bigger/smaller, and so on. And I can also depict spatial relations such as these. It is possible to make drawings, paintings, or photographs, in which the spatial relations of depicted elements can be readily perceived.

None of this comes for free, by the way. Just as we need to learn the language and logic of spatial relations to describe them adequately in words, so we need to learn the methods of artificial perspective to make drawings that adequately depict spatiality. Similar issues confront the film maker; spatial and temporal coherence and continuity are the stuff of craft.

We get a better sense of what 3-D is by comparing its introduction not to that of sound and color, but rather to that of monster-sized buckets of popcorn and soda, or with big reclining oversized theater seats. People love popcorn and business-class seats at the movies. These greatly enhance, or at least alter, the movie-going experience. But neither has anything to do with film. And so it is with 3-D. It makes a qualitative change in the movie-going experience, no doubt. But one that has about as much to do with the movie as the seat you are sitting on or the candy you are eating. It's a gimmick. A special effect.

And boy, there is an effect. No doubt about it. Just how should we describe this effect? What sort of effect is it? We've already appreciated that it has nothing to do with the representation of spatial relations. 3-D does not stand to film as artificial perspective stands to painting. So what's going on?

What is sometimes claimed is that 3-D gives you a greater sense of really being there, of immersion in the scene. But this is obviously not true. Remember, in normal life we don't usually experience the world the way we experience a 3-D movie. When was the last time you ducked and exclaimed "whoa!" when someone walked past you? When was the last time you felt a sense of dizzying motion when you looked around? If 3-D were really an immersion experience, then it would be an experience of the normal, of the humdrum, of our familiar bodily location in the world where we find ourselves. But that is decidedly not what 3-D is like. 3-D is thrilling, surprising, and slightly upsetting (in a thrilling, surprising kind of way).

What 3-D movies deliver is stereoscopic illusion — they manipulate where you focus and create a bizarre sense of pop-out and floating. They don't change the spatial relations you see; they change what it is like for you to experience those relations. They make them feel bizarre and they give you a thrill. For this reason, 3-D is not a step in the direction of virtual reality.

Children love a feeling of bizarre pop-out and floating. Movie lovers shouldn't.

More Montaigne (2)

For a Little Room Behind the Shop

Ian Brunskill

How to Live: A Life of Montaigne in One Question and Twenty Attempts at An Answer
by Sarah Bakewell
Chatto & Windus, 2010, 387 pp., $26.88

Can a retired 16th-century French provincial magistrate teach us how to live today? Sarah Bakewell’s engaging and idiosyncratic biography of the great essayist Michel de Montaigne suggests that the answer, in some quite subtle and interesting ways, is that he can. To judge by the enthusiastic reviews and healthy sales for Bakewell’s book since it was published in Britain early last year and this past October in the United States, many critics and readers would seem to agree.

The success of Bakewell’s How to Live: A Life of Montaigne in One Question and Twenty Attempts at an Answer is perhaps less surprising than it initially appears. It’s not hard to see how a writer whose main subject was himself might appeal to an age as marked by individual self-absorption as our own. Modern Western readers, apparently torn (or lurching endlessly back and forth) between crippling self-doubt and exaggerated self-belief, display an insatiable appetite for anything promoting what has come to be thought of as self-help. That explains, I suppose, why my own paper, the London Times, ushered in 2011 with a two-part special features section offering readers “quick boosts” for minds and bodies supposedly worn out by another year of just being alive. “New Year, New You”, it optimistically proclaimed.

I’m not sure Montaigne would entirely have grasped this sort of thing. The refreshed mind part, perhaps; the body boost I suspect not, especially not in some of its more elaborate contemporary forms. Bakewell, a British-Australian Jill of many trades turned serious writer, instructs us further that even some of Montaigne’s contemporaries and subsequent admirers were unsettled by his frank interest in his own bodily frailties and appetites. He wrote about things that many other writers preferred not to mention, so much so that Ralph Waldo Emerson, for one, while acknowledging his enthusiasm for Montaigne, nevertheless felt obliged to concede somewhat apologetically that “his French freedom runs into grossness.” I suspect, however, that Emerson’s gross 16th-century Frenchman would have found some of the early January delights on offer in the 21st-century Times (“threading and tweezing” for “perfectly groomed eyebrows . . . with minimum pain and fuss”) no less exotic than he famously found human cannibalism, if rather harder to understand.

Bakewell, cleverly, has nonetheless managed to tap into the booming modern market for such “quick boosts” of wisdom (not all of them by any means as harmless as tips on eyebrow shaping), while actually writing a serious biography of a serious thinker from an age less like our own than we might solipsistically think. She’s not the first to take on such a task, of course. Superior literary lessons for life have become an established sub-genre of the self-help boom: How to Win Friends and Influence Readers of the Paris Review. Thus books such as Alain de Botton’s How Proust Can Change Your Life or John Armstrong’s Love, Life, Goethe have explored this territory in their different ways. Bakewell’s life of Montaigne combines some of the merits of de Botton’s knowing, entertaining intellectual squib and Armstrong’s thorough and absorbing biographical study. If her work enjoys a popular resonance greater than theirs—and I think it may—it’s most likely a tribute to its subject, Montaigne.

Bakewell had it easy compared to de Botton and Armstrong. It took considerable wit and ingenuity (which de Botton has) to draw lessons in self-help from the work of Proust. Proust, after all, spent most of the last 14 years of his life in bed, and his forbidding, irreducible masterpiece, À la recherche du temps perdu, is of a length that might be expected to restrict its readership, in the unsympathetic words of Proust’s rather more robust brother Robert, to people who are either “very ill or have broken a leg.” Goethe, for all Armstrong’s assiduous and engrossing efforts, is a no less dauntingly improbable exemplar than Proust, albeit for almost opposite reasons: Far from being a reclusive or delicate figure, he engaged with the literary, political, philosophical, artistic, theatrical, legal and scientific worlds of his day with a polymathic energy and practical directness that few of his readers, then or now, could hope to match. He even built roads, managed forests and inspected mines. To his contemporaries, he was in his later years an impressively heroic figure, a kind of living national monument.

Not so Montaigne. Or not quite. He was a bestselling author in his lifetime, something like a celebrity, and he has been widely read and greatly admired pretty much ever since. But his enduring appeal rests on his remarkable ordinariness. Or so Montaigne himself might have us believe. Central to Montaigne’s life and writing was his insistence, as Bakewell reminds us, that “every man embodies the whole pattern of the human condition.” So in choosing to “set forth” his own “humble and inglorious life” he was not driven by vanity. Rather, he knew that “you can tie up all moral philosophy with a common and private life just as well as with a life of richer stuff.”

That more or less sums up his approach. In the three volumes of Essais which he wrote and rewrote between about 1572, when he was not quite forty, and his death twenty years later, he tackled subjects as diverse as death, friendship, cruelty, names, smells, coaches, thumbs and, of course, cannibalism. The matter of these essays—he intended the term in the sense of attempts or exercises, and may be said more or less to have invented the genre as he wrote some several decades before Francis Bacon—is often remote from the titles he gave them. In almost all of them he ranges far and wide from his starting point, digressing at will, often ending up in the most surprising places. The essay “Of Vanity”, for example, takes on household management and domestic building works, astrology, the pleasures and hazards of travel and the disadvantages of umbrellas. An essay “Upon Some Verses of Virgil” starts with musings on old age and considers the place of women in the world, but turns out to be mostly about sex. “I ramble indiscreetly and tumultuously”, Montaigne wrote, “my stile and my wit wander at the same rate.” Throughout he manages, not entirely without art, to give the impression of being ready to commit to paper his every thought as it occurs to him, however trivial, undignified or confused it may be, as if he wants to capture the very process of thinking itself.

In all this his purpose was not to reach definitive conclusions about the subjects under discussion, nor to solve or even rigorously to consider the moral or philosophical problems that might seem to be at issue in each case. He was skeptical about the possibility of doing any such thing, fascinated as he was by the limits of human knowledge and reasoning. “Que sais-je?” was his favourite question—“What do I know?” As he examined all aspects of the world around him, his real focus was elsewhere:


I turn my gaze inward. I fix it there and keep it busy. Everyone looks in front of him; as for me, I look inside of me; I have no business but with myself; I continually observe myself; I take stock of myself, I taste myself. . . . I roll about in myself.
Which brings us back, or at least sends us toward, our own day, does it not? Proust once observed how in a great work of literature we may be “delighted to find . . . those reflections of ours that we despised, joys and sorrows which we had repressed, a whole world of feeling we had scorned and whose value the book in which we discover them suddenly teaches us.” It’s a response that Montaigne, more than most, has prompted in his readers down the years. In the 17th century, for instance, Blaise Pascal, who disagreed with Montaigne and rather disapproved of him but who grappled obsessively with his work, wrote of the Essais, “It is not in Montaigne but in myself that I find everything I see there”—rather proving Montaigne’s method if not his point, whether deliberately or not it’s still hard to say. Two centuries later, Emerson concurred: “It seemed to me as if I had myself written the book, in some former life.”

Bakewell writes vividly of the ways in which each age has created its own Montaigne for, as she implies, every age seems to need one. The writer’s contemporaries tended to view his digressions and his personal frankness as mildly baffling lapses of concentration or taste and responded instead primarily to his wisdom, his exploration and distillation of Hellenistic philosophy, the way he used the lessons of the Stoics, Epicureans and Skeptics to illuminate the events and problems of his own day. Progressive 18th-century thinkers such as Rousseau and Voltaire welcomed his liberal views on education, his ethnographical interest in other cultures, his tolerance of other points of view. The 19th-century Romantics created a personality cult around him, making pilgrimages to the tower on his Bordeaux estate where he retired to write, seizing above all on his profound devotion to his best friend, the jurist and poet Étienne de La Boétie, and his passionate, grieving response to La Boétie’s early death. Nearer our own time, Virginia Woolf took heart (and novelistic inspiration) from the honesty of Montaigne’s personal self-scrutiny, his openness to new experience, his unceasing determination to write everything down.

And today? Bakewell presents a thoroughly contemporary Montaigne, undogmatically liberal in his moral and social views, radically modern (even postmodern) in his freewheeling approach to the writer’s art. But I think she knows that Montaigne was in some crucial ways rather less like us than all this might suggest.

The superficial similarities are certainly striking. His avowed interest in every aspect of his own life and character and their frank revelation in prose of sometimes improvisatory immediacy have (to Bakewell and others) suggested affinities with the world of blogs and social media today. It would be wrong, however, to push this too far. Montaigne’s literal self-centeredness has more in common with the self-portraits of the Renaissance painters who created the form (one element in an evolving complex of ideas about Man and his place in the universe), than with the compulsive exhibitionism of today’s Facebook or Twitter users. For Montaigne it’s a matter not of self-display to the world, but of self-discovery in the world and through engagement with it. Writing in the way he does is essential to that process, as he quietly contemplates the workings of his own mind. He has none of the blogger’s fear of silence or the desperate modern need to connect and communicate.

He enjoyed his own company, it is true. As a civil and civilized man, he hoped his readers might enjoy it too. But he wouldn’t depend on that or on them. After the early loss of a dear friend, and the deaths of most of his children in infancy, dependency of any kind held little appeal. If only by way of self-preservation, he thought, every man should create for himself une arrière-boutique, a little room all his own behind the shop. That’s what he did.

Seen from this perspective, his decision in his late thirties, after a near-death experience in a riding accident, to give up his administrative judicial duties in Bordeaux and withdraw to his country estate, to devote his remaining years to reading, writing, thinking and “rolling about in” himself, looks almost inevitable. The image of the gentleman scholar in his study was almost a Renaissance archetype, an ideal sometimes owing as much to notions of social distinction as to a passion for intellectual pursuits. In Montaigne’s case, however, it became—and remains—an exemplary, inspirational individual response to life in a troubled world.

Was Montaigne Happy?
And troubled his world most certainly was. Montaigne lived through a period when religious conflict between Catholic and Protestant divided France, and he lived in a region where that murderous conflict was always particularly likely to become acute. Montaigne was not untouched by the turbulence of the times. Nor, even in his elective retirement, did he refuse entirely to become involved in the public realm. Himself a practicing Catholic who saw no great need to apply his generally questioning turn of mind to matters of personal faith, he served two terms as mayor of Bordeaux, a task calling at that time for considerable talents as a religious and intercommunal peacemaker. His diplomatic skills were further evident in his role in the delicate behind-the-scenes manoeuvres that helped eventually to bring about the reign of Henri IV. Yet he makes relatively little reference in his essays to the momentous events of the day, let alone to his own part in them.

To an extent, it was probably his carefully cultivated detachment, as well as his rare openness of mind, that kept him safe even as France’s religious wars raged more or less literally outside his door. As Bakewell understands, but perhaps does not emphasize enough, that same detachment is central to any effort to claim him as a man of our time no less than of his own.

Armstrong’s book on Goethe, in its British edition, was subtitled “How to be happy in an imperfect world.” Bakewell might easily have used a similar phrase to sum up what she thinks we can learn from the life and work of Montaigne. That search for happiness puts her 16th-century essayist right back at the core of an odd and intriguing 21st-century debate—a debate by no means confined to the private sphere that all those newspaper features sections and self-help books address.

For a variety of reasons, some obvious, some less so, the study of happiness seems in the past few years to have become a staple of political debate, in both its practical and its philosophical aspects. Thomas Jefferson may have made its pursuit one of the inalienable fundamentals of American independence, but governments in the United States and elsewhere have not, on the whole, taken too much interest in the precise forms their citizens’ happiness might take. Until recently, that is.

A couple of years ago in these pages (July/August 2008) Kenneth Minogue pondered this phenomenon, reviewing a raft of books from the burgeoning field of “happiness studies.” The problem with them all, he concluded, was that their attempts to pin down how happy people might be, and to suggest ways in which such research might come to inform or influence the decisions of policymakers in the public sphere, were always likely to founder because “happiness is so elusive an idea. Even conceptually it is hard to bring into a focus.”

Of course Minogue was right, but that doesn’t seem to have stopped politicians and political thinkers wanting to try. Former Harvard president Derek Bok last year produced a study of The Politics of Happiness: What Government Can Learn from the New Research on Well-Being. Observing that increased prosperity has not necessarily brought increased happiness, he suggests that public officials might usefully revise some of their priorities. So far so good, though as soon as Bok moves on to specific proposals he runs into the difficulty highlighted by Minogue. Bok’s suggestions for promoting greater equality, better healthcare and so on would bring a warm feeling to any mildly paternalistic liberal, but I suspect that some of them might make at least a few readers of The American Interest very unhappy indeed.

Some public officials have nonetheless refused to be deterred. The tiny Himalayan kingdom of Bhutan famously makes policy on the basis of Gross National Happiness, a notion the kingdom’s former ruler formulated as long ago as 1972. (Some 27 years later, he lifted the national ban on watching television, which perhaps suggests that he was ultimately prepared to let his people become as dulled and troubled as everyone else.) Closer to my home, Tony Blair in the early years of his premiership liked to talk about the “quality of life” and to remind British voters that “money isn’t everything.” His young Conservative successor David Cameron, an enthusiast for such ideas when in Opposition, has gone further since taking office last year. In November he announced a plan to monitor British happiness. Quoting Robert Kennedy’s suggestion that conventional indicators of economic performance “measure everything . . . except that which makes life worthwhile”, he asked the nation’s Office of National Statistics to gather information on how people felt about all kinds of things, from their personal relationships to the state of the environment.

Closer to home for Montaigne, even as unsentimental a politician as President Nicolas Sarkozy has been thinking along similar lines. In 2009, he appointed a commission of twenty economists, among them Joseph Stiglitz and Amartya Sen, to come up with a broader index of Gallic well-being than GDP. This, however, did not stop the French daily Le Parisien from beginning 2011 with a survey of international optimism that placed France firmly at the bottom of the global contentment league as the world’s gloomiest nation. (How the French could possibly have bested the Hungarians is hard to understand.)

It’s not difficult to see why politicians might want to shift the focus away from economic growth at a time when achieving high scores in that conventional measure of national happiness seems so elusive. There may well be some interesting long-term ramifications in such areas as education, healthcare, environmental policy, perhaps even in approaches to taxation. But the central problem of classical utilitarianism—happiness for whom, exactly?—will probably continue to bedevil any attempt to legislate on the basis of broader notions of well-being.

Where does Montaigne come into this? Well, as Minogue pointed out, the interesting thing about Jefferson’s formulation was that it was not quite as simple as it sounds. It involves “a logical trick”, in that


pursuing happiness is not like pursuing women, or works of art, or causes to embrace. It is, instead, a formal word referring to a non-pursuable characteristic of the satisfactions we find in achieving success in any of the very many projects by which we try to fulfil ourselves. The trick lies in the fact that we achieve it only by turning our gaze away from happiness itself, and concentrating on some concrete particular.
It’s a trick that Montaigne pulled off, again and again. The happiness he pursued was not the personal pleasure of utilitarian thought, let alone the “quick boosts” and easy (if esoteric) gratifications of modern self-help. His goal, as Bakewell reminds us, was the eudaimonia of Greek philosophy, an altogether fuller conception of human flourishing and joy. And he attained it by not seeking it. He focused, to borrow Minogue’s phrase, not on happiness itself but on concrete particulars, bringing to their contemplation what Bakewell describes as another “little trick” taken from the Greeks: ataraxia, which might be rendered equanimity or imperturbability. The result could be described in Montaigne’s case as a productively detached kind of engagement with life.


Are there really any lessons for us here? Montaigne would have said not. “Je n’enseigne point”, he wrote. “Je raconte.” (“I do not teach; I tell.”) He himself succeeded in carrying his thinking, his pursuit of happiness, over into the public sphere in ways that might be difficult to translate into 21st-century Western society.

There may be a lesson, nonetheless. At the very least, Montaigne’s example offers a valuable counterpoint to a media-driven, mediated modern culture that blurs the distinctions between public and private spaces, and public and private selves, and in which constant communication seems sometimes to mean no more than unceasing noise. Montaigne was happy in a way that no blogger ever could be. There is, in the end, something to be said for the little room behind the shop.

Obama Like Ike?

Yes, Obama leads the country like Eisenhower.



POLITICAL CONNECTIONS
Like Ike
Barack Obama consistently takes an offstage approach to presidential leadership. Is it serving him well?
Thursday, March 24, 2011 | 2:51 p.m.

[Photo: President Obama and former President Dwight D. Eisenhower]

In 2008, many of Barack Obama’s supporters thought they might be electing another John F. Kennedy. But his recent maneuvers increasingly suggest that they selected another Dwight Eisenhower.

That’s not a comment on President Obama’s effectiveness or ideology, but rather on his conception of presidential leadership. Whether he is confronting the turmoil reshaping the Middle East or the escalating budget wars in Washington, Obama most often uses a common set of strategies to pursue his goals. Those strategies have less in common with Kennedy’s inspirational, public-oriented leadership than with the muted, indirect, and targeted Eisenhower model that political scientist Fred Greenstein memorably described as a “hidden hand” presidency.

This approach has allowed Obama to achieve many of his domestic and international aims—from passing the health reform legislation that marked its stormy first anniversary this week to encouraging Egypt’s peaceful transfer of power. But, as it did for Eisenhower, this style has exposed Obama to charges of passivity, indecisiveness, and leading from behind. The pattern has left even some of his supporters uncertain whether he is shrewd—or timid.

On most issues, Obama has consciously chosen not to make himself the fulcrum. He has identified broad goals but has generally allowed others to take the public lead, waited until the debate has substantially coalesced, and only then announced a clear, visible stand meant to solidify consensus. He appears to believe he can most often exert maximum leverage toward the end of any process—an implicit rejection of the belief that a president’s greatest asset is his ability to define the choices for the country (and the world).

To the extent that Obama shapes processes along the way, he tends to do so offstage rather than in public. Throughout, he has shown an unswerving resistance to absolutist public pronouncements and grand theories. “The modus operandi is quiet, behind-the-scenes consensus-building, rather than out-front, bold leadership,” said Ken Duberstein, a former chief of staff for Ronald Reagan.

All of these instincts are apparent in Obama’s response to the Middle East tumult. He has approached each uprising as a blank slate that demands new assessments and recalibrated policies: Even in the deserts of the Middle East, he resists drawing lines in the sand. Bahrain, an ally, receives quiet exhortation. In Libya, Obama speaks with cruise missiles. “Each of these cases presents a different set of circumstances,” a senior national-security official insists. “The distinction between this and the previous administration is, we’re not trying to sweep this all into one grand, oversimplified theory … without understanding the context.”

A common thread throughout Obama’s responses has been his belief that the U.S. image across the region is so toxic that it could undermine the change it seeks by embracing it too closely. “We can’t have this be our agenda,” the senior official says. In Egypt, Obama deferred to local protesters; in Libya, he allowed France and England to drive the international debate toward military intervention—and only publicly joined them once the Arab League had signed on.

By stepping back, Obama has effectively denied the region’s autocrats the opportunity to discredit indigenous demands for change as a U.S. plot. But this strategy has led to delay, mixed messages, and his unilateral renunciation of the weapon of ringing rhetorical inspiration: There’s been no Kennedyesque “Ich bin ein Berliner” moment for Obama.

The president has shown similar instincts on domestic issues, especially since Republicans captured the House. On health care reform last year, he prodded the process but mostly let Democratic congressional leaders direct the internal party negotiations. Today, Obama has remained aloof from a bipartisan Senate group laboring to convert the recommendations of his deficit-reduction commission into legislation. Many around that group (including commission Cochairmen Alan Simpson and Erskine Bowles) believe that the president may still endorse the effort, but only if it first garners broad bipartisan support.

Obama’s case for delaying intervention into the deficit discussion parallels the administration’s logic about the Middle East strategy: Because the domestic debate is so polarized, Republicans might feel compelled to oppose the Simpson-Bowles plan if Obama preemptively adopted it. By reducing his profile upfront, he can broaden his coalition in the end.

That logic is probably right but hardly cost-free. This week, a large bipartisan Senate group warned the president that no deficit agreement may get far enough for him to bless unless he moves more aggressively to build public support for action. Even if a plan emerges, by delaying his involvement, Obama risks being forced to choose among options largely defined by others.

In foreign policy as well, the most pointed criticism of Obama’s style is that it leaves him reacting to events rather than shaping them—and, frequently, reacting only after costly hesitation. The president’s approach carries another big cost: His desire to maintain flexibility for private deal-making often dulls his ability to mobilize popular support by drawing clear contrasts. (See: health care.)

Yet at home and abroad, Obama consistently achieves many of his goals. (See: again, health care.) Can a “hidden hand” presidency thrive in the 24/7 information maelstrom? Obama is testing the proposition.

Thursday, March 24, 2011

Michael Connelly - The Lincoln Lawyer (2)

I finished this book. Now I am ready to see the movie this weekend.

This book introduces Mickey Haller, defense lawyer. The plot is less interesting than the character and the way the book presents the world of the criminal defense lawyer. If the novel's portrayal is accurate, the criminal defense lawyer can be a lone ranger, fighting for the underdog, rah, rah, rah. Sure, it's fiction and Pollyannaish, but I like it.

Prosecutors hate defense lawyers. I can believe that. Defense lawyers look for clients with big bank accounts. Of course. And defense lawyers are an integral part of our legal system. Without a doubt.

Connelly has created a most interesting fictional character in Mickey Haller.

Sunday, March 20, 2011

Jane Leavy - The Last Boy (4)

Mickey Mantle blew out his knee in that 1951 World Series. This was before medicine could do anything about a torn ACL. It's amazing that Mantle performed at the level he did given the injuries and pain he had to overcome.

How did Mantle play with a torn ACL? He played two full seasons with a torn ACL, not having any kind of surgery until 1953. He was bodily put together in such a way that he was that rare athlete who could perform at his best despite such a devastating injury. He had an amazing capacity to overcome pain and an extraordinary pain threshold.

As much as he was anything, Mickey Mantle was an athletic freak of nature.

Saturday, March 19, 2011

Jane Leavy - The Last Boy (3)

"In short, Mantle's strength was his weakness. He tore himself apart. This flawed medical knowledge would become an essential element of Mantle mythology. But it would not survive the test of time and medical scrutiny. Mantle wasn't brought down by the way he was built or his cavalier attitude toward off-season conditioning and rehabilitation. It was this simple: given the existing state of sports medicine, nothing could have presented the deterioration of his right knee (hurt in the 1951 World Series), short of another line of work."

Jane Leavy, The Last Boy, p. 107.

Tuesday, March 15, 2011

The Ahistorical Tea Party

The Ahistorical Tea Party
Jonathan Chait


Ed Glaeser urges the Tea Party to return to its urban roots by adopting urban-centric policies:

The original Tea Party was a child of the city. Urban interactions in 1770s Boston helped create a revolution and a great nation.
The current Tea Party could return to its urban roots if it stands up against subsidies for home borrowing and highways and if it encourages competition in urban schools.
Aside from the obvious fact that this will never happen -- Tea Parties represent the self-interest and cultural assertion of a certain segment of America, not a principled libertarianism -- this gets the history wrong. Gordon Wood's review of Pauline Maier's account of the Constitution from last December provocatively, and persuasively, argues that the Tea Party descends from the anti-Federalists. The whole review essay is a must read, but here is the nub:

Many of the critics were localists who feared that the Constitution would create the very kind of far-removed and powerful central government that they had just thrown off. They worried also that elections were too infrequent, especially for the Senate. They thought that representation in the House, sixty-five members for four million people, was inadequate. The president was too monarch-like, and the Constitution lacked a provision for jury trials in civil cases. Some were frightened by the distant ten-mile square that was to become the capital. The federal government would become a consolidation run by an aristocracy, “lordly and high-minded men” contemptuous of the common people.

Like the present-day Tea Partiers, mistrust of politicians ran through all their speeches and writings. Since no one should be counted on to exercise political power fairly, many critics proposed term limits. In framing a new government, the Anti-Federalists declared, “it is our duty rather to indulge a jealousy of the human character, than an expectation of unprecedented perfection.” One Massachusetts delegate said that “he would not trust ‘a flock of Moseses’” with political power or with too much revenue. “As the poverty of individuals prevents luxury,” one Anti-Federalist observed, “so the poverty of publick bodies ... prevents tyranny.” Above all, the opponents wanted a bill of rights to limit the government and protect their rights.

Although there were men of wealth and education on both sides of the debate, there is little doubt that many of the opponents of the Constitution were plain middling men, farmers and small-time merchants, men such as the Scotch-Irish backcountry Pennsylvanian William Findley or the New York petty merchant Melancton Smith, who had no college education and resented the arrogance of the Federalists. Findley had his small victory in the Pennsylvania ratifying convention. When he claimed that Sweden, when it lost its jury trials, lost its freedom, the Federalists, in particular Thomas McKean, the state’s chief justice, and James Wilson, a lawyer and former student at the University of St. Andrews, mocked him and laughingly denied that Sweden had ever had jury trials. When the Pennsylvania convention re-assembled following the Sabbath, Findley produced evidence that there had indeed been jury trials in Sweden, citing especially the third volume of Blackstone’s Commentaries on the Laws of England, every lawyer’s bible. McKean had the good sense to remain quiet, but Wilson could not. “I do not pretend to remember everything I read,” he sneered, adding that “those whose stock of knowledge is limited to a few items may easily remember and refer to them; but many things may be overlooked and forgotten in a magazine of literature.” He ended by claiming to have forgotten more than Findley had ever learned. No wonder the opponents of the Constitution thought that Wilson conceived himself to be “born of a different race from the rest of the sons of men.”

Throughout the debates, many of the Anti-Federalists in many of the states expressed similar social resentments. The critics of the Constitution tried to speak out, but as one Connecticut Anti-Federalist complained, they were “‘browbeaten’ by the self-styled ‘Ciceros’ and men of ‘superior rank, as they called themselves.’” The opponents of the Constitution grumbled that the Federalists, “these lawyers, and men of learning, and monied men ... talk so finely and gloss over matters so smoothly, to make us poor illiterate people swallow down the pill.” They expected to go to Congress, to become the “managers of this Constitution,” and to “get all the power and all the money into their own hands.” Then they would “swallow up all us little folks, like the great Leviathan ... yes, just as the whale swallowed up Jonah.” What was needed in government, said Melancton Smith of New York, was “a sufficient number of the middling class” to offset and control the “few and great.”

Although Maier is too serious a historian—too wedded to the pastness of the past—to attempt to connect these sorts of localist attitudes and social resentments to our own time, what is extraordinary about much of the Anti-Federalist thinking is its similarity to the populist sentiments that we are experiencing today. All of which suggests that the present-day Tea Party movement may not be as novel and strange as some think it is. The great irony, of course, is that the Anti-Federalist ancestors of the Tea Partiers opposed the Constitution rather than revered it.

Sunday, March 13, 2011

Jane Leavy - The Last Boy (2)

Mickey Mantle was the ultimate sports hero of my time. No one came close to him in popular appeal. This biography by Jane Leavy will be definitive for our time.

No one came close to him even though Willie Mays was probably the better all-around player. My conclusion from what I've read recently is that Mantle was better with the bat in his hand, but Mays was the better all-around player because he was certainly better in the field.

In his rookie season, in the World Series of 1951, Mantle stepped into a drain hole in Yankee Stadium going for a ball that Joe DiMaggio ultimately caught, a ball amazingly hit by Willie Mays, and blew out his knee. This injury kept Mantle from being an even greater player than he was.

Mantle ruined his life and that of his family with alcoholism. He achieved a kind of redemption during his last 18 months of sobriety. Yet finally his life can be reduced to one word: TRAGEDY. So sad, for he might have done so much good for the country had he not been drunk all of the time.

Saturday, March 12, 2011

About Bookstores

The end of bookstores.
Nicole Krauss | March 3, 2011 | 12:00 am

A few weeks ago, with a small footnote by way of introduction, The New York Times Book Review published revamped best-seller lists that, for the first time, separately reflect the sale of e-books. The new lists were inevitable—e-books made up about 10 percent of book sales in 2010, and that number is rapidly rising. You had to read between the lines to find the real news, but there it was: To the growing list of things that will be extinct in our children's world, we can now add bookstores. Does it surprise us? Should we care?

There were booksellers in ancient Greece and Rome and the medieval Islamic world, but it was not until after the advent of printing that the modern bookstore was born. In sixteenth-century England, a license from the king was needed to print a book, and those books considered distasteful by the monarchy were suppressed; to trade in such outlawed books was a punishable offense, and yet there were booksellers and readers willing to take the risk, and these books were sold and read. The bookseller, in other words, was, from the beginning, an innately independent figure, in spirit if not by law. As the availability and variety of printed books increased, the bookseller became a curator: one who selects, edits, and presents a collection that reflects his tastes. To walk into a modern-day bookstore is a little bit like studying a single photograph out of the infinite number of photographs that could be taken of the world: It offers the reader a frame. Within that frame, she can decide what she likes and doesn't like, what is for her and not for her. She can browse, selecting this offering and rejecting that, and in this way she can begin to assemble a program of taste and self.

The Internet has co-opted the word “browse” for its own purposes, but it’s worth pointing out the difference between browsing in a virtual realm and browsing in the actual world. Depending on the terms entered, an Internet search engine will usually come up with hundreds, thousands, or millions of hits, which a person can then skate through, clicking when she sees something that most closely echoes her interest. It is a curious quality of the Internet that it can be composed of an unfathomable multitude and, at the same time, almost always deliver to the user the bits that feed her already-held interests and confirm her already-held beliefs. It points to a paradox that is, perhaps, one of the most critical of our time: To have access to everything may be to have nothing in particular.

After all, what good does this access do if we can only find our way back to ourselves, the same selves, the same interests, the same beliefs over and over? Is what we really want to be solidified, or changed? If solidified, then the Internet is well-designed for that need. But, if we wish to be changed, to be challenged and undone, then we need a means of placing ourselves in the path of an accident. For this reason, the plenitude may narrow the mind. Amazon may curate the world for you, but only by sifting through your interests and delivering back to you variations on your well-rehearsed themes: Yes, I do love Handke! Yes, I had been meaning to read that obscure play by Thomas Bernhard! A bookstore, by contrast, asks you to scan the shelves on your way to looking for the thing you had in mind. You go in meaning to buy Hemingway, but you end up with Homer instead. What you think you like or want is not always what you need. A bookstore search inspires serendipity and surprise.

It’s a revealing experiment to put side by side bookstores and the Internet—or even just Google Books, which now offers 15 million of the world’s 130 million unique books. Both the Internet and Google Books strive to assemble the known world. The bookstore, on the other hand, strives to be a microcosm of it, and not just any microcosm but one designed—according to the principles and tastes of a “gatekeeper”—to help us absorb and consider the world itself. That difference is everything. To browse online is to enter into a search that allows one to sail, according to an idiosyncratic route formed out of split-second impulses, across the surface of the world, sometimes stopping to randomly sample the surface, sometimes not. It is only an accelerated form of tourism. To browse in a bookstore, however, is to explore a highly selective and thoughtful collection of the world—thoughtful because hundreds of years of thinkers, writers, critics, teachers, and readers have established the worth of the choices. Their collective wisdom seems superior, for these purposes, to the Web’s “neutrality,” its know-nothing know-everythingness.

The other day I rented Hannah and Her Sisters after having not seen it for many years. I was struck by how many chance meetings and conversations took place in book or record stores. No one would ever turn to Woody Allen for a realistic portrayal of New York, but, all the same, it surprised me that back then there were so many book and record stores to film in. They used to say about New York that, if you get rid of the drug dealers, people will stop using. It makes one wonder if the phrase carries: Get rid of the bookstores, and people will stop reading. Recently, the largest independent bookstore in the country, Powell’s, laid off 31 employees. In a release, the company cited the changes that technology has forced on the book industry. And Borders—which was once a predator upon other, smaller bookshops—has filed for bankruptcy after seeing sales fall by double-digit percentages in 2008, 2009, and in each quarter in 2010.

There are many reasons for the decline of bookstores. Blame the business model of superstores, blame Amazon, blame the shrinking of leisure time, blame a digital age that offers so many bright, quick things, which have crippled our capacity for sustained concentration. You can even blame writers, if you want, because you think they no longer produce anything vital to the culture or worth reading. Whatever the case, it is an historical fact that the decline of the bookstore and the rise of the Internet happened simultaneously; one model of the order and presentation of knowledge was toppled and superseded by another. For bookstores, e-books are only the final nail in the coffin.

Or are they? Or rather, should we let them be? What, really, is the difference if we can still download all those books we once might have bought in a real-world store? Presumably, once bookstores are gone—and the professional literary critic (those widely and deeply read Edmund Wilsons, Alfred Kazins, and Susan Sontags who once told us not just what books to read but also taught us how to read), no longer considered valuable enough to seriously employ, is completely dead—other “content curators” will rise up in their place to build a bridge between the reader and the books that were meant for her. What’s so terrible about that?

But here we run into a strange misapprehension about digital culture and commerce. The accepted notion that the Internet, as an open-access forum, has disseminated power and influence and opened the door to seemingly endless variety is true in many instances, but not always. If creative copyright laws remain in place, most books will continue to be available only at a price. (If such laws don’t remain in place, most of the future’s great books will not get written.) Merchants will still control the sale of books, and, at least for the foreseeable future, those merchants will be a handful of corporations like Apple, whose “curatorial” decisions are based solely on profitability, their selections determined by the best-seller list, which is itself determined by corporations like Apple, so that the whole thing takes on the form of a snake eating itself.

Looking over these newly expanded and increasingly desperate best-seller lists—to hardcover (fiction and nonfiction), paperback (trade fiction, mass-market fiction, and nonfiction), and a number of other specialized lists (advice, how-to, and miscellaneous; children’s—further divided into picture books, chapter books, paperbacks, and series) have now been added a list of e-book sales, for a total of six whole pages of print—one can’t help but wonder why. Why do we care about best-sellers? Why does The New York Times Book Review, one of the last book-review sections of a national newspaper left in this country, dedicate six pages that might otherwise be given over to reflection on books to their commercial ranking instead?

If between the lines of those new best-seller lists is an obituary for bookstores, there is also one for The New York Times Book Review itself: Soon all that might be left of it is a bundle of best-seller lists. It is not the notion of a best-seller list that rankles: Commerce is a part of literary life, and the commercial distinction of a serious book—not everything that sells well is dross—lifts the spirits and the bottom lines of publishers and writers. But six pages of Dow Jones-like charts? Why this obsession with the money side, even while everyone agrees that salability has little relationship to quality? The independent spirit of the bookstore is, at its best, a much-needed bulwark against this obsession.

Yes, the technology is real, and, yes, e-books will exist—but why to the exclusion of books and bookstores? Is convenience really the highest American value? When you download an e-book, it is worth stopping to consider what you are choosing, why, and what your choice means. If enough people stop taking their business to bookstores, bookstores—all bookstores—will close. And that, in turn, will threaten a set of values that has been with us for as long as we have had books.

More Montaigne

Montaigne’s Moment
By ANTHONY GOTTLIEB
Published: March 10, 2011

Anyone who sets out to write an essay — for a school or college class, a magazine or even the book review section of a newspaper — owes something to Michel de Montaigne, though perhaps not much. Montaigne was a magistrate and landowner near Bordeaux who retired temporarily from public life in 1570 to spend more time with his library and to make a modest memento of his mind. He called his literary project “Essais,” meaning “attempts” or “trials,” and the term caught on in English after Francis Bacon, the British philosopher and statesman, used it for his own collection of short pieces in 1597.

Dr. Johnson’s dictionary defined an essay as “a loose sally of the mind; an irregular indigested piece.” Bacon’s compositions tend to drive at a single conclusion, but Johnson’s “sally” is a nice fit for Montaigne’s meandering collection of thoughts, and those of his more whimsical descendants. Only a very brave or foolish exam candidate today would try to copy Montaigne instead of Bacon. The art of digression reached breathless heights in Samuel Butler’s 1890 essay “Ramblings in Cheapside,” which traverses turtle shells, the relation of eater to eaten, likenesses between common tradesmen and famous portraits, the worthlessness of most classical literature, the politics of parrots and the practical wisdom of slugs. Ramblings indeed. Where was I? Oh, yes: Montaigne.

Oddly, Montaigne learned to speak Latin before he learned to speak anything else, thanks to his father’s strict ideas about schooling. But he chose to write in French, which he expected would change beyond recognition within 50 years, rather than a more “durable” tongue. This is because the book was intended only “for a few men and for a few years.” Well, that plan backfired. Not only is “Essais” still in print, in many languages, more than 400 years later, it is also now extolled as a source of wisdom for the contemporary world — or at least the English-speaking part of it. (The French may have had enough of him.) Last year, to great acclaim, Sarah Bakewell, a British biographer and archivist, published HOW TO LIVE; Or, A Life of Montaigne in One Question and Twenty Attempts at an Answer (Other Press, $25). And this year we already have two new books covering similar ground: WHEN I AM PLAYING WITH MY CAT, HOW DO I KNOW THAT SHE IS NOT PLAYING WITH ME? Montaigne and Being in Touch With Life (Pantheon, $26), by Saul Frampton, a British lecturer; and WHAT DO I KNOW? What Montaigne Might Have Made of the Modern World (Beautiful Books, £14.99), by Paul Kent, a British radio producer.

It’s been said — by Bakewell, with reservations, and others — that Montaigne was the first blogger. His favorite subject, as he often remarked, was himself (“I would rather be an expert on me than on Cicero”), and he meant to leave nothing out (“I am loath even to have thoughts which I cannot publish”). Some of his critics accused him of, in effect, oversharing, in the manner of a narcissistic Facebook status update. One was appalled that he should think it worthwhile to tell his readers which sort of wine he preferred. Montaigne also happened to mention that his penis was small. Two 17th-century theologians who were instrumental in getting his “Essais” placed on the Vatican’s index of prohibited books, where it stayed from 1676 to 1854, accused him of “a ridiculous vanity” and of showing too little shame for his vices.

In the eyes of Rousseau, Montaigne had shared too little, not too much: he was not truthful about himself. But this charge reflects the fact that Rousseau was unwilling to allow that there had been any accurate self-portraits in words before his own “Confessions.” It was Montaigne, though, who was the real pioneer. The famous autobiographies of late antiquity and the Middle Ages — St. Augustine’s “Confessions” and Abelard’s “History of My Misfortunes” — bared all in order to help other sinners save their souls; unlike Montaigne’s “Essais,” they were professedly intended for sober religious purposes. And Renaissance autobiographies, like those of the artist Benvenuto Cellini or the mathematician and gambler Girolamo Cardano, tended to be published only posthumously.

Somewhat like a link-infested blog post, Montaigne’s writing is dripping with quotations, and can sometimes read almost as an anthology. His “links” are mainly classical, most often to Plato, Cicero and Seneca. Modern readers may find all these insertions distracting — there is, as it were, too much to click on — but some may be thankful for a fragmentary yet mostly reliable classical education on the cheap. (Montaigne should not, however, have credited Aristotle with the maxim, “A man . . . should touch his wife prudently and soberly, lest if he caresses her too lasciviously the pleasure should transport her outside the bounds of reason.” The real source of this unromantic advice is unknown.)

Bakewell, Frampton and Kent all stress that the distinctive mark of Montaigne is his intellectual humility. Like Socrates, Montaigne claims that what he knows best is the fact that he does not know anything much. To undermine common beliefs and attitudes, Montaigne draws on tales of other times and places, on his own observations and on a barrage of arguments in the ancient Pyrrhonian skeptical tradition, which encouraged the suspension of judgment as a middle way between dogmatic assertion and equally dogmatic denial. Montaigne does often state his considered view, but rarely without suggesting, explicitly or otherwise, that maybe he is wrong. In this regard, his writing is far removed from that of the most popular bloggers and columnists, who are usually sure that they are right.

For Bakewell, it is Montaigne’s sense of moderation in politics and his caution in judgment from which the 21st century has most to learn. For Frampton, whose style is more academic than Bakewell’s (and not always in a good way), one of Montaigne’s most valuable insights is that self-knowledge is connected with the knowledge of others, and that empathy is the heart of morality. Kent concludes that the wisdom of Montaigne is a wisdom for Everyman, and that the “Essais” are a tool for thinking that anyone may use.

Maybe it is in pursuit of such egalitarianism that Kent’s prose works tirelessly to evoke the beery eructations of a British lout. To put such a slurry of slang, cliché and swearing in print comes across as artifice. Littered among the arch spellings, mangled names and frail grammar are slapdash attempts at iconoclasm: Proust is “onanistic tosh,” both jazz and opera are ridiculous. At a stretch, one can perhaps see Kent’s curious experiment as being in the candid spirit of Montaigne. But this contemporary version of uninhibited writing shows the limitations of the genre rather than its potential.

Montaigne can evidently still inspire strong affection in authors after nearly half a millennium. So artful is Bakewell’s account of him that even skeptical readers may well come to share her admiration. But it’s not so clear that Montaigne’s often chaotic essays are all that digestible today unless one has a good guide to his life and context, like Bakewell’s or Frampton’s, close to hand. At the end of the “Essais,” Montaigne complained that “there are more books on books than on any other subject: all we do is gloss each other.” One wonders what he would make of his own inadvertent contribution to this state of affairs.

Thursday, March 10, 2011

On "The Age of Fracture"

by Alan Wolfe

March 10, 2011
The Age of Fracture
by Daniel T. Rodgers
Belknap Press, 346 pp., $29.95

I live in a different country than the one into which I was born in 1942. I have never been quite able to pinpoint exactly what makes it so different. More than any other book I’ve read in recent years, Age of Fracture, by the Princeton historian Daniel T. Rodgers, has helped me to discover and to understand that difference.

One explanation for what happened holds that in the intervening years—the second half of the twentieth century—the United States shifted from the big government liberalism of the Democrats to the laissez-faire nostrums of the Republicans. There is an obvious truth to such a view, but the problem with this account, which fits so nicely into Arthur Schlesinger, Jr.’s cyclical interpretation of American experience, is that it views the changes of the past half-century as the latest replay of long-established patterns, and therefore fails to grasp just how radical some of those changes have been.

Rather than a shift from left to right, we have witnessed, according to Rodgers, a transformation from big to small. The intellectuals who shaped America’s understanding of itself fifty years ago—Schlesinger himself, and David Riesman, C. Wright Mills, and the late Daniel Bell—were, for all their faults, familiar with history, preoccupied with power, and appreciative of complexity. They also influenced political rhetoric: Eisenhower’s warning about the military-industrial complex as well as Kennedy’s 1962 Yale address on the end of ideology seemed to come directly from their books. No wonder that the leaders who followed them were so ambitious: whether it involved deploying American military might abroad or sponsoring social reform at home, presidents as politically distinct as Johnson and Nixon were hardly shy about using the power of the state to achieve their ends.

All this changed over the course of subsequent decades, in large part because thinkers stopped thinking big. Economics is exemplary. It was not so much that Keynes lost ground to Hayek—both, after all, were European idea men shaped by the events of their tragic century. It was instead that the micro usurped the macro. The key figure in this regard is a relatively obscure University of Chicago law professor named Ronald Coase, who in 1960 urged judges not to focus on abstract questions of justice but to decide cases based upon overall economic benefit. “As economics emerged from the disciplinary crisis of the 1970s and early 1980s,” Rodgers writes, “its focus was no longer on systemwide stabilization or the interplay of aggregates. Economics was about the complex play of optimizing behavior—a thought experiment that began with individuals and the exchanges they made.” There was no longer such a thing as society, as Margaret Thatcher both informed the British and lectured the Americans, at least as far as those building economic models were concerned.

No president better illustrates the shift to small thinking than Ronald Reagan, and not just because his economic policies so strongly favored the market. Early in his presidency, Reagan seemed attracted by big ideas; his “evil empire” speech in 1983 quoted Whittaker Chambers and could have been written by Arthur Koestler. By his second term, however, darkness at noon had become sunshine 24/7. “Twilight? Twilight?,” Reagan said in 1988. “Not in America … That’s not possible … Here it’s a sunrise every day.” Cold War Reagan had become have-a-nice-day Reagan, self-actualizing Reagan, citing Tom Paine, calming the anxious, avoiding any hint of Cotton Mather-type scolding. When George W. Bush managed to link his call for a war against terror with the reassurance that no sacrifices would be necessary to fight it, he had precedent aplenty in the speeches of a president who saw “no need for overcoming, no manacles to be broken, no trial to be endured, no pause in the face of higher law.”

Left-wing thinkers were attracted to “small is beautiful” as well. Historians focused on the conditions of everyday life, eventually leaving the stuff of high statecraft behind. What the anthropologist Clifford Geertz called “thick description” yielded fascinating insights into matters local and particular, but repudiated any hint of grand theorizing about the universal and the predictable. The influence of Foucault could be felt everywhere, and while he began his career by writing about power, by the time of his death “power had seeped out of sights and structures until it was everywhere.” Identity politics made its own contribution: by calling attention to the myriad of groups that composed society, the practitioners lost sight of the society these groups composed. Microeconomics may have reduced all action to the choices of individuals, but left-wing social theory could not even find individuals in the first place. “It is clearly not the case,” declared the rhetorician and theorist Judith Butler, “that ‘I’ preside over the positions that have constituted me, shuffling through them instrumentally, casting some aside, incorporating others, although some of my activities may take that form. The ‘I’ who would select between them is always already constituted by them.” Clearly.

One did not have to travel to the far reaches of the leftist imagination to find a rejection of the once popular notion of “We, the People” similar to the right’s retreat from national purpose. Although a preference for Burke’s “little platoons of society” found support among conservative thinkers and was expressed in George H. W. Bush’s “thousands of points of light,” liberals also came to appreciate the advantages of civil society. (Our current president was once a community organizer). When it first appeared in 1971, John Rawls’s A Theory of Justice, thick with economic reasoning, seemed thin on liberal substance—but at least Rawls imagined society as a national community and could rightly be interpreted as a defender of the welfare state. By the time of the Carter, Clinton, and Obama presidencies, liberal Democratic presidents, quick to pick up on communitarian themes, were more likely to warn of the dangers of big government than to rely on government to promote even minimal equality. Liberalism was entering its minimalist phase. If, by society, we mean a national community tied together through obligations guaranteed by the state, there was no such thing as society on this end of the political spectrum either.

When it comes to politics, small—I am afraid to say—is ugly, and small is now everywhere around us. On the right, one former advocate of national greatness now champions Sarah Palin, while another explores brain research. On the left, Democratic activists, anxious to defend whatever is left of America’s labor movement, would react in astonishment if informed that Wright Mills, a leftist from another era entirely, denounced the labor leaders of his day as “new men of power.” Fifty years ago, John Kenneth Galbraith was a center-left economist anxious to serve the Kennedy administration; today his ideas seem hopelessly utopian, the stuff of fourth or fifth parties. It is no wonder that stalemate and polarization grip Washington these days. When there is nothing much to argue over, arguments tend to get that much more vicious.

Rodgers is not the first to explore this ground or this theme. The historian David T. Courtwright’s No Right Turn: Conservative Politics in a Liberal America is a marvelously idiosyncratic romp through similar material, and Todd Gitlin’s The Twilight of Common Dreams got there first on the question of the left’s withdrawal from a national project. But Rodgers is unsurpassed in other important matters. His ability to explain complex ideas—the Coase theorem comes to mind—is exemplary. He is unapologetic about treating intellectuals, and even academics, as producers of ideas worth taking seriously. He has the ability, unusual for historians of our day, to engage directly in current debates and to write with the clarity of a future observer of these same events. Intellectual history is never that easy to do. An intellectual history of our own time is even harder to pull off. Rodgers has done it and done it well.

Perhaps, then, this book will have the happy effect of bringing to an end the trends it brings to light. Rodgers writes about our descent into thinking small because he wants us to once again think big—or so I read between his lines. If more thinkers wrote books like this, the country in which I live might once again resemble the one in which I was born. How sweet that would be.

Alan Wolfe is writing a book about political evil.

Tuesday, March 8, 2011

Jane Leavy - The Last Boy

I need to catch up on the books I've read this year so far, but in the meantime I want to say I am currently reading this fabulous new biography of Mickey Mantle. The Mick was the great baseball hero of my time--the 60's. He stood above all others. When Mantle came to bat, there was always a sense of expectation. He might hit the ball a mile or he might strike out. It was exciting either way. Mantle played from 1951 to 1969.

Monday, March 7, 2011

A Culture of Ignorance

Paul Stoller/Professor of Anthropology, West Chester University; Author, The Power of the Between
Posted: March 5, 2011
Politics in a Culture of Ignorance

During the past few weeks, the play of American politics has been particularly disturbing. Consider the willful ignorance of former Arkansas Governor Mike Huckabee, trying to convince his supporters that President Obama is "not one of us." To that end, he suggested that President Obama's worldview was shaped by his childhood in Kenya -- or maybe it was Indonesia -- and by radical movements like the Kenyan Mau-Mau revolt. Huckabee, a potential Republican candidate for president, went on to say that President Obama's father and grandfather molded his "foreign" ideas about how the world works. It doesn't matter that President Obama hardly knew his father or his paternal grandfather, or that the Mau-Mau rebellion took place far from the Obama homestead in Kenya, a country President Obama first visited when he was 26 years old. Governor Huckabee also failed to mention the "inconvenient truths" that President Obama was raised by his mother and his maternal grandparents, who grew up in Kansas, or that President Obama's maternal grandfather fought with Patton in Europe during World War II.

Think about the countless elected officials, Republicans all, who say that "we" are "broke," a rather bombastic overstatement, because of greedy public employees. Due to the "lazy" greed of these middle-income public servants, the argument goes, we need to abolish collective bargaining and eviscerate budgets for education, the arts, the environment and even law enforcement. What else can you do when it is a sin to either raise taxes or scale back corporate tax breaks? What's more, there is no room for negotiation on these matters, which means that there is no space for conceptual nuance, and little or no willingness for a civil exchange of ideas that might result in compromise -- the foundation of the American political system.

Looking at these developments from a more or less rational standpoint, none of it makes much sense. How can any reasonably intelligent person, you might ask yourself, accept the big lie that many conservative Republicans have long touted: that the simple formula of lower taxes and limited government will somehow solve all of the complex economic and social problems in a globally integrated world? And yet that is the pabulum that a whole host of Republican presidential hopefuls offer again and again to their base, and, through media coverage, to the rest of us. If you repeat the big lie often enough, some people -- many people, in fact -- begin to believe it.

Are contemporary American politics being played out in a culture of ignorance? What does it say about contemporary political culture when there is political support for uncompromising public figures who seem more interested in unrealistic ideological purity than governing their polities? How else can you explain the political support and media attention we give to politicians like Sarah Palin or Michele Bachmann or Mike Huckabee? Even though they unflaggingly demonstrate an acute intellectual incompetence as well as wholesale ignorance of American history and world affairs, they still manage to maintain or even increase their legions of followers. Is there no political price to pay for incompetence or ignorance?

It is no easy task to try to explain this descent into a culture of ignorance. Some of the descent may be rooted in our under-funded and unfairly maligned system of public education. As a professor at a public university I have firsthand knowledge of the processes that give rise to a culture of ignorance. Although the intelligence, curiosity and grit of some of my students, many of whom are the first people in their families to attend college, thoroughly inspire me, I am often shocked and disappointed by general student ignorance of culture, geography, history, and politics -- at home and abroad. Even more disturbing is what seems to be a lack of student curiosity about a world that has been rendered more complex through globalization. Many of my students are not interested in learning about foreign societies. They take my introductory cultural anthropology course because it is a requirement. In addition, some of my students seek the most expedient path toward graduation -- one that involves the least amount of work and difficulty for the greatest return. The upshot is that many students leave the university unprepared to compete in the global economy. Many of them have trouble thinking critically. Others find doing any kind of research to be profoundly challenging. Some write essays that border on the incoherent. More troubling still is that this downward spiral toward incompetence, according to the findings of Richard Arum and Josipa Roksa's new book, Academically Adrift: Limited Learning on College Campuses, seems to be widespread among our college and university students.

If this picture reflects the intellectual state of our college students, what can we say about the capacity of the general public to critically evaluate a complex set of information? The only way to reverse this slide into mediocrity, which is reflected in both the intellectual quality of contemporary politics and the distressing climate of our educational institutions, is to make serious investments in education and the public sector in order to give our underpaid and under-appreciated teachers and civil servants the support and respect they deserve. To do otherwise is to risk sinking even deeper into the swamp.

Freedom and Security

Why U.S. Politics Is a Deadlocked Mess
It's because of the rhetorical battle over 'freedom' and 'security.'
by Michael Kazin


Freedom and security—for some 80 years, they have been the most cherished words in American political discourse. Each major party claims it can best shield the citizenry from danger while also protecting its liberties. Whichever side makes the more persuasive case in this perennial contest ends up in power. And one reason national politics is a deadlocked, angry mess right now is that neither side has the rhetorical advantage.

First, a look back. The contest began during the Great Depression. Before that epic slump, “security” was a promise made more frequently by insurance brokers than by politicians. But, under Franklin Roosevelt, liberal Democrats used Social Security as the template for all of their most popular programs. The Wagner Act protected workers who wanted to have unions, and new federal jobs provided a measure of security for millions who would otherwise have been unemployed. The Liberty League, the New Deal’s most prominent foe, claimed all these policies were unconstitutional, socialist-inspired assaults on individual “freedom.” But, knowing the League was bankrolled by DuPont and other anti-union businesses, most Americans dismissed it as a thinly disguised corporate lobby. In contrast, FDR defined “freedom” as every individual’s ability to enjoy the good life, with the aid of a government that looked out for the common good.

World War II and the early cold war extended the liberals’ hegemony. They were able to define security as a military alliance against, first, the fascist right and, then, the communist left—forces that threatened the political freedoms of every American. The minimum wage, health benefits won by strong unions, and steady economic growth all reinforced the notion that security at home was best left in the hands of those who had rescued the nation from Hooverism. The “loss” of China and stalemate in the Korean war briefly allowed conservatives to put liberals on the defensive. But, tellingly, they did so on grounds of insecurity, not freedom, and their moment of triumph was brief.

Then came the 1960s. Under pressure from the grassroots left—movements of blacks, Chicanos, women, and gays—liberals rediscovered their nineteenth-century heritage as tribunes of individual liberty. The quarter-century of economic expansion that followed World War II had given Americans unprecedented freedom in making choices about their lives. “Not With My Life You Don’t,” an anti-draft slogan coined by Students for a Democratic Society, the largest organization on the new, white left, quickly became a sentiment embraced by adherents to an ever-expanding variety of causes. The sum total of their efforts transformed liberalism into an ideology that prized the emancipated self over a secure, if culturally repressed, mass society.

As a result, in the 1970s, when stagflation and a surge in violent crime occurred simultaneously, Democrats had no credible way to explain how they would keep the nation secure. Conservatives quickly seized their opportunity. Alarmed about abortion, feminism, and gay rights, the Christian right redefined security as the preservation of heterosexual marriage with the husband on top. Neocons, ironically, gave it a more traditional meaning when they warned about Soviet gains in both the arms race and the developing world. Meanwhile, the laissez-faire gospel that had failed to sway Americans in the 1930s was revived as a populist call to arms. The contest was between common-sense advocates of the free market and elites who foisted what Ronald Reagan called, in 1981, “excessive government intervention in their lives and in the economy” through “a punitive tax policy that does take ‘from the mouth of labor the bread it has earned.’” Reagan brilliantly amalgamated these different, often contradictory, appeals to security and freedom.

Ever since, the right has been using Reagan’s synthesis, updated as needed, to make liberals seem either out of touch or without principles they are willing to defend. But it has been an incomplete triumph. The end of the cold war removed a major cause of global insecurity; neither Islamist terrorism nor China’s industrial might has provided a substitute as alarming or, at least so far, as durable. At home, anxiety about the economy, both present and future, has helped doom every conservative proposal to convert either Social Security—or Medicare, enacted in 1965 as a set of amendments to the original New Deal program—into a plaything of the free market. Meanwhile, the libertarian ethos benefits gay couples and dope smokers, as well as union-busters.

This conflict over who can lay claim to security and freedom is not and has never been between advocates of “big government” and those who favor a smaller one. Starting in the 1930s, whatever party has held national power has extended federal dollars to the groups and causes it favors, and the budget has only gone up. But, today, our politics is trapped between two competing notions of freedom and security, neither of which seems capable of defeating the other. Liberals demand respect for individual autonomy of speech, identity, and religion, while defending the protections that unions and the federal state give to workers and consumers. Conservatives, meanwhile, see private property as the basis of freedom and view military security as the only necessary kind.

Barack Obama, cautious pragmatist that he is, seeks to muddle the differences. He hails the landmark health care law more for cutting costs than for giving secure coverage to nearly all Americans. And he offers only grudging, pro forma backing to collective bargaining at a time when the very existence of effective unions is in question, and public opinion overwhelmingly supports their rights. Of course, after the election debacle of last November, such stands have a short-run logic. But they avoid the task of articulating a brave new synthesis of security and freedom that can speak to Americans who wonder what Democrats really think the government should do and why.

As Obama and his fellow partisans wonder how to break through in policy debates and the upcoming campaign, they might read or re-read how the architect of their modern party combined the ideals of freedom and security into a persuasive whole. FDR told the nation early in 1941:

In the future days, which we seek to make secure, we look forward to a world founded upon four essential human freedoms.
The first is freedom of speech and expression—everywhere in the world.
The second is freedom of every person to worship God in his own way—everywhere in the world.
The third is freedom from want, which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants—everywhere in the world.
The fourth is freedom from fear, which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbor—anywhere in the world.

The world has, of course, changed in countless ways since FDR’s time as president. But our problems—economic, international, environmental, demographic—are no more daunting than those which Roosevelt faced. To “win the future,” FDR’s partisan descendants still have to explain how security and freedom require each other—and to make that combination sound not just sensible, but inspiring.

Michael Kazin is a professor of history at Georgetown University and co-editor of Dissent. His next book, American Dreamers: How the Left Changed a Nation, will be published in August (Knopf).

Sunday, March 6, 2011

Moonwalking with Einstein

Nonfiction | Sunday, Mar 6, 2011 13:01 ET
"Moonwalking With Einstein": How to remember everything
Can memory actually be taught? Why do dirty images help? Joshua Foer explains how to stop forgetting
By Claire Lambrecht

Think remembering birthdays is difficult? Try memorizing two decks of cards in five minutes or less. That is what Joshua Foer did to win the 2006 U.S. Memory Championship. Such intellectual exploits might not be surprising given Foer's family tree; he is the younger brother of celebrated author Jonathan Safran Foer and New Republic editor Franklin Foer. What is surprising is that he is willing to unmask the illusion and make us privy to the tricks of the trade.

"Moonwalking With Einstein" does just that: It takes the reader on Foer's journey from memory novice to national champion. Foer talks with people from both spectrums of the memory divide -- from Kim Peek, the inspiration for the 1988 movie "Rain Man," to the guy dubbed "The Most Forgetful Man in the World" -- and their conversations offer insight into the relevance of memory in a society increasingly dominated by smart phones, Google and Wikipedia. As Foer delved into the science and research, what he found surprised him. Contrary to popular belief, memory is not a matter of smart or stupid. Instead, it is more like golf, foosball or Ms. Pac-Man: a matter of technique and practice.

Salon spoke with Foer on the phone about technology, memory, and why there might be hope for us addicts yet.

You talk about Gutenberg and how the printing press made memory less relevant. Today, we have Wikipedia and Google and all of these smart phones. So why is memory important?

You could argue that we are nothing more and nothing less than what we remember. Memory is not just this storage vault that we dip into when we need to recall something. It's actually intimately involved in shaping how we move through the world and process the world.

Today when we want to find information, people generally say, "Google it." What has it done to us, as Americans, if we just look everything up on the Internet?

I think that there is an argument to be made that while it is incredibly useful to have all of this information at our fingertips, externalized, we don't have that information, facts, knocking around in our skulls. We need that not only to create new ideas, but to think about things, to make new connections, and make sense of the world. There is great benefit to relying on Google to answer every question, and knowing that you don't have to remember all of this stuff, but I think there is probably a cost also.

The book discusses the relationship between memory and humanity. Can you talk a little bit about that?

One of the really interesting people I had the opportunity to meet in the course of researching this book was a guy named EP -- or at least that is what he is referred to as in the scientific literature. He has, or had -- he unfortunately passed away since I met him -- one of the worst memories in the world. It was fascinating to spend time with somebody who was otherwise a completely functional, normal human being, except that he lacked a short-term memory. It was a window into the extent to which memory makes us who we are. Being without it, he basically lived entirely in the present. He was like a pathological Buddha. He could not ponder the future, but he also couldn't dwell on the past. He was completely and entirely in the present.

What are some of the misconceptions Americans might have about memory? You talk a little bit about educational policies and rote memorization and those sorts of things. What are we doing wrong?

I'm not sure we're doing anything wrong. One of the things I write about is how the idea of memory is almost a bad word in education. For good reason our education system has swung away from emphasizing raw facts to try to create people who are abstract thinkers and creative thinkers, but memory has been denigrated in the process. My opinion would be that the pendulum has swung a little too far in that direction. I'm not an education expert, but my impression is that the pendulum is always swinging on a whole host of things in education.

One misconception is one I had when I started this whole journey. Part of this whole enterprise of trying to train my memory was the idea that memorization was boring, and rote, and the opposite of anything that you might call interesting. But it turns out that the kinds of memory techniques used by "mental athletes" in contests to remember huge amounts of information -- the essence of them -- is about creating: figuring out ways to make otherwise uninteresting information interesting and colorful and meaningful and attention-grabbing. And that's actually kind of fun. I ended up getting into that. That is probably the reason I kept going with this. I discovered that, though you wouldn't think so, memorizing things was actually fun.

You talk about memorizing two decks of cards in five minutes for the Memory Championship, and how weird and even dirty images help an image stick. Could you explain how this works?

The idea with that particular technique is to associate every card with an image of a person doing something to an object, and the lewder and stranger the better. So, for me, when I was training my memory, the king of diamonds was Bill Clinton and the queen of diamonds was Hillary Clinton. I was constantly putting those two people, who I also respect and revere, in very, very strange positions and sometimes quite explicit positions for the sake of remembering those cards.
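
For readers who want to see the shape of that trick on the page, here is a minimal sketch in Python. It is not Foer's actual system: only the Clinton pairings above come from the interview, the objects and the pairing rule (first card supplies the person, second card the object) are invented here for illustration.

# Illustrative sketch of the card-image technique described above.
# Most person/object pairings are invented; a practitioner would memorize
# a full 52-card table in advance.
CARD_IMAGES = {
    "KD": ("Bill Clinton", "a saxophone"),
    "QD": ("Hillary Clinton", "a podium"),
    "AS": ("Albert Einstein", "a chalkboard"),
    "7H": ("Charlie Chaplin", "a bowler hat"),
}

def composite_scenes(cards):
    # Pair consecutive cards: the first supplies the person, the second the
    # object, so two cards collapse into one vivid (and memorably odd) image.
    scenes = []
    for first, second in zip(cards[::2], cards[1::2]):
        person, _ = CARD_IMAGES[first]
        _, obj = CARD_IMAGES[second]
        scenes.append(f"{person} juggling {obj}")
    return scenes

print(composite_scenes(["KD", "AS", "QD", "7H"]))
# -> ['Bill Clinton juggling a chalkboard', 'Hillary Clinton juggling a bowler hat']

The images a competitor actually forms are far stranger than juggling, of course; the point is only that a fixed table plus a simple pairing rule turns an arbitrary card sequence into a handful of scenes to remember.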

You also talk about "Memory Palaces." Could you explain what those are and what you use them for?

The basic idea of the Memory Palace, which dates back to Ancient Greece, is that if one creates an imagined building in the mind's eye, and populates that building with images that you want to remember, those memories are much stickier. It sounds strange. It sounds like it shouldn't work, and that was kind of what I thought when I went into all of this, but it turns out that it is actually an incredibly effective way of remembering stuff. It's actually how Cicero remembered his speeches, how medieval scholars memorized entire texts. It's one of those strange things that tap into some of the innate strengths of our memory. If you, for example, were to go visit some house that you'd never been in before, took a walk around for a few minutes, you would leave that house with a pretty good blueprint in your mind of where each room was, where the refrigerator was, where the bathroom was, maybe what kind of carpeting there was, what color the walls were painted; that's actually a huge amount of information that you walked out of that house with. The idea behind the Memory Palace, this idea that was supposedly, according to legend, discovered by this Greek poet Simonides 2,500 years ago, was that he made spatial information actually meaningful. You would walk out of that building not with a blueprint of furniture but with new knowledge that you'd committed to memory.
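
And here, in the same spirit, is a toy version of the palace itself, again only a hypothetical sketch rather than anything from the book: a fixed walking route through an imagined house, with each item to be remembered attached to the next room along the route. The rooms and the shopping list are made up for illustration.

# A toy memory palace: a fixed walking route through an imagined house.
# The rooms and the items are invented for illustration only.
ROUTE = ["front porch", "hallway", "kitchen", "living room", "bathroom"]

def build_palace(items):
    # Attach each item to the next room on the route, in walking order.
    if len(items) > len(ROUTE):
        raise ValueError("this palace has too few rooms for that many items")
    return list(zip(ROUTE, items))

for room, item in build_palace(["garlic", "cottage cheese", "smoked salmon"]):
    print(f"At the {room}, picture the {item}.")

Recalling the list later is just a matter of walking the same route again in the mind's eye.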

It seems like a lot of people have this conception that having a great memory is something that is innate, that you are born with the ability to memorize a deck of cards. Would you say that your research supports that or contradicts that belief?

This gets to the misconception that I went into this research with. There is undoubtedly some degree of natural variation in people's innate memory abilities, but I don't think it's that great. I know that what these individuals who compete in these contests can do, and what I learned to do myself, is entirely a function of technique and training.

So you would argue that the average guy or gal off the street could perhaps reach your level of success with enough training?

Yes, I'm convinced of it.