Thursday, May 31, 2012

The E.J. Dionne Book

I have finished the book and hope to gather my comments for a post.  I will say that this is a much-needed look at our political situation.

Wednesday, May 30, 2012

Dylan Receives Award



Bob Dylan among 13 Medal of Freedom recipients

Published: Tuesday, May 29, 2012, 8:46 PM
The Associated Press
President Barack Obama presents rock legend Bob Dylan with a Medal of Freedom, Tuesday, May 29, 2012, during a ceremony at the White House in Washington. (AP Photo/Charles Dharapak)
WASHINGTON — Sketching impressive contributions to society in intensely personal terms, President Barack Obama presented the Medal of Freedom to more than a dozen political and cultural greats Tuesday, including rocker Bob Dylan, astronaut John Glenn and novelist Toni Morrison.
In awarding the nation's highest civilian honor to 13 recipients, living and dead, the president took note of the overflow crowd in the East Room and said it was "a testament to how cool this group is. Everybody wanted to check 'em out."
Obama then spoke of his personal connection to a number of this year's recipients, calling them "my heroes individually."
"I know how they impacted my life," the president said. He recalled reading Morrison's "Song of Solomon" in his youth and "not just trying to figure out how to write, but also how to be and how to think."
In college days, Obama said, he listened to Dylan and recalled "my world opening up, because he captured something about this country that was so vital." Dylan's appearance drew the biggest whoops from the crowd, and he dressed for the event — sunglasses, bow tie and black suit embellished with shiny buckles and buttons.
Obama also recalled reading about union pathbreaker Dolores Huerta when he was starting out as a community organizer.
"Everybody on this stage has marked my life in profound ways," he said.
Obama added that Pat Summitt, who led the University of Tennessee women's basketball team to more NCAA Final Four appearances than any other team, had helped pave the way for his two daughters, "who are tall and gifted."
"They're standing up straight and diving after loose balls and feeling confident and strong," he said. "I understand that the impact that these people have had extends beyond me. It will continue for generations to come."
The Medal of Freedom is presented to people who have made meritorious contributions to the national interests of the United States, to world peace or to other significant endeavors.
Other honorees:
—Former Secretary of State Madeleine Albright, the first woman to hold the job.
—John Paul Stevens, former Supreme Court justice.
—Juliette Gordon Low, founder of the Girl Scouts, who died in 1927.
—Shimon Peres, president of Israel, who is to receive his medal at a White House dinner next month.
—John Doar, who handled civil rights cases as assistant attorney general in the 1960s.
—William Foege, former director of the Centers for Disease Control and Prevention, who helped lead the effort to eradicate smallpox.
—Gordon Hirabayashi, who fought the internment of Japanese-Americans during World War II. He died in January.
—Jan Karski, a resistance fighter against the Nazi occupation of Poland during World War II. He died in 2000.
Huerta co-founded the National Farmworkers Association, which later became the United Farm Workers of America. Glenn was the first American to orbit the earth. Dylan's vast catalog of songs includes such rock classics as "Like a Rolling Stone," "Blowin' in the Wind" and "Mr. Tambourine Man."


Tuesday, May 29, 2012

Judged by Your Language

May 29, 2012

Inescapably, You’re Judged By Your Language

From the first time we step into an English class, we’re told that the rules matter, that they must be followed, that we must know when it’s appropriate to use a comma and what it means to employ the subjunctive mood. But do these things really matter? Outside of the classroom, what difference does it make if we write “who” instead of “whom” or say “good” instead of “well”?

It does make a difference, at least sometimes. In order to determine when those times are, the question must be asked: For whom are you writing? Take that last sentence, for example. As Joan Acocella wrote recently in The New Yorker, “Every statement is subjective, partial, full of biases and secret messages.” The above sentence is no exception. Its ostentatious structure and secret message say, “I am one of you.” They also say even sneakier things like “I’m educated, an authority,” and “You can trust me about language usage.” The average New Yorker reader recognizes the effort the sentence exerts to maintain grammatical correctness, and in recognizing this, the reader bonds with the writer. “I” becomes “we.” We share a secret now. We’re a team.

But how different would things be if I walked into the sports bar down the street on a Sunday afternoon and asked, “For whom are we rooting today?” The wording would not be likely to win me many pals at the pub. The most likely response from the collective would be banishment to a far corner, a shake of the head, and an astonished, “What’d ya mean, who’re we rootin’ for?”

Why did it go so wrong? In short, different audience, different dialect. The key to linguistic acceptance is recognition and adaptation. Know thy audience, know thy friends. It’s not a matter of which sentence is “correct”—“for whom are we rooting” versus “who are we rooting for”—so much as which sentence is correct for the given situation.

All of the complex linguistic theories of language acquisition and whether grammar is universally hardwired or learned through practice don’t matter one bit in practical everyday living. If “correct” is only a matter of situation, then what we should really be asking is why we need to be able to use both versions of the sentence. Why should we bother to learn prescriptive English—the grade-school rules—if it isn’t our natural dialect?

Repugnant as it may be, the simple answer is that we need to learn prescriptive English because that’s the way the people in power communicate. As far as daily survival is concerned, it doesn’t matter whether the origins of this linguistic power structure are racist, classist, or élitist, or whether they’re based on the whims of dead white males. This is how the system works right now, today, and in order to best get the attention of those in power, to begin to effect change, we must be able to use their dialect. We must know their rules.
People who say otherwise, who say that in all situations we should speak and write however we’d like, are ignoring the current reality. This group, known as descriptivists, may be fighting for noble ideas, for things like the levelling of élitism and the smoothing of social class, but they are neglecting the real-world costs of those ideas, neglecting the flesh-and-blood humans who are denied a job or education because, as wrong as it is, they are being harshly judged for how they speak and write today.

Furthermore, as David Foster Wallace points out in his essay “Authority and American Usage,” it’s not at all clear that “society’s mode of expression is productive of its attitudes rather than a product of those attitudes.” In other words, Wallace continues, it’s bizarre to believe that “America ceases to be elitist or unfair simply because Americans stop using certain vocabulary that is historically associated with elitism and unfairness.”

This is not even to mention the descriptivists’ dirty little secret. When it comes time for them to write their books and articles and give their speeches about the evil, élitist, racist wrongheadedness of forcing the “rules” on the masses, they always do so in flawless, prescriptive English. Behind the mask of noble ends, something obscenely disingenuous is happening here. How easy it is for a person who is already part of the linguistic élite to tell others who are not that they don’t need to be. Or, as Joan Acocella puts it, the descriptivists will “take the Rolls. You can walk, though.”

These do-as-you-please linguists imagine themselves to be fighting for the common man, but they don’t practice what they preach. Playing the game and being able to deploy the rules has afforded them the luxury of a good education, a steady job, and decent income. It has allowed them to have their noble voices heard in the fight for linguistic equality. But they fight from a good, safe distance. They’re not on the front lines, naked and exposed.

For the individual looking for a higher education or trying to secure a decent job, what seems more humane: Admitting that, ugly, élitist, and unfair as it is, prescriptivism is currently the dialect of power and being able to manipulate that dialect can help you get ahead, or pretending that utopia is at hand, that everyone is a revolutionary, that linguistic anarchy will set you free? The choice to use our natural dialects whenever and wherever we please, to live in a world free of language-based racism and classism, may indeed be a worthy end for which to strive, but it’s also worth remembering that individuals don’t live in the end. They live now.


Read more http://www.newyorker.com/online/blogs/books/2012/05/language-wars-descriptivists.html#ixzz1wIAKZtld

About Income Inequality


Why Edward Conard Is Wrong About Income Inequality

Posted: 05/29/2012 8:15 am
Edward Conard has gotten a lot of press lately for writing a book that praises income inequality. Writing in the New York Times Magazine, Adam Davidson described Conard's argument this way: "If we had a little more of it, then everyone, particularly the 99 percent, would be better off." Conard is a former partner at Mitt Romney's private equity firm, Bain Capital, and a major contributor to Romney's presidential campaign. That gives readers of Unintended Consequences: Why Everything You've Been Told About The Economy Is Wrong the thrill of being privy to opinions Romney may well share but dare not say out loud.
So the biggest surprise, on opening Unintended Consequences, lies in discovering that this book isn't about income inequality at all. It's basically a defense of the investment class, whom Conard is anxious to absolve from all blame for the subprime crash of 2008. There's quite a lot in the book about banks and regulators and taxes, but not very much about the middle class, the logical focus of any serious discussion of the 33-year run-up in income inequality. And while it's true that Conard believes America needs to shovel more cash to the rich so they'll do the rest of us the great favor of investing it, only at the end does he flesh out, in a chapter titled "Redistributing Income," why he thinks redistribution is a lousy idea.
It feels a little unfair for me to write a column explaining what Conard doesn't understand about income inequality in America, because it's a topic he never fully engages -- not even in the "Redistributing Income" chapter, which is mainly an argument against providing government benefits to the poor. On the other hand, income inequality is the subject about which Conard is talking as he travels the country marketing his book, because it's a central issue in the presidential campaign. That makes it hard to ignore the hasty and ill-informed arguments he makes about inequality in Unintended Consequences.
As I explain in my own recent book, The Great Divergence: America's Growing Inequality Crisis And What We Can Do About It (Bloomsbury), from the early 1930s through the late 1970s incomes in the United States either grew more equal or remained relatively stable in their distribution. Then, starting in 1979, incomes grew more unequal. Middle-class incomes stagnated relative to their growth in the postwar era and also relative to productivity (i.e., output per man- or woman-hour worked), which had dwindled during the 1970s but grew starting in the 1980s and took off like a rocket in the aughts. Meanwhile, incomes for the affluent, which had grown at about the same rate during the postwar years as incomes for the middle class, started growing much faster, and incomes for the super-rich started growing much, much faster. (Incomes for the poor actually rose slightly over this 33-year period, but dropped precipitously when the recession hit.)
The Great Divergence is actually two divergences. There's a skills-based divergence between people whose education ended with high school and people who went on to get college (and, increasingly, graduate) degrees. And there's the divergence between the top 1 percent (especially the top 0.1 and 0.01 percent) and everyone else. Policy wonks sometimes argue over which of these divergences is more important, but looking back over 33 years it's clear that both have been extremely important.
The skills-based divergence is the more complex of the two. One factor is a shortage of skilled labor relative to the growing demand, reflected in the fact that during the 1970s America's high school graduation rate stopped climbing even as the computerization of the workplace increased skill demands. Another is the precipitous decline of the labor movement, which has been much greater than in other comparable countries because of anti-labor U.S. government policies. Another is a variety of other things the government did, including the setting of interest rates, raising or not raising the minimum wage, changes in taxes and benefit programs, etc. (Interestingly, tax policy didn't loom nearly so large as you might suspect.) Trade played a growing role, but it wasn't a major factor until the early aughts, when China emerged as a principal trade partner.
The 1 percent versus 99 percent divergence is much easier to explain. Compensation for top executives at nonfinancial firms became unhinged from economic reality, and the under-regulated finance industry ate the economy.
A key question about both kinds of inequality is why they are growing today when they didn't for most of the 20th century. Obviously part of the answer is that the Great Depression and World War II were national crises that disrupted the accumulation of wealth that had occurred during the 1920s. But that doesn't explain why incomes failed to grow more unequal from the late 1940s through the early 1970s, when the postwar economy prospered. Conard's answer is that "World War II destroyed Europe's and Japan's infrastructure. This weakened their ability to compete with the United States, and it took decades for these advanced economies to catch up." That's true, but only up to a point. Europe and Japan actually recovered from the war pretty quickly, thanks in part to the Marshall Plan. Meanwhile, these other countries were experiencing the same trend toward greater income equality observed in the U.S. More equal incomes weren't just America's prize for being king of the hill.
During the Great Divergence, Conard argues, growing immigration put downward pressure on wages. The timing works; immigration law was dramatically liberalized in 1965. But the education level of most of these immigrants was so low that economic studies have failed to demonstrate that expanded immigration affected incomes for any native-born group except high-school dropouts. When it comes to middle-class wages, which is what the Great Divergence is mostly about, immigration has had no notable effect.
Further downward wage pressure, Conard opines, came when "baby boomers and women flooded the workforce." But baby boomers are now well into middle and even old age. Some of them are now retiring. Yet the inequality trend marches on. Women's role is actually pretty murky. Although more women entered the workforce in the 1980s, there had always been a lot of women who worked. The more notable change was that women started getting better jobs, which should have resulted in greater income equality. To be sure, some of these jobs might otherwise have gone to men. But women didn't penetrate male-dominated Rust Belt industries very far, and it was these industries' decline that proved especially devastating to the middle class.
The most baffling and distressing aspect of the Great Divergence was the decoupling of median income from increases in productivity. Why should we throw money at the investment class, as Conard demands, if there's no benefit to the middle class commensurate with the prosperity that results? During the past decade median income has actually declined slightly while productivity has increased briskly. How does that even happen?
Conard says the puzzle is simple to resolve. The U.S. employs a larger proportion of its population than the European countries with which we are often compared, unfavorably, with regard to income distribution. We employ near-retirees, female part-time workers, young Latino immigrants, etc. "Obviously this group has lower productivity than the average U.S. worker," Conard writes. It isn't so obvious to me. In my experience female part-time workers are more productive than full-time workers, not less; most of the ones I know end up doing as much work as their full-time counterparts in half the time (and for half the pay). Near-retirees today are so robust that many people -- especially conservatives -- think we should raise the Social Security retirement age. Granted, unschooled immigrants lack education, and therefore are limited in the economic value they can contribute to the economy. But I wouldn't exactly call them unproductive. They work like demons.
More to the point, the U.S. has always employed near-retirees, female part-time workers, and at least some low-wage immigrants. Why would these "lower productivity" workers drag wages down today when they didn't during the 1960s? A much simpler and more logical explanation for why workers receive less economic benefit from their productivity is that organized labor is on the ropes.
The decline of labor likely also helps explain the rise of the top 1 percent, whose share of the nation's income has doubled during the Great Divergence. Broadly speaking, the 1 percent can be thought of as Capital while the 99 percent can be thought of as Labor. Is the portion of Gross Domestic Product going to labor smaller, relative to capital, than it used to be? Conard insists not, citing some figures from the Federal Reserve Bank of St. Louis. I recognize that there are differing ways to measure labor share versus capital share, but I tend to believe studies (unmentioned in Conard's book) that say labor share has declined. One of them, after all, was produced by the chief investment officer at JP Morgan Chase, which has every reason to pretend otherwise. In a July 2011 newsletter for Morgan clients, Michael Cembalest wrote, "U.S. labor compensation is now at a 50-year low relative to both company sales and U.S. GDP." From 2000 to 2007, Cembalest calculated, pretax profits for the Standard & Poor's 500 increased by 1.3 percent. Reductions in wages and benefits accounted for about 75 percent of that increase.
I could go on, but I won't. Conard is certainly right that a capitalist economy needs some income inequality in order to function. Effort and skill must be rewarded. The question is whether the U.S. needs to have so much more than it used to have, and so much more than other advanced industrialized economies have. Most especially we need to ask why income inequality must accelerate much more rapidly in the U.S. than elsewhere. Conard doesn't want to face these questions head-on, because he has a simpler brief. He wants America to reward even more than it does now the brave souls who put capital at risk. Never mind that at Bain Capital, Conard's former place of employment, capital was often invested with no downside risk at all. If America can't prosper while the top 1 percent doubles its income share, you have to wonder whether the problem really can be that these guys have too little cash to play with. I'm inclined to think it's because they have too much.

Monday, May 28, 2012

E.J. Dionne - Our Divided Political Heart

This is the book I've been waiting for: a presentation of our divided country rooted in American historical perspective.  We didn't get this way overnight.  There are historical roots, and Dionne, a liberal journalist, is just the person to explain.

Saturday, May 26, 2012

Joshua Foer - Moonwalking with Einstein: A SUMMARY


Joshua  Foer – Moonwalking with Einstein

Did I miss it, or does the author not explain the meaning of the title?

The book is a magical mystery tour through the world of memory competition, both US and world.  I had no idea there was such a world.  The author enters and wins the US competition.

I understand the mnemonic concept of the Memory Palace.  I can’t see myself utilizing such a technique.  But then again, I have no desire to memorize the Pelham phone book, a deck of cards in 90 seconds, or the first 10,000 digits of Pi.

One of the refrains in the book is “anybody can do it.”  That is, anybody can master memory techniques to memorize huge mountains of data.  The question is: who wants to?  

The possible link between savants and people using mnemonic techniques is interesting.  The human mind is capable of incredible things, sure, whether because of genetics, brain damage, or learned techniques.

Cognitive psychology and brain research are all the rage these days, and rightly so.  Interesting stuff, to be sure, but I don’t lie awake at night thinking about the results of some memory research at Florida State.

The old idea that memory---the proper retention and ordering of knowledge---is a vital instrument in the invention of new ideas is a provocative concept.  P. 12

Yes, Baby Boomers like me are always afraid of “losing their marbles.”  P. 12

The book’s history of memory and the importance of memory before printing is perhaps the best part of the book for this history major.

I feel sorry for S, the man who remembered too much.  Chapter Two

I can relate to the idea that it is what we forget rather than remember that makes us human.  P. 37

The “neuroplasticity” of the brain is much commented on these days.  P. 38

The business about chicken sexing is fascinating!  The way such expertise comes from experience is doubly fascinating.  Chapter Three

The magical number 7.  P. 56

How chess experts can look at the board quickly and make the correct move.  It’s all in context, not in analysis.    P. 65

Who we are and what we do is largely a function of memory.  P. 66

Chapter Four.  The most forgetful man in the world.  What a sad story.  How horrible it would be to be like this.

Once upon a time every literate person had memory training---considered a centerpiece of classical education on a par with logic, grammar, and rhetoric.  Students were taught not just what to remember but how to remember.  Those were the days!   P. 95

The difference between natural and artificial memory.  P. 96

The loci method.  P. 97

Fast reading vs. ruminating on what you’ve read.  P. 110

The phrase “in the first place” is a residue from the art of memory.  P. 123

I enjoy the discussion about Homer.  Did Homer write Homer?  P. 125-129

Chapter Seven.  How we’ve externalized our memories.  Socrates disdained writing.  He was wrong.  If someone had not written down what he said, we today would not know what he said.  P. 139

How punctuation was invented.  How fascinating!  P. 140-142

Reading is an act of remembering?  P. 143

Fascinating discussion of the addition of TOCs and indexes. Less reason to remember things.   P. 144-145

The author points out that we read quickly these days.  We value quantity over quality.  At least I do.  P. 147-148

We read and we read and we read and we forget what we read.  Yes, I am guilty.  P. 148

I would hate to record everything in my life like Gordon Bell.  How awful!  P. 156

There is a distinct “me” driving the bus.  P. 161

Chapter Eight:  The OK Plateau.  I understand it, and I understand that the way to improve is thru focused practice: deliberate practice, which is very hard.

I take it that students memorize less in school than they used to.  The author traces this back to the influence of Rousseau.  P. 192

Then we go thru William James and John Dewey.  One day I’ll read up on Dewey, that calcified old liberal.  P. 192-194

Memory and creativity are two sides of the same coin.  You need memory to be creative.  P. 202

“Memory is how we transmit virtues and values, and partake of a shared culture.”  P. 208

The people we admire have facts, anecdotes, quotes at their disposal.  It enlivens their conversation.  You can’t do this without remembering things.  P. 209

The intertwining of senses of the little rainmen sounds unappetizing to me.  P. 216

I do not want to be a savant.  No little rainman in me!

No memory competition for me.  I just want to remember names, faces, and things I’m supposed to remember in everyday life.

The Immorality of Conservatives


Op-Ed Columnist

Egos and Immorality

In the wake of a devastating financial crisis, President Obama has enacted some modest and obviously needed regulation; he has proposed closing a few outrageous tax loopholes; and he has suggested that Mitt Romney’s history of buying and selling companies, often firing workers and gutting their pensions along the way, doesn’t make him the right man to run America’s economy.

Paul Krugman 

Wall Street has responded — predictably, I suppose — by whining and throwing temper tantrums. And it has, in a way, been funny to see how childish and thin-skinned the Masters of the Universe turn out to be. Remember when Stephen Schwarzman of the Blackstone Group compared a proposal to limit his tax breaks to Hitler’s invasion of Poland? Remember when Jamie Dimon of JPMorgan Chase characterized any discussion of income inequality as an attack on the very notion of success? 

But here’s the thing: If Wall Streeters are spoiled brats, they are spoiled brats with immense power and wealth at their disposal. And what they’re trying to do with that power and wealth right now is buy themselves not just policies that serve their interests, but immunity from criticism. 

Actually, before I get to that, let me take a moment to debunk a fairy tale that we’ve been hearing a lot from Wall Street and its reliable defenders — a tale in which the incredible damage runaway finance inflicted on the U.S. economy gets flushed down the memory hole, and financiers instead become the heroes who saved America. 

Once upon a time, this fairy tale tells us, America was a land of lazy managers and slacker workers. Productivity languished, and American industry was fading away in the face of foreign competition.
 
 
You can see why Wall Street likes this story. But none of it — except the bit about the Gekkos and the Romneys making lots of money — is true. 

For the alleged productivity surge never actually happened. In fact, overall business productivity in America grew faster in the postwar generation, an era in which banks were tightly regulated and private equity barely existed, than it has since our political system decided that greed was good. 

What about international competition? We now think of America as a nation doomed to perpetual trade deficits, but it was not always thus. From the 1950s through the 1970s, we generally had more or less balanced trade, exporting about as much as we imported. The big trade deficits only started in the Reagan years, that is, during the era of runaway finance. 

And what about that trickle-down? It never took place. There have been significant productivity gains these past three decades, although not on the scale that Wall Street’s self-serving legend would have you believe. However, only a small part of those gains got passed on to American workers. 

So, no, financial wheeling and dealing did not do wonders for the American economy, and there are real questions about why, exactly, the wheeler-dealers have made so much money while generating such dubious results. 

Those are, however, questions that the wheeler-dealers don’t want asked — and not, I think, just because they want to defend their tax breaks and other privileges. It’s also an ego thing. Vast wealth isn’t enough; they want deference, too, and they’re doing their best to buy it. It has been amazing to read about erstwhile Democrats on Wall Street going all in for Mitt Romney, not because they believe that he has good policy ideas, but because they’re taking President Obama’s very mild criticism of financial excesses as a personal insult. 

And it has been especially sad to see some Democratic politicians with ties to Wall Street, like Newark’s mayor, Cory Booker, dutifully rise to the defense of their friends’ surprisingly fragile egos. 

As I said at the beginning, in a way Wall Street’s self-centered, self-absorbed behavior has been kind of funny. But while this behavior may be funny, it is also deeply immoral. 

Think about where we are right now, in the fifth year of a slump brought on by irresponsible bankers. The bankers themselves have been bailed out, but the rest of the nation continues to suffer terribly, with long-term unemployment still at levels not seen since the Great Depression, with a whole cohort of young Americans graduating into an abysmal job market. 

And in the midst of this national nightmare, all too many members of the economic elite seem mainly concerned with the way the president apparently hurt their feelings. That isn’t funny. It’s shameful.

As the Right-Wingers Try to Control Our Discourse Orwell-Style





Paul Krugman - New York Times Blog

May 26, 2012, 12:11 pm

The New Political Correctness

Remember the furor over liberal political correctness? Yes, some of it was over the top — but it was mainly silly, not something that actually warped our national discussion.

Today, however, the big threat to our discourse is right-wing political correctness, which — unlike the liberal version — has lots of power and money behind it. And the goal is very much the kind of thing Orwell tried to convey with his notion of Newspeak: to make it impossible to talk, and possibly even think, about ideas that challenge the established order.

Thus, even talking about “the wealthy” brings angry denunciations; we’re supposed to call them “job creators”. Even talking about inequality is “class warfare”.

And then there’s the teaching of history. Eric Rauchway has a great post about attacks on the history curriculum, in which even talking about “immigration and ethnicity” or “environmental history” becomes part of a left-wing conspiracy. As he says, he’ll name his new course “US History: The Awesomeness of Awesome Americans.” That, after all, seems to be the only safe kind of thing to say.

Actually, this reminds me of an essay I read a long time ago about Soviet science fiction. The author — if anyone remembers where this came from — noted that most science fiction is about one of two thoughts: “if only”, or “if this goes on”. Both were subversive, from the Soviet point of view: the first implied that things could be better, the second that there was something wrong with the way things are. So stories had to be written about “if only this goes on”, extolling the wonders of being wonderful Soviets.

And now that’s happening in America.

Friday, May 25, 2012

Downtown Auburn 1942

CHECK OUT this fantastic vintage Tiger Theater photo from 1942! The billboard advertisement for The Major and the Minor (starring Ginger Rogers and Ray Milland) got us wondering about YOUR favorite movie memories at Auburn. Share, share, share with us!

Thursday, May 24, 2012

The Decline of Print

As if we needed further evidence of the decline of print in our culture, we get it today when we hear that our local paper, the Birmingham News, will publish only three times a week starting in the fall.  Otherwise, the news will be digital.  I am used to accessing my news online, but the decline of newspapers continues to depress me.

Wednesday, May 23, 2012

About the Multiverse

In Newsweek Magazine

Brian Greene: Welcome to the Multiverse

The latest developments in cosmology point toward the possibility that our universe is merely one of billions

“What really interests me is whether God had any choice in creating the world.” 
That’s how Albert Einstein, in his characteristically poetic way, asked whether our universe is the only possible universe.
 
The reference to God is easily misread, as Einstein’s question wasn’t theological. Instead, Einstein wanted to know whether the laws of physics necessarily yield a unique universe—ours—filled with galaxies, stars, and planets. Or instead, like each year’s assortment of new cars on the dealer’s lot, could the laws allow for universes with a wide range of different features? And if so, is the majestic reality we’ve come to know—through powerful telescopes and mammoth particle colliders—the product of some random process, a cosmic roll of the dice that selected our features from a menu of possibilities? Or is there a deeper explanation for why things are the way they are?
In Einstein’s day, the possibility that our universe could have turned out differently was a mind-bender that physicists might have bandied about long after the day’s more serious research was done. But recently, the question has shifted from the outskirts of physics to the mainstream. And rather than merely imagining that our universe might have had different properties, proponents of three independent developments now suggest that there are other universes, separate from ours, most made from different kinds of particles and governed by different forces, populating an astoundingly vast cosmos.
The multiverse, as this vast cosmos is called, is one of the most polarizing concepts to have emerged from physics in decades, inspiring heated arguments between those who propose that it is the next phase in our understanding of reality, and those who claim that it is utter nonsense, a travesty born of theoreticians letting their imaginations run wild.
So which is it? And why should we care? Grasping the answer requires that we first come to grips with the big bang.
In Search of the Bang
In 1915, Einstein published the most important of all his works, the general theory of relativity, which was the culmination of a 10-year search to understand the force of gravity. The theory was a marvel of mathematical beauty, providing equations that could explain everything from the motion of planets to the trajectory of starlight with stupendous accuracy.
Within a few short years, additional mathematical analyses concluded that space itself is expanding, dragging each galaxy away from every other. Though Einstein at first strongly resisted this startling implication of his own theory, observations of deep space made by the great American astronomer Edwin Hubble in 1929 confirmed it. And before long, scientists reasoned that if space is now expanding, then at ever earlier times the universe must have been ever smaller. At some moment in the distant past, everything we now see—the ingredients responsible for every planet, every star, every galaxy, even space itself—must have been compressed to an infinitesimal speck that then swelled outward, evolving into the universe as we know it.
Direct evidence for the multiverse might come from a collision between our expanding universe and its neighbors. (Mehau Kulyk / Photo Researchers, Inc.)
The big-bang theory was born. During the decades that followed, the theory would receive overwhelming observational support. Yet scientists were aware that the big-bang theory suffered from a significant shortcoming. Of all things, it leaves out the bang. Einstein’s equations do a wonderful job of describing how the universe evolved from a split second after the bang, but the equations break down (similar to the error message returned by a calculator when you try to divide 1 by 0) when applied to the extreme environment of the universe’s earliest moment. The big bang thus provides no insight into what might have powered the bang itself.
Fuel for the Fire
In the 1980s, physicist Alan Guth offered an enhanced version of the big-bang theory, called inflationary cosmology, which promised to fill this critical gap. The centerpiece of the proposal is a hypothetical cosmic fuel that, if concentrated in a tiny region, would drive a brief but stupendous outward rush of space—a bang, and a big one at that. In fact, mathematical calculations showed that the burst would have been so intense that tiny jitters from the quantum realm would have been stretched enormously and smeared clear across space. Like overextended spandex showing the pattern of its weave, this would yield a precise pattern of minuscule temperature variations, slightly hotter spots and slightly colder spots dotting the night sky. In the early 1990s, NASA’s Cosmic Background Explorer (COBE) satellite first detected these temperature variations, garnering Nobel Prizes for team leaders John Mather and George Smoot.
Remarkably, mathematical analysis also revealed—and here’s where the multiverse enters—that as space expands the cosmic fuel replenishes itself, and so efficiently that it is virtually impossible to use it all up. Which means that the big bang would likely not be a unique event. Instead, the fuel would not only power the bang giving rise to our expanding realm, but it would power countless other bangs, too, each yielding its own separate, expanding universe. Our universe would then be a single expanding bubble inhabiting a grand cosmic bubble bath of universes—a multiverse.

It’s a striking prospect. If correct, it would provide the capstone on a long series of cosmic reappraisals. We once thought our planet was the center of it all, only to realize that we’re one of many planets orbiting the sun, only then to learn that the sun, parked in a suburb of the Milky Way, is one of hundreds of billions of stars in our galaxy, only then to find that the Milky Way is one of hundreds of billions of galaxies inhabiting the universe. Now, inflationary cosmology was suggesting that our universe, filled with those billions of galaxies, stars, and planets, might merely be one of many occupying a vast multiverse.
Yet, when the multiverse was proposed back in the 1980s by pioneers Andrei Linde and Alexander Vilenkin, the community of physicists shrugged. The other universes, even if they existed, would stand outside what we can observe—we only have access to this universe. Apparently, then, they wouldn’t affect us and we wouldn’t affect them. So what role could other universes possibly play in science, a discipline devoted to explaining what we do see?
And that’s where things stood for about a decade, until an astounding astronomical observation suggested an answer.
The Mystery of Dark Energy
Although the discovery that space is expanding was revolutionary, there was one aspect of the expansion that most everyone took for granted. Just as the pull of earth’s gravity slows the ascent of a ball tossed upward, the gravitational pull of each galaxy on every other must be slowing the expansion of space.
In the 1990s, two teams of astronomers set out to measure the rate of this cosmic slowdown. Through years of painstaking observations of distant galaxies, the teams collected data on how the expansion rate of space has changed over time. And when they completed the analysis, they all nearly fell out of their chairs. Both teams found that, far from slowing down, the expansion of space went into overdrive about 7 billion years ago and has been speeding up ever since. That’s like gently tossing a ball upward, having it slow down initially, but then rocket upward ever more quickly.
The result sent scientists across the globe scurrying to explain the cosmic speedup. What force could be driving every galaxy to rush away from every other faster and faster? The most promising answer comes to us from an old idea of Einstein’s. We’re all used to gravity being a force that does only one thing: pull objects toward each other. But in Einstein’s general theory of relativity, gravity can also do something else: it can push things apart. How? Well, the gravity exerted by familiar objects like the moon, the earth, and the sun is surely attractive. But Einstein’s equations show that if space contains something else—not clumps of matter but an invisible energy, sort of like an invisible mist that’s uniformly spread through space—then the gravity exerted by the energy mist would be repulsive.
Which is just what we need to explain the observations. The repulsive gravity of an invisible energy mist filling space—we now call it dark energy—would push every galaxy away from every other, driving the expansion to speed up, not slow down.
But there’s a hitch. When the astronomers deduced how much dark energy would have to permeate every nook and cranny of space to account for the observed cosmic speedup, they found a number that no one has been able to explain. Not even close. Expressed in the relevant units, the dark-energy density is extraordinarily small:
.0000000000000000000000000000000000
00000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000138.
At the same time, attempts by researchers to calculate the amount of dark energy from the laws of physics have yielded results that are typically a hundred orders of magnitude larger, perhaps the greatest mismatch between observation and theory in the history of science.
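For readers who like to see the mismatch as one quick calculation, here is a rough back-of-the-envelope sketch in LaTeX; the Planck-unit figures and the symbols are my own illustrative assumptions, not values the article states beyond the long decimal printed above.

\[
\rho_{\Lambda}^{\text{observed}} \approx 1.38 \times 10^{-123}
\quad \text{(Planck units; the long decimal above)}
\]
\[
\rho_{\Lambda}^{\text{naive theory}} \sim 1
\quad \Longrightarrow \quad
\frac{\rho_{\Lambda}^{\text{naive theory}}}{\rho_{\Lambda}^{\text{observed}}} \sim 10^{123}
\]

Less extreme energy cutoffs shrink the naive estimate, so quoted figures for the gap run from roughly sixty to more than a hundred orders of magnitude; any of them illustrates the mismatch Greene describes.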
And that has led to some soul searching.
Physicists have long believed that with sufficient hard work, experimentation, and industrious calculation, no detail about the fundamental makeup of reality would lie beyond scientific explanation. Certainly, many details still lack an explanation, such as the masses of particles like electrons and quarks. Yet the expectation has been that in due course physicists will find explanations.
The spectacular failure of attempts to explain the amount of dark energy has raised questions about this confidence, driving some physicists to pursue a radically different explanatory approach, one that suggests (once again) the possible existence of a multiverse.
The Multiverse Solution
The new approach has scientific roots that stretch back to the early 1600s, when the great astronomer Johannes Kepler was obsessed with understanding a different number: the 93 million miles between the sun and the earth. Kepler struggled for years to explain this distance but never succeeded, and from our modern perch the reason is clear. We now know that there are a great many planets, orbiting their host stars at a great many different distances, demonstrating the fallacy in Kepler’s quest—the laws of physics do not single out any particular distances as special. Instead, what distinguishes the earth-sun distance is simply that it yields conditions hospitable to life: were we much closer or farther from the sun, the extreme temperatures would prevent our form of life from taking hold. So, although Kepler was on a wild goose chase in seeking a fundamental explanation for the earth-sun distance, there is an explanation for why we humans find ourselves at such a distance.
In seeking an explanation for the value of dark energy, maybe we’ve been making a mistake analogous to Kepler’s. Our best cosmological theory—the inflationary theory—naturally gives rise to other universes. Perhaps, then, just as there are many planets orbiting stars at many different distances, maybe there are many universes containing many different amounts of dark energy. If so, asking the laws of physics to explain one particular value of dark energy would be just as misguided as trying to explain one particular planetary distance. Instead, the right question to ask would be: why do we humans find ourselves in a universe with the particular amount of dark energy we’ve measured, instead of any of the other possibilities?
This is a question we can address. In universes with larger amounts of dark energy, whenever matter tries to clump into galaxies, the repulsive push of the dark energy is so strong that the clump gets blown apart, thwarting galactic formation. In universes whose dark-energy value is much smaller, the repulsive push changes to an attractive pull, causing those universes to collapse back on themselves so quickly that again galaxies wouldn’t form. And without galaxies, there are no stars, no planets, and so in those universes there’s no chance for our form of life to exist.
And so we find ourselves in this universe and not another for much the same reason we find ourselves on earth and not on Neptune—we find ourselves where conditions are ripe for our form of life. Even without being able to observe the other universes, their existence would thus play a scientific role: the multiverse offers a solution to the mystery of dark energy, rendering the quantity we observe understandable.
Or so that’s what multiverse proponents contend.
Many others find this explanation unsatisfying, silly, even offensive, asserting that science is meant to give definitive, precise, and quantitative explanations, not “just so” stories.
But the essential counterpoint is that if the feature you’re trying to explain can and does take on a wide variety of different mathematical values across the landscape of reality, then seeking a definitive explanation for one value is wrongheaded. Just as it makes no sense to ask for a definitive prediction of the distance at which planets orbit their host stars, since there are many possible distances, if we’re part of a multiverse it would make no sense to ask for a definitive prediction of the value of dark energy, since there would be many possible values.
The multiverse doesn’t change the scientific method or lower explanatory standards. But it does ask us to reevaluate whether we’ve mistakenly posed the wrong questions.
Hanging by Strings
Of course, for this approach to succeed, we must be sure that among the multiverse’s many different dark-energy values is the very one we’ve measured. And that’s where a third line of investigation, string theory, comes to the fore.
String theory is an attempt to realize Einstein’s dream of a “unified theory” capable of stitching all matter and forces into a single mathematical tapestry. Initially formulated in the late 1960s, the theory envisions that deep inside every fundamental particle is a tiny, vibrating, stringlike filament of energy. And much as the different vibrational patterns of a violin string yield different musical notes, so the different vibrational patterns of these tiny strings would yield different kinds of particles.
Pioneers of the subject anticipated that string theory’s rigid mathematical architecture would soon yield a single set of definitive, testable predictions. But as the years passed, detailed analysis of the theory’s equations revealed numerous solutions, each representing a different possible universe. And numerous means numerous. Today, the tally of possible universes stands at the almost incomprehensible 10^500, a number so large it defies analogy.
For some string-theory advocates, this stupendous failure to yield a unique universe—ours—was a devastating blow. But to those advancing the multiverse, string theory’s enormous diversity of possible universes has proven vital.
Just as it takes a well-stocked shoe store to guarantee you’ll find your size, only a well-stocked multiverse can guarantee that our universe, with its peculiar amount of dark energy, will be represented. On its own, inflationary cosmology falls short of the mark. While its never-ending series of big bangs would yield an immense collection of universes, many would have similar features, like a shoe store with stacks and stacks of sizes 5 and 13, but nothing in the size you seek.
By combining inflationary cosmology and string theory, however, the stock room of universes overflows: in the hands of inflation, string theory’s enormously diverse collection of possible universes become actual universes, brought to life by one big bang after another. Our universe is then virtually guaranteed to be among them. And because of the special features necessary for our form of life, that’s the universe we inhabit.
High-Risk Science
Years ago, Carl Sagan emphasized that extraordinary claims require extraordinary evidence. So, can we gather evidence supporting a proposal that invokes other universes?
Because the other universes would lie beyond what we can observe, it might seem that the answer is no, placing the multiverse outside the bounds of science. But that’s too quick. Evidence for a proposal can be amassed even when some of its important features are inaccessible.
Take black holes. Scientists routinely use general relativity to speak with confidence about what happens inside a black hole, even though nothing, not even light, can escape a black hole’s interior, rendering such regions unobservable. The justification is that once a theory makes a slew of accurate predictions about things we can observe, as general relativity has, we justifiably gain confidence in the theory’s predictions about things we can’t observe.
Similarly, if a proposal that invokes the multiverse gains our confidence by making correct predictions about things we do have access to, things in our universe, then our confidence in its prediction of other universes, realms we don’t have access to, would rightly grow too.
As of today, we are far from crossing this threshold. Inflationary cosmology makes accurate predictions about microwave background radiation; dark energy accurately explains accelerated expansion. But string theory remains hypothetical, largely because its primary distinguishing features become manifest at scales billions of times smaller than we can probe even with today’s most powerful accelerators.
More direct evidence for the multiverse might come from potential collisions between our expanding universe and its neighbors. Such a cosmic fender bender would generate an additional pattern of temperature variations in the microwave background radiation that sophisticated telescopes might one day detect. Many consider this the most promising possibility for finding evidence in support of the multiverse.
That there are ways, long shots to be sure, to test the multiverse proposal reflects its origin in careful mathematical analysis. Nevertheless, because the proposal is unquestionably tentative, we must approach it with healthy skepticism and invoke its explanatory framework judiciously.
Imagine that when the apple fell on Newton’s head, he wasn’t inspired to develop the law of gravity, but instead reasoned that some apples fall down, others fall up, and we observe the downward variety simply because the upward ones have long since departed for outer space. The example is facetious but the point serious: used indiscriminately, the multiverse can be a cop-out that diverts scientists from seeking deeper explanations. On the other hand, failure to consider the multiverse can place scientists on a Keplerian treadmill in which they furiously chase answers to unanswerable questions.
Which is all just to say that the multiverse falls squarely in the domain of high-risk science. There are numerous developments that could weaken the motivation for considering it, from scientists finally calculating the correct dark-energy value, to confirming a version of inflationary cosmology that yields only a single universe, to discovering that string theory no longer supports a cornucopia of possible universes. And so on.
But as with all rational bets, high risk comes with the potential for high reward. During the past five centuries we’ve used the power of observation and mathematical calculation to shatter misconceptions. From a quaint, small, earth-centered universe to one filled with billions of galaxies, the journey has been both thrilling and humbling. We’ve been compelled to relinquish sacred belief in our own centrality, but with such cosmic demotion we’ve demonstrated the capacity of the human intellect to reach far beyond the confines of ordinary experience to reveal extraordinary truth. The multiverse proposal might be wrong. But it might also be the next step in this journey, unveiling a breathtaking panorama of universes populating a vast cosmic landscape. For some scientists, including me, that possibility makes the risk well worth taking.

Tuesday, May 22, 2012

More Conservative Fantasyland



The Conservative Fantasy History of Civil Rights

Ronald Reagan, Strom Thurmond, and other civil rights heroes. Not pictured: black people.
 
The civil rights movement, once a controversial left-wing fringe, has grown deeply embedded into the fabric of our national story. This is a salutary development, but a problematic one for conservatives, who are the direct political descendants of (and, in the case of some of the older members of the movement, the exact same people as) the strident opponents of the civil rights movement. It has thus become necessary for conservatives to craft an alternative story, one that absolves their own ideology of any guilt. The right has dutifully set itself to its task, circulating its convoluted version of history, honing it to the point where it can be repeated by any defensive College Republican in his dorm room. Kevin Williamson’s cover story in National Review is the latest version of what is rapidly congealing into conservatism’s revisionist dogma.
The mainstream, and correct, history of the politics of civil rights is as follows. Southern white supremacy operated out of the Democratic Party beginning in the nineteenth century, but the party began attracting northern liberals, including African-Americans, into an ideologically cumbersome coalition. Over time the liberals prevailed, forcing the Democratic Party to support civil rights, and driving conservative (and especially southern) whites out, where they realigned with the Republican Party.
Williamson crafts a tale in which the Republican Party is and always has been the greatest friend the civil rights cause ever had. The Republican takeover of the white South had absolutely nothing to do with civil rights, the revisionist case proclaims, except insofar as white Southerners supported Republicans because they were more pro-civil rights.
One factoid undergirding this bizarre interpretation is that the partisan realignment obviously took a long time to complete — Southerners still frequently voted Democratic into the seventies and eighties. This proves, according to Williamson, that a backlash against civil rights could not have driven southern whites out of the Democratic Party. “They say things move slower in the South — but not that slow,” he insists.
His story completely ignores the explicit revolt by conservative Southerners against the northern liberal civil rights wing, beginning with Strom Thurmond, who formed a third-party campaign in 1948 in protest against Harry Truman’s support for civil rights. Thurmond received 49 percent of the vote in Louisiana, 72 percent in South Carolina, 80 percent in Alabama, and 87 percent in Mississippi. He later, of course, switched to the Republican Party.
Thurmond’s candidacy is instructive. Democratic voting was deeply acculturated among southern whites as a result of the Civil War. When southern whites began to shake loose of it, they began at the presidential level, in protest against the civil rights leanings of the national wing. It took decades for the transformation to filter down, first to Congressional-level representation (Thurmond, whom Williamson mentions only in his capacity as a loyal Democrat, finally switched to the GOP in 1964), and ultimately to local-level government. The most fervently white supremacist portions of the South were also the slowest to shed their Confederate-rooted one-party traditions. None of this slowness actually proves Williamson’s contention that the decline of the Democratic Party in the South was unrelated to race.
Williamson concedes, with inadvertently hilarious understatement, that the party “went through a long dry spell on civil-rights progress” — that would be the century that passed between Reconstruction and President Eisenhower’s minimalist response to massive resistance in 1957. But after this wee dry spell, the party resumed and maintained its natural place as civil rights champion. To the extent that Republicans replaced Democrats in the South, Williamson sees their support for civil rights as the cause. (“Republicans did begin to win some southern House seats, and in many cases segregationist Democrats were thrown out by southern voters in favor of civil-rights Republicans.”) As his one data point, Williamson cites the victory of George Bush in Texas over a Democrat who opposed the 1964 Civil Rights Act. He correctly cites Bush’s previous record of moderation on civil rights but neglects to mention that Bush also opposed the 1964 Civil Rights Act.
Williamson does feel obliged to mention Barry Goldwater’s opposition to the 1964 Civil Rights Act, but defends it as a “principled” opposition to the “extension of federal power.” At the same time, he savages southern Democrats for their opposition to the 14th and 15th Amendments, Reconstruction, anti-lynching laws, and so on. It does not seem to occur to him that many of these opponents also presented their case in exactly the same pro-states' rights, anti-federal power terms that Goldwater employed. Williamson is willing to concede that opponents of civil rights laws have philosophical principles behind them, but only if they are Republican. (Perhaps this is the process by which figures like Thurmond and Jesse Helms were cleansed of their racism and became mere ideological opponents of federal intrusion.)
To the extent that the spirit of the all-white, pro-states' rights, rigidly “Constitutionalist” southern Democrats exists at all today, Williamson locates it not in the nearly all-white, pro-states' rights, rigidly “Constitutionalist” southern Republicans, but rather in the current Democratic Party. This is possibly the most mind-boggling claim in Williamson’s essay:
Democrats who argue that the best policies for black Americans are those that are soft on crime and generous with welfare are engaged in much the same sort of cynical racial calculation President Johnson was practicing when he informed skeptical southern governors that his plan for the Great Society was “to have them niggers voting Democratic for the next two hundred years.” Johnson’s crude racism is, happily, largely a relic of the past, but his strategy endures. 
The strategy of crude Democratic racism endures! That this strategy has sucked in more than 90 percent of the black electorate, and is currently being executed at the highest level by Barack Obama (who — at this point, it may be necessary to inform Williamson — is black) suggests a mind-blowing level of false consciousness at work among the African-American community.
Williamson does stumble onto one interesting vein of history, but completely misses its import. In the course of dismissing Goldwater’s 1964 opposition to the Civil Rights Act, he notes that the Republican Party declined to fully follow his lead. The party platform, he notes, called for “full implementation and faithful execution of the Civil Rights Act of 1964.” He does not mention that this language came after party conservatives rejected amendments with stronger language endorsing “enforcement” of the civil rights law and describing the protection of the right to vote as a “constitutional responsibility.” (A bit of this story can be found in Ben Wallace-Wells’s fantastic piece on George Romney in the current print issue, and more in Geoffrey Kabaservice’s “Rule and Ruin.”)
It is true that most Republicans in 1964 held vastly more liberal positions on civil rights than Goldwater. This strikes Williamson as proof of the idiosyncratic and isolated quality of Goldwater’s civil rights stance. What it actually shows is that conservatives had not yet gained control of the Republican Party.
But conservative Republicans — those represented politically by Goldwater, and intellectually by William F. Buckley and National Review — did oppose the civil rights movement. Buckley wrote frankly about his endorsement of white supremacy: “the White community in the South is entitled to take such measures as are necessary to prevail, politically and culturally, in areas in which it does not predominate numerically.” More often conservatives argued on grounds of states’ rights, or freedom of property, or that civil rights leaders were annoying hypocrites, or that they had undermined respect for the law.
Rick Perlstein surveyed the consistent hostility of contemporary conservatives to the civil rights movement. Ronald Reagan, like many conservatives, attributed urban riots to the breakdown in respect for authority instigated by the civil rights movement’s embrace of civil disobedience (a “great tragedy that began when we began compromising with law and order, and people started choosing which laws they'd break,” thundered Reagan). Buckley sneered at the double standard of liberal Democrats — in 1965, he complained, Vice-President Hubert Humphrey attended the funeral of a white woman shot by the Klan for riding in a car with a black man, but did not attend the funeral of a white cop shot by a black man. The right seethed with indignation at white northern liberals who decried the fate of their black allies while ignoring the assaults mounted by blacks against whites.
And of course this sentiment — exactly this sentiment — right now constitutes the major way in which conservatives talk about race. McKay Coppins has a fine story about how conservative media has been reporting since 2009 on an imagined race war, a state of affairs in which blacks routinely assault whites, which is allegedly being covered up by authorities in the government and media. “In Obama's America, the white kids now get beat up with the black kids cheering,” said Rush Limbaugh in 2009.
We should not equate this particular line of hysteria with Buckley-esque defenses of white supremacy, or even with Goldwater-esque concern for states’ rights. The differences obviously outweigh the similarities. Conservatives are not attacking measures to stop lynching or defending formal legal segregation. The racial paranoia of a Rush Limbaugh or an Andrew Breitbart – Williamson defends both – is far less violent and far less dangerous than the white racial paranoia of previous generations. That undeniable progress would be a more tenable ground on which Williamson could mount his defense of conservatism’s record on race. He ought to just try arguing that, while conservatives were wrong to perceive themselves as victims of overweening government and racial double standards before the civil rights movement triumphed, they are right to do so now.

They need to try something different, anyway.

The pseudo-historical attempt to attach conservatism to the civil rights movement is just silly. Here's another idea: Why not get behind the next civil rights idea (gay marriage) now? It would save future generations of conservative apparatchiks from writing tendentious essays insisting the Republican Party was always for it.

Joshua Foer - Moonwalking with Einstein (2)

I continue reading the book between innings at the Auburn-Florida game.  These memory competitors are like idiot savants.  There is no way I would ever want to get into that kind of competition.  These people have too much time on their hands.  What freaks.

Sunday, May 20, 2012

Joshua Foer - Moonwalking with Einstein

This is my kind of book: a popular treatment of the subject of memory, written by a journalist who cites the research and recounts his own experience. As a perpetual undergraduate with broad intellectual interests, I find it right up my alley.

Saturday, May 19, 2012

The Cause



Loving Liberals

‘The Cause,’ by Eric Alterman and Kevin Mattson

The trouble with liberals, Robert Kennedy complained in 1964, was that they were “in love with death” — they romanticized failure, finding greater nobility in losing the whole loaf than in winning half of it. In the years since then, liberals have not only lost a lot of loaves but have acquired a mess of other troubles, among them the difficulty of getting anyone to admit to being a liberal. To wear the label today seems an act of defiance, much as members of the gay rights community have appropriated, from their antagonists, the epithet “queer.” Liberalism — for decades (centuries, even) the prevailing philosophy in American political life — has become the creed that dare not speak its name, except late at night on MSNBC.

THE CAUSE

The Fight for American Liberalism From Franklin Roosevelt to Barack Obama
By Eric Alterman and Kevin Mattson
561 pp. Viking. $32.95.

Enter Eric Alterman, defiant to the last. In 2008, this columnist and media critic published a handbook called “Why We’re Liberals,” a crisply written and emphatically argued retort to the Coulters, Hannitys and others for whom liberalism is a strain of fascism, totalitarianism, socialism and overmothering (why choose?). Alterman’s new book, “The Cause,” written with an assist from the historian Kevin Mattson, is something of a companion volume: a history of liberalism from Franklin Roosevelt to the present. (Mattson’s role is a bit ambiguous; in the book’s acknowledgments, Alterman credits him with providing “raw material.”) 
 
Much of this unfolds, in “The Cause,” by inference, or as interstitial material between character sketches. This is less a book about liberalism than it is a book about liberals — stretch limousines full of them, fleet after fleet. Liberalism, Alterman suggests, is a movement of “many different faces,” and his book, at times, appears intent on showing them all: faces of intellectuals, faces of politicians, faces of protesters and filmmakers, philosophers and diplomats. 

There is an indiscriminate quality to Alterman’s attentions, which too often seem to reflect his personal passions rather than a careful weighing of a figure’s historical significance. Thus Oliver Stone gets just as much ink as Walter Reuther, a towering figure in the history of organized labor; Bruce Springsteen, about whom Alterman has written a previous book, receives more airtime than Hubert Humphrey and Thurgood Marshall combined. (Bob Dylan, meanwhile, merits only passing mentions.) Alterman’s choices can be interesting and even brave; one has to admire his willingness to include intellectuals like Lionel Trilling and Richard Rorty in a work of popular history. But in such a crowded field, their relative influence — and anyone else’s — becomes impossible to assess. 

The net effect is that of a Pointillist painting, though when you step back from the canvas and squint a little, the dots fail to cohere into a discernible image. As “The Cause” smash-cuts from Henry Wallace to Richard Hofstadter and from Gloria Steinem to Gary Hart, Alterman pauses all too infrequently to reflect on the “cause” — or causes, or ideals — that connects them. This, to be fair, is a challenge, one compounded by liberal schisms and by the nebulousness of much liberal thought; Trilling, as Alterman notes, described liberalism as “a large tendency rather than a concise body of doctrine.” Liberals, quite unlike leftist radicals or conservative ideologues, tend to reject dogma and theory in favor of “bold, persistent experimentation,” as Roosevelt called it, or, put another way, pragmatism grounded in enduring, yet evolving, values. It is hard to dissect a gestalt. 

Still, that is the historian’s role, and other books — most notably, in recent years, by Alan Brinkley and Paul Starr — have brought sharpness to the picture that “The Cause” renders blurry. Despite its author’s best intentions, “The Cause” makes it harder, not easier, to understand how liberals ever mustered the intellectual clarity or collective resolve not only to govern but to achieve what they manifestly did during their long reign at the vital center of our national life — or even, in a more qualified way, during the two decades since Bill Clinton promised to “put people first.”

As “The Cause” proceeds toward the present day, Alterman reveals a revanchist streak. Urging liberals to “recapture” Roosevelt’s “militant and optimistic spirit,” he casts a cold eye on virtually every effort, over the past 30 years, to do just that. The intimation of “The Cause” — of both its title and its tone — is that there really is a true faith against which subsequent vintages of liberalism must be judged (and found wanting). “Neoliberals” like Gary Hart are dismissed as callow and cold; “New Democrats” of the late 1980s are overly in thrall to their corporate donors; and Michael Dukakis, poor Michael Dukakis, is not merely a loser but “no liberal at all — just a sign of the desperate times into which American liberalism had fallen in its apparently endless quest for solid political ground.” As for Clinton, Jimmy Carter and Barack Obama, the Democrats who have been elected president since Johnson, “The Cause” flays all three for yielding to “political pressures” and becoming “far more conservative” as president than as presidential candidates.

Each of these points is arguable in its own right. But taken together, they reflect a contempt for compromise. Without proposing an alternative path, Alterman leaves liberals in a familiar dead end. This, regrettably, is the sort of peremptory judgment that holds liberalism back (just as the conservative equivalent, with its fixation on Reagan-era doctrines and its incantation of old pieties, binds the Republican Party in a kind of intellectual aspic). 

“The work goes on, the cause endures,” said Robert Kennedy’s brother Edward — one of the heroes of this book — in his stirring speech at the 1980 Democratic National Convention. But if it really is to endure, then the means of advancing it will surely have to evolve, taking full account of unpleasant realities: the scale of the debt; the depth of public suspicion not just of government but of most institutions; courts that have grown hostile to claims of civil rights and assertions of governmental power; and the tenuousness of our commitment to the common good. The work, indeed, goes on. 


Jeff Shesol, the author of “Supreme Power: Franklin Roosevelt vs. the Supreme Court,” was a speechwriter for President Bill Clinton.