This little book by an Auburn alum presents an informal history of Auburn.
On this first day of a new football season, I remember James Owens, our first African-American player.
Saturday, August 31, 2013
Thursday, August 29, 2013
Martin Amis says...
"The trouble with life ... is its amorphousness, its ridiculous fluidity. Look at it: thinly plotted, largely themeless, sentimental and ineluctably trite. The dialogue is poor, or at least violently uneven. The twists are either predictable or sensationalist. And it's always the same beginning; and the same ending."
Wednesday, August 28, 2013
The Meat Loaf
The only bad thing about today was the meat loaf for lunch. I took a chance and it didn't work out. Potato salad and meat loaf are a lot alike. If it's good, it's good. If it's bad, oh my, is there anything worse than bad meat loaf and potato salad? The best part of the lunch was the Diet Coke. That says it all.
Sunday, August 25, 2013
Blue Jasmine
I was pleasantly surprised by this movie: this is one of Woody Allen's best! It is really, really good. Great story, great location, great acting. Cate Blanchett is fantastic as Jasmine. I was only vaguely aware of this actress before now. She steals the show. She should receive an Academy Award nomination. I would vote for her! My only complaint is there isn't enough of San Francisco in the film. I would have liked more footage from the city by the bay. No matter. It's a great movie!
More Salinger Books Coming?
I read where there is a new J.D. Salinger biography coming soon and that the authors claim that there is additional Salinger fiction that will be published---novels, short stories, whatever. Okay. That's good, but I am hardly overly excited. Salinger is a good writer, rightly lauded, but I also add that he is overrated. The fact that he didn't publish anything for the last 50 years of his life and didn't appear in public made him a true recluse. If he had published what's in the vault while he was living he probably wouldn't be as popular as he remains today.
Saturday, August 24, 2013
Elmore Leonard
Elmore Leonard: A Man of Few, Yet Perfect, Words
By JANET MASLIN
Published: August 21, 2013
When “Freaky Deaky” came out in 1988, Elmore Leonard’s writing credo hadn’t quite kicked in yet. Though he would later deliver 10 great rules for writing with streamlined tough-guy elegance, the dedication for “Freaky Deaky” thanked his wife for giving him “a certain look when I write too many words.”
But even in 1988, not yet at his most terse, Mr. Leonard was garnering praise so high it defied belief. “Who else gets reviews like these?!” asked the back cover of his next book, “Killshot” (1989). Who, indeed. Among the mash notes cited were “No one writes better”; “It’s impossible not to love Elmore Leonard”; “Leonard is a national literary treasure”; “The most interesting author of crime fiction that we have ever had”; and “When a new Leonard book comes out, it’s like Christmas morning.”
Amazingly, that praise was fair. And it couldn’t even inspire resentment among other writers. Stephen King grudgingly approached his first Leonard book, suspicious of such a critics’ darling. But he picked up “Glitz” (1985), got hooked, and came up with the best no-nonsense description of the Leonard effect: “This is the kind of book that if you get up to see if there are any chocolate chip cookies left, you take it with you so you won’t miss anything.”
Mr. Leonard, who died on Tuesday, will forever be admired for the sheer irresistibility of the stories he told. But his legacy is much larger. He was the most influential, widely imitated crime writer of his era, and his career was a long one: more than 60 years.
After he had worked in advertising long enough to learn to appreciate brevity and catchiness, he began writing pulp westerns. They weren’t that different from the crime books that would come later. The talk was tight and crisp, the action even more so, though Mr. Leonard also kept readers slightly off balance.
“You come to see me. How do you know I’m here?” the title character is asked in “Valdez Is Coming” (1970).
“You or somebody else,” Valdez replies. “It doesn’t matter.”
Mr. Leonard’s books did most of their work through dialogue, some of it hard-boiled, some delectably funny. Either way, the syntax was contagious, to the point where Mr. Leonard’s writing voice echoes every time another crime writer drops a subject or pronoun, links unrelated clauses with just a comma.
Martin Amis called attention to Mr. Leonard’s much-copied use of the present participle: “Warren Ganz, living up in Manalapan” was his way of saying “Warren Ganz lived up in Manalapan.” Just as distinctive were his capsule descriptions, like this one from “Djibouti,” about Somali pirates: “They on the sauce gettin millions for their ransom notes.”
“Djibouti” was published in 2010, when Mr. Leonard was 85. He showed no signs of slowing down. Even the title had that Leonard snap — just pronounce it — and belonged in a league with “Maximum Bob,” “Get Shorty,” “Pronto,” “Tishomingo Blues,” “LaBrava” and a slew of other unforgettables. His characters, as ever, were prone to wildly unrealistic assessments of their own talents. “Djibouti” paired a gutsy, good-looking filmmaker (modeled on Kathryn Bigelow) with a 72-year-old sailor who thinks she can barely resist him. “Xavier LeBo believed was he 10 years younger, they’d be letting good times roll all over this boat,” the book explained. Or said. It was one of Mr. Leonard’s firm beliefs that “said” was the only verb that should be used with quotations.
Some of the best working crime writers, like George Pelecanos, Carl Hiaasen and Lee Child, have found ways to claim and use Mr. Leonard’s brand of conversational economy. Mr. Pelecanos shares the Leonard ear for street talk; Mr. Hiaasen the wild plotting skills and keen eye for low life; Mr. Child the premium placed on not wasting words. But there are lesser imitators who fail to realize that sounding Leonard-like isn’t enough. His flair is hard to borrow, because so much of it depends on what he did not write, not what he did. As with a Japanese line drawing, the bare space is as meaningful as the marks that have been made. There was great elegance to his elision.
Many of his books became movie-bait, but for most of his career, the novels came first. A comedy of errors like “Get Shorty” so easily lent itself to screen treatment that an adaptation was inevitable. But no matter how well a Leonard character like that book’s Chili Palmer is played (John Travolta fully inhabited him), the film versions couldn’t match the leanness of the books. There are good movies adapted from Leonard novels (“Out of Sight,” “52 Pick-Up,” “3:10 to Yuma,” filmed twice), but they don’t beat their sources.
Mr. Leonard’s endless resilience is one more kind of inspiration he leaves behind. He kept writing, and writing sharply, at an age when many authors are conspicuously past their prime. His prime never ended. And he never ceased to write with the verve of a young and vital mind.
Nor did his sense of humor ever leave him. In “Road Dogs” (2009) he tossed in this exchange between a priest and a gay gangster:
“Up to this time you’ve been chaste?”
“You mean, Father, by dudes? If I like a guy he don’t have to chase me.”
Friday, August 23, 2013
SCOTUS: Wrong on Voting Rights
The New York Review of Books
by Justice John Paul Stevens
Bending Toward Justice: The Voting Rights Act and the Transformation of American Democracy
by Gary May
Basic Books, 314 pp., $28.99
Demonstrators from the Student Nonviolent Coordinating Committee about to be arrested by police for urging blacks to register to vote, Selma, Alabama, October 1963
In Bending Toward Justice, Professor Gary May describes a number of the conflicts between white supremacists in Alabama and nonviolent civil rights workers that led to the enactment of the Voting Rights Act of 1965—often just called the VRA. The book also describes political developments that influenced President Lyndon Johnson to support the act in 1965, and later events that supported the congressional reenactments of the VRA signed by President Richard Nixon in 1970, by President Gerald Ford in 1975, by President Ronald Reagan in 1982, and by President George W. Bush in 2006.
May’s eminently readable book is particularly timely because the Supreme Court, on June 25, 2013, issued its decision in Shelby County v. Holder, invalidating the portion of the 2006 enactment that retained the formula used in the 1965 act to determine which states and political subdivisions must obtain the approval of the Department of Justice, or the US District Court in the District of Columbia, before changes in their election laws may become effective. That formula imposed a “preclearance” requirement on states that had maintained a “test or device” as a prerequisite to voting on November 1, 1964, and had less than a 50 percent voter registration or turnout in the 1964 presidential election. Alabama, where Shelby County is located, is one of those states. Over the dissent of Alabama-born Justice Hugo Black, the Court had upheld the preclearance provision shortly after the VRA was enacted, in the 1966 case of South Carolina v. Katzenbach.
May’s book contains a wealth of information about the events that led to the enactment of the 1965 statute—and about the dedication and heroism of little-known participants in the events that came to national attention in 1964 and 1965. It includes both favorable and unfavorable information about well-known figures like Martin Luther King Jr. and J. Edgar Hoover, and about some of the methods used by whites to prevent blacks from voting and from registering to vote.
In his prologue the author makes it clear that in the 1960s in the Deep South not only were African-Americans prohibited from voting, but it was dangerous for them to attempt to do so. He describes the contrast between the oppression during the 1960s and the conditions a century earlier during the period that later became known as “Radical Reconstruction.” During the decade after the Civil War, when the South was divided into military districts occupied by federal troops, southern blacks enthusiastically embraced their newly acquired political freedom:
As many as two thousand served as state legislators, city councilmen, tax assessors, justices of the peace, jurors, sheriffs, and US marshals; fourteen black politicians entered the House of Representatives; and two became US Senators.
Although he does not identify the withdrawal of federal troops in 1876 as the principal cause of the change, May notes that by 1877 “southern white Democrats had overthrown every new state government and established state constitutions that stripped black citizens of their political rights.” Terrorist groups “like the Ku Klux Klan and the Knights of the White Camellia destroyed black schools and churches and murdered at will.”
State election laws that were enacted after 1877 were disastrous for black citizens. Whereas 130,000 blacks had been registered to vote in Louisiana in 1896, only 1,342 were registered to vote in 1904. In Alabama only 2 percent of eligible black adults were registered, and they risked serious reprisals if they attempted to exercise their right to vote. Black disenfranchisement, like segregation, was nearly complete throughout the South for well over sixty years. It was enforced not only by discriminatory laws, but also by official and unofficial uses of violence.
Writing for the five-man majority in Shelby County, the recently decided Supreme Court case challenging the VRA, Chief Justice John Roberts noted that “times have changed” since 1965. The tests and devices that blocked African-American access to the ballot in 1965 have been forbidden nationwide for over forty-eight years; the levels of registration and voting by African-Americans in southern states are now comparable to, or greater than, those of whites.
Moreover, the two southern cities, Philadelphia, Mississippi and Selma, Alabama, where the most publicized misconduct by white police officials occurred in 1964 and 1965, now have African-American mayors. In view of the changes that have occurred in the South, the majority concluded that the current enforcement of the preclearance requirement against the few states identified in the statute violates an unwritten rule requiring Congress to treat all of the states as equal sovereigns.
The Court’s heavy reliance on the importance of a “fundamental principle of equal sovereignty among the States,” while supported by language in an earlier opinion by Chief Justice Roberts, ignored the fact that Article I, Section 2 of the Constitution created a serious inequality among the states. That clause counted “three fifths” of a state’s slaves for the purpose of measuring the size of its congressional delegation and its representation in the Electoral College. That provision was offensive because it treated African-Americans as though each of them was equal to only three fifths of a white person, but it was even more offensive because it increased the power of the southern states by counting three fifths of their slaves even though those slaves were not allowed to vote. The northern states would have been politically better off if the slave population had been simply omitted from the number used to measure the voting power of the slave states.
The fact that this “slave bonus” created a basic inequality between the slave states and the free states has often been overlooked, as has its far-reaching impact. In 1800, for example, that bonus determined the outcome of the presidential election since it then gave the southern states an extra nine or ten votes in the Electoral College, and Thomas Jefferson prevailed over John Adams by only eight electoral votes. Because of the slave bonus, Adams served only one term as president.
The slave bonus unfairly enhanced the power of the southern states in Congress throughout the period prior to the Civil War. It was after the war that Section 2 of the Fourteenth Amendment, passed in 1868, put an end to the slave bonus. When the Fifteenth Amendment was ratified in 1870 during the Grant administration, the size of the southern states’ congressional delegations was governed by the number of citizens eligible to vote. Since that number included blacks as well as whites, during Reconstruction those states were no longer overrepresented in either Congress or the Electoral College.
After Reconstruction ended, however, the terrorist tactics of the Ku Klux Klan and other groups devoted to the cause of white supremacy effectively prevented any significant voting at all by African-Americans, thus replacing a pre-war three-fifths bonus with a post-Reconstruction bonus of 100 percent of the nonvoting African-Americans. Thus, for almost a century—until the VRA was enacted during President Johnson’s administration—the southern states’ representation in Congress was significantly larger than it should have been.
Both the underrepresentation of blacks and the overrepresentation of white supremacists in the South during that period contradict the notion that the “fundamental principle of equal sovereignty among the States” is a part of our unwritten Constitution. As Justice Ginsburg pointed out in her largely unanswered dissent in the Shelby County case, the Court in its opinion upholding the original 1965 Voting Rights Act
held, in no uncertain terms, that the principle [of equal sovereignty] “applies only to the terms upon which States are admitted to the Union, and not to the remedies for local evils which have subsequently appeared.”
Except for his reference to the fact that the first century of congressional enforcement of the Fifteenth Amendment’s guarantee of the right to vote “can only be regarded as a failure,” Chief Justice Roberts’s opinion gives the reader the impression that the Voting Rights Act was Congress’s response to a specific problem that developed in the 1890s. Parroting Chief Justice Earl Warren’s opinion in South Carolina v. Katzenbach, Chief Justice Roberts wrote:
In the 1890s, Alabama, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Virginia began to enact literacy tests for voter registration and to employ other methods designed to prevent African-Americans from voting.
There is no reference in the opinion to anything that happened before 1890. By selecting two examples—Philadelphia, Mississippi and Selma, Alabama, where black mayors now preside—to illustrate the magnitude of the change that has taken place since 1965, however, Roberts ironically emphasizes the fact that the “tests or devices” that were used in the statute’s coverage formula were not the principal means by which white supremacists prevented blacks from voting.
The contrast between Roberts’s recent opinion and Justice Abe Fortas’s opinion in United States v. Price (1966), the case arising out of the Mississippi incident, is striking. While the Chief Justice’s opinion notes that “three men were murdered while working in the area to register African-American voters,” Justice Fortas explained that the murders occurred after the three men had been taken into custody and police officers had taken them to a rendezvous with fifteen conspirators to “punish” them. In discussing the statutory issues presented by the case, Justice Fortas noted that the
purpose and scope of the 1866 and 1870 enactments must be viewed against the events and passions of the time. The Civil War had ended in April 1865. Relations between Negroes and whites were increasingly turbulent. Congress had taken control of the entire governmental process in former Confederate States…. For a few years “radical” Republicans dominated the governments of the Southern States and Negroes played a substantial political role. But countermeasures were swift and violent. The Ku Klux Klan was organized by southern whites in 1866 and a similar organization appeared with the romantic title of the Knights of the White Camelia. In 1868 a wave of murders and assaults was launched including assassinations designed to keep Negroes from the polls.
Nothing that happened before the 1890s is even mentioned in Roberts’s opinion for the Court in the Shelby County case.
On Keith Urban
Keith Urban is in town tonight. Darn it, I will have to miss it. My feet need soaking and I would miss Big Brother. Maybe next time.
Ernie Moss: Geez Fred, you could have taped BB on the VCR and taken your Epsom salt to the concert, so long as you carried a plastic pan. Monya could have carried water in her purse.
Moyna O'Riley Hudson: @Ernie ~ my name is MOYNA ... I believe you've mixed me up with Monya Havekost.
Freddy Hudson: Ernie, I don't think he knows how to work the VCR. He stopped learning technology in 1973.
Fred Hudson: More like 1963.
Tuesday, August 20, 2013
Elmore Leonard
We're gonna miss Elmore Leonard. Sure we are. He was 87 but it seems like he left us pronto. Time for a glass of rum punch. Crime stories about lowlifes make the world go round. My only hobby is reading Elmore Leonard and now he's gone. How can I go on?
Monday, August 19, 2013
Fighting the Core: Republican Dumbing Down of America
War on the Core
By BILL KELLER
Published: August 18, 2013
I respect, really I do, the efforts by political scientists and pundits to make sense of the current Republican Party. There is intellectual virtue in the search for historical antecedents and philosophical underpinnings.
I understand the urge to take what looks to a layman like nothing more than a mean spirit or a mess of contradictions and brand it. (The New Libertarianism! Burkean Revivalists!) But more and more, I think Gov. Bobby Jindal, Louisiana’s Republican rising star, had it right when he said his party was in danger of becoming simply “the stupid party.”
A case in point is the burgeoning movement to kill what is arguably the most serious educational reform of our lifetime. I’m talking about the Common Core, a project by a consortium of states to raise public school standards nationwide.
The Common Core, a grade-by-grade outline of what children should know to be ready for college and careers, made its debut in 2010, endorsed by 45 states. It is to be followed in the 2014-15 school year by new standardized tests that seek to measure more than the ability to cram facts or master test-taking tricks. (Some states, including New York, introduced early versions of the tougher tests this year.)
This is an ambitious undertaking, and there is plenty of room for debate about precisely how these standards are translated into classrooms. But the Common Core was created with a broad, nonpartisan consensus of educators, convinced that after decades of embarrassing decline in K-12 education, the country had to come together on a way to hold our public schools accountable. Come together it did — for a while.
The backlash began with a few of the usual right-wing suspects. Glenn Beck warned that under “this insidious menace to our children and to our families” students would be “indoctrinated with extreme leftist ideology.”
(Beck also appears to believe that the plan calls for children to be fitted with bio-wristbands and little cameras so they can be monitored at all times for corporate exploitation.)
Beck’s soul mate Michelle Malkin warned that the Common Core was “about top-down control engineered through government-administered tests and left-wing textbook monopolies.” Before long, FreedomWorks — the love child of Koch brothers cash and Tea Party passion — and the American Principles Project, a religious-right lobby, had joined the cause. Opponents have mobilized Tea Partyers to barnstorm in state capitals and boiled this complex issue down to an obvious slogan, “ObamaCore!”
There are Common Core critics on the left as well, who argue that the accountability movement makes teachers scapegoats for problems caused mainly by poverty. As one educator put it, less than half in jest, “The problem with national testing is that the conservatives hate national and the liberals hate testing.” Discomfort with the Core may grow when states discover, as New York did this month, that the tougher tests make their schools look bad. But overwhelmingly the animus against the standards comes from the right.
Some of this was inevitable. Local control of public schools, including the sacred right to keep them impoverished and ineffectual, is a fundamental tenet of the conservative canon. In an earlier day, more thoughtful Republicans — people who had actually read the Common Core standards and understood that the notion of a federal usurpation was a boogeyman — would have held the high ground against the noisy fringe.
Such conservatives still exist. William Bennett, President Reagan’s secretary of education and now a stalwart of right-wing radio, has defended the Common Core. So has Mike Huckabee, the former Arkansas governor who is a favorite of religious conservatives. Several Republican governors (including Jindal, though he seems to be wobbling) have stood by the Common Core. Conservative-leaning think tanks like the Manhattan Institute and the Fordham Institute have published sober, sensible arguments for the standards.
But today’s Republican Party lives in terror of its so-called base, the very loud, often paranoid, if-that-Kenyan-socialist-in-the-White-House-is-for-it-I’m-against-it crowd. In April the Republican National Committee surrendered to the fringe and urged states to renounce Common Core. The presidential aspirant Marco Rubio, trying to appease conservatives angry at his moderate stance on immigration, last month abandoned his support for the standards. And state by red state, the effort to disavow or defund is under way. Indiana has put the Common Core on hold. Michigan’s legislature cut off money for implementing the standards and is now contemplating pulling out altogether. Last month, Georgia withdrew from a 22-state consortium, one of two groups designing tests pegged to the new standards, ostensibly because of the costs. (The new tests are expected to cost about $29 per student; grading them is more labor-intensive because in addition to multiple-choice questions they include written essays and show-your-work math problems that will be graded by actual humans. “You’re talking about 30 bucks a kid, in an education system that now spends upwards of $9,000 or $10,000 per student per year,” said Michael Petrilli of the Fordham Institute.)
The Common Core is imperiled in Oklahoma, Utah, Alabama and Pennsylvania. All of the retreat, you will notice, has been in Republican-controlled states.
“The experts in education have been wrong before and have forced all kinds of bad ideas on local schools,” Petrilli concedes. “So I have some sympathy for people who say, Uh-oh, here we go again. But I think in this case the standards happen to be very good.”
“Even conservatives, evangelicals,” he said, “when they look at the standards, they tend to come away impressed.”
So let’s take a look at this fiendish federal plot to brainwash our children.
First, it is not federal. President Obama has used Race to the Top money to encourage states to embrace higher standards, but the Common Core was written under the auspices of the National Governors Association and the Council of Chief State School Officers, an effort that began in 2007, before Obama was elected. Some advocates of Common Core have actually implored Obama and his education secretary, Arne Duncan, just to stop talking about it, because their endorsement feeds the myth that this is a federal takeover.
Second, there is no national curriculum. The standards, which you can read here, describe a reasonable progression of learning from grade to grade, but leave it to state and local school officials to get there. The Common Core is not an attempt to pack kids’ heads with an officially sanctioned list of facts, but to assure that they are able to read a complicated text and understand it, to recognize a problem and know how to solve it.
So, to pick an example at random, the Common Core says a third grader should be able to “describe characters in a story (e.g., their traits, motivations or feelings) and explain how their actions contribute to the sequence of events.” By eighth grade the student should be able to “analyze how particular lines of dialogue or incidents in a story or drama propel the action, reveal aspects of a character or provoke a decision.”
The Common Core does not dictate what stories these kids will be reading or what textbooks schools should use and does not prescribe reading lists, except for a few obvious essentials, including America’s founding documents and a bit of Shakespeare.
Third, the Common Core is not some new and untried pedagogical experiment. Much of it leans on traditional methods that have proved themselves over time. Kids are taught phonics in the early grades. They learn times tables and memorize the formulas for areas and volumes.
The standards encourage more use of informational texts and literary nonfiction to build background knowledge and vocabulary that will be useful in the real world. But the Common Core does not stint on literature. By the end of high school, nonfiction would account for 70 percent of the total reading material in all subjects. That still leaves a lot of room for the classics.
The Core does call for schools across the states to deliver their lessons in the same sequence. Does it really matter if children in Alabama and New Jersey start algebra in the same grade? It matters a lot to a kid who moves from Alabama to New Jersey. According to the National Center for Education Statistics, about 13 percent of children under 18 move each year, and the numbers are much higher for low-income, military and immigrant families.
Many of them lose their place in the educational order and never recover.
There is, in fact, an important national discussion to be had as the Common Core takes effect and schools begin reckoning with the results of tougher tests. What’s the right cutoff score for a passing grade? Do schools get credit for progress, even if they are performing below grade level? Should there be an opt-out provision for schools that are more experimental or that already have high college placement rates? How do the test results figure in evaluating individual teachers?
E. D. Hirsch, an advocate of the Common Core whose Core Knowledge Foundation distributes a widely used curriculum, warned in an interview that if the standards were not carefully implemented, schools could still end up emphasizing “mindless test prep” over substance.
“The Tea Party’s worried about the federal government,” he told me. “What they should be worried about is the education school professors and the so-called experts.”
But — as with that other demonic federal plot, Obamacare — the Republicans aren’t interested in making reform work. They just want it dead.
“Conservatives used to be in favor of holding students to high standards and an academic curriculum based on great works of Western civilization and the American republic,” two education scholars, Kathleen Porter-Magee and Sol Stern, wrote in National Review Online. “Aren’t they still?”
Good question.
The Republican Con Game (2)
One Reform, Indivisible
By PAUL KRUGMAN
Published: August 18, 2013
Recent political reporting suggests that Republican leaders are in a state of high anxiety, trapped between an angry base that still views Obamacare as the moral equivalent of slavery and the reality that health reform is the law of the land and is going to happen.
But those leaders don’t deserve any sympathy. For one thing, that irrational base is a Frankenstein monster of their own creation. Beyond that, everything I’ve seen indicates that members of the Republican elite still don’t get the basics of health reform — and that this lack of understanding is in the process of turning into a major political liability.
On the unstoppability of Obamacare: We have this system in which Congress passes laws, the president signs them, and then they go into effect. The Affordable Care Act went through this process, and there is no legitimate way for Republicans to stop it.
Is there an illegitimate way? Well, the G.O.P. can try blackmail, either by threatening to shut down the government or, an even more extreme tactic, threatening not to raise the debt limit, which would force the United States government into default and risk financial chaos. And Republicans did somewhat successfully blackmail President Obama back in 2011.
However, that was then. They faced a president on the ropes after a stinging defeat in the midterm election, not a president triumphantly re-elected. Furthermore, even in 2011 Mr. Obama wouldn’t give ground on the essentials of health care reform, the signature achievement of his presidency. There’s no way he would undermine the reform at this late date.
Republican leaders seem to get this, even if the base doesn’t. What they don’t seem to get, however, is the integral nature of the reform. So let me help out by explaining, one more time, why Obamacare looks the way it does.
Start with the goal that almost everyone at least pretends to support: giving Americans with pre-existing medical conditions access to health insurance. Governments can, if they choose, require that insurance companies issue policies without regard to an individual’s medical history, “community rating,” and some states, including New York, have done just that. But we know what happens next: many healthy people don’t buy insurance, leaving a relatively bad risk pool, leading to high premiums that drive out even more healthy people.
To avoid this downward spiral, you need to induce healthy Americans to buy in; hence, the individual mandate, with a penalty for those who don’t purchase insurance. Finally, since buying insurance could be a hardship for lower-income Americans, you need subsidies to make insurance affordable for all.
So there you have it: health reform is a three-legged stool resting on community rating, individual mandates and subsidies. It requires all three legs.
But wait — hasn’t the administration delayed the employer mandate, which requires that large firms provide insurance to their employees? Yes, it has, and Republicans are trying to make it sound as if the employer mandate and the individual mandate are comparable. Some of them even seem to think that they can bully Mr. Obama into delaying the individual mandate too. But the individual mandate is an essential piece of the reform, which can’t and won’t be bargained away, while the employer mandate is a fairly minor add-on that arguably shouldn’t have been in the law to begin with.
I guess that after all the years of vilification it was predictable that Republican leaders would still fail to understand the principles behind health reform and that this would hamper their ability to craft an effective political response as the reform’s implementation draws near. But their rudest shock is yet to come. You see, this thing isn’t going to be the often-predicted “train wreck.” On the contrary, it’s going to work.
Oh, there will be problems, especially in states where Republican governors and legislators are doing all they can to sabotage the implementation. But the basic thrust of Obamacare is, as I’ve just explained, coherent and even fairly simple. Moreover, all the early indications are that the law will, in fact, give millions of Americans who currently lack access to health insurance the coverage they need, while giving millions more a big break in their health care costs. And because so many people will see clear benefits, health reform will prove irreversible.
This achievement will represent a huge defeat for the conservative agenda of weakening the safety net. And Republicans who deluded their supporters into believing that none of this would happen will probably pay a large personal price. But as I said, they have nobody but themselves to blame.
Sunday, August 18, 2013
Conspiracies Right AND Left
Sunday, Aug 18, 2013 06:00 PM CDT
A nation of truthers
A new history of political paranoia argues that it's a fundamental part of our national character
By Laura Miller
“It was a paranoid time,” writes Jesse Walker after recounting an elaborate, half-forgotten conspiracy theory from the 19th century. (It involved “the Slave Power,” which some 19th-century Americans believed had been responsible for poisoning several officials who had in fact died of natural causes.) “In America,” he adds, “it is always a paranoid time.”
That’s the core argument of “The United States of Paranoia: A Conspiracy,” a new cultural history by the Reason magazine editor. Walker’s book is a riposte of sorts to the most famous treatment of America’s suspicious fantasies, Richard Hofstadter’s “The Paranoid Style in American Politics,” an essay first published in 1964 and oft cited since. Walker calls Hofstadter’s essay “flawed but fascinating,” and gives Hofstadter credit for the canny observation that the people who battle conspiracies have a tendency to form organizations and initiatives that eerily resemble those of their alleged foes. (Joe McCarthy, meet Joseph Stalin; you two guys have a lot in common.) But where Walker feels Hofstadter went wrong is in his assertion that “political paranoia is ‘the preferred style only of minority movements’” and that the style has “a greater affinity for bad causes than good.”
Au contraire, says Walker. “Educated elites have conspiracy theories, too” and the nation’s long history of “moral panics” illustrates the ways that “influential social institutions” — from the government to churches and political parties to the press — engage in paranoid thinking, sometimes with lethal results. “When I say virtually everyone is capable of paranoid thinking,” Walker writes, “I really do mean everyone, including you, me and the founding fathers … It is even possible to be paranoid about paranoids.” He then proceeds, in lively and often witty fashion, to prove it. Some of what Walker has to say will be familiar, but few readers are likely to get to the end of the book without having cherished notions challenged.
“The United States of Paranoia” divides conspiracy theory into five modes: the Enemy Outside, the Enemy Within, the Enemy Below, the Enemy Above and the Benevolent Conspiracy. These categories tend to ooze into and around each other: For example, the early colonists’ belief that Native Americans worshiped or were in league with the Christian Devil fed the furor of the witch hunts of the late 1600s. (For a fuller development of how the crisis in Salem can be seen as a manifestation of anxieties about the Indian Wars, see Mary Beth Norton’s excellent “In the Devil’s Snare.”) Walker’s labels are mostly self-explanatory, but, FYI, the Benevolent Conspiracy encompasses notions of secret societies, like the Rosicrucians, or unseen forces, like wise aliens or angels, who supposedly guide or protect humanity.
Walker would need six more volumes to provide a comprehensive account of the various conspiratorial beliefs that have seized Americans, so don’t expect to find every crank hoedown accounted for here. Many of these beliefs, he notes, have some basis in fact: While it’s highly unlikely that “night doctors” prowled cities looking for African-Americans to kidnap, kill and dissect (a widespread rumor in early 20th-century black communities), it is certainly a documented truth that the white medical establishment, being a white-dominated establishment, had a cavalier attitude toward its black patients and sometimes used them inhumanly, as in the infamous Tuskegee Experiment of 1932 to 1972. White people really have conspired against blacks and lied about it, which is one reason why conspiracy theories about everything from AIDS to fast food have flourished in African-American communities.
Walker regards conspiracy theory as a form of “folklore,” because “it says something true about the anxieties and experiences of the people who believe and repeat it, even if it says nothing true about the objects of the theory itself.” Perhaps the rich and powerful are not giant space lizards in disguise (as noted kook David Icke — not covered in this book — claims), but they do seem to treat the rest of us with a coldblooded detachment that’s rather reptilian, so the theory has its resonance. For this and other reasons, Walker states, “I’m not out to espouse or debunk any particular conspiracy theories,” but rather to tease out what they reveal about our collective psyche.
This is a tricky brief because, as Walker himself admits, some conspiracy theories — such as the activities of the FBI’s COINTELPRO program to investigate “anti-American” groups in the 1960s and ’70s — are documented, while some of the undocumented ones are more credible than others. “It would be absurd,” he writes, “to deny that conspiracies can be real … The world is filled with plots both petty and grand, though never as enormous as the ancient cabals described in the most baroque conspiracy literature.”
This book is part Greatest Hits — you find discussions of JFK, Watergate, the Freemasons, birthers and the Satanic ritual abuse panic of the 1980s — and a few lesser-known oddities, like a subculture that believes “certain digital time displays, particularly 11:11, might be messages from another planet.” For the connoisseur, an entire chapter on John Todd — a lecturer on the evangelical circuit during the late 1970s who revealed the “secrets” of the vast witchcraft-practicing conspiracy from which he’d defected — offers a delicious farrago of crackpotiana. According to Todd, the Illuminati (aka the Council on Foreign Relations, aka the Rothschild family) had everything from Standard Oil to the John Birch Society to the ACLU dancing from its puppet strings. Ayn Rand, a mistress of Philippe Rothschild, wrote, at his order, a novel, “Atlas Shrugged,” which was mostly read by “Communists.” The Denny’s logo is really the symbol for the “eightfold path of what a witch must master to be a powerful witch” and Elton John “has never written a song that was not written in witch language.” Also, Todd claimed to have personally seen a copy of the Necronomicon — which was, in case you didn’t know, an inspiration for the Book of Mormon (the religious text, not the Broadway musical).
Some of the most piquant chapters in the book chronicle the many ironic conspiracies devised in the postwar years, from a faith called Discordianism, invented by a couple of high school pals who “shared an affection for crackpots, a distaste for religion and a fondness for pranks,” to Robert Anton Wilson’s “Illuminatus!” trilogy and the Church of the SubGenius, as well as other satirical or semi-satirical tricksters who frolicked through the pages of the alternative press during the 1960s and ’70s. These wags planted bogus “clues” and sent letters to outfits like the Christian Anti-Communist Crusade, professing to be the Illuminati and admitting, “We’ve taken over the Rock Music business.” The hoaxers were astonished at and chastened by how readily their mischief was incorporated into existing conspiracy theories. Even when the parodic element was recognized, believers viewed it as a clever disguise for the truth.
Then some of the pranksters succumbed to paranoid fantasies themselves, like Kerry Thornley, a co-inventor of Discordianism, and even, temporarily, Paul Krassner, editor of the counterculture magazine the Realist. Krassner, who called himself an “investigative satirist,” fell down a rabbit hole when his research into the Church of Scientology brought him under the sway of Mae Brussell, a California woman who thought virtually every aspect of modern life was part of a government plot to discredit the New Left. He started to think he was being followed and clicked out SOS messages on his ballpoint pen. “I had wanted to expose the dangers of Scientology,” the astute Krassner wrote later, “but instead I joined a cult of conspiracy … I thought that what I published was so important that I wanted to be persecuted, in order to validate the work. In the process, I became attached to conspiracy.”
As Walker sees it, our brains are predisposed to see patterns in random data and to apply stories to explain them, which is why conspiracy theory can be so contagious. Although conspiracies do exist, we need to be vigilant against our propensity to find them whether they are there or not. The most sensible outlook would appear to be that of Robert Anton Wilson, who concluded that “powerful people” could well be “engaged in criminal plots” but who found it unlikely that “the conspirators were capable of carrying out those plots competently.” Or, I would add, of covering them up effectively. It’s the ineptness of human beings in executing elaborate schemes and then shutting up about it afterward that makes me skeptical of almost all conspiracy theories. Besides, if the U.S. government was masterful enough to engineer the 9/11 attacks, why couldn’t it also plant some WMD in Iraq?
But there I go, again, debunking. The problem with trying to interpret conspiracy theories as folklore or mythic archetype, without weighing in on their real-world veracity, is that veracity is at the very core of conspiracy theory. Apart from wags like Wilson, the main thing its adherents most care about is the status of their beliefs as fact. Conspiracy theory is an argument about how the world really is. It can only tell us “something true about the anxieties and experiences of the people who believe and repeat it” if it is an invention of their minds and emotions instead. That I believe there’s a LensCrafters on the northeast corner of the intersection of Sixth Avenue and West 13th Street in Manhattan reveals little about my anxieties or experiences (beyond my experience of walking past it a lot) because there really is a LensCrafters there. That’s just a fact.
On the other hand, my beliefs about the racism and militancy of the American militia groups of the 1990s were, until I read “United States of Paranoia,” largely colored by what Walker believes to be the paranoid biases of law enforcement, government and the press. Facts can shift that. I didn’t know about the many connections between those groups and black nationalist and hip-hop figures who shared the militias’ opposition to “globalism, federal power and paramilitary policing.” I didn’t know that nearly half of the Branch Davidians killed in the Waco siege of 1993 were people of color. I didn’t know that authorities were alerted to the violent plans or racist doings of several right-wing fringe outfits by militias who found out about them and thought they ought to be stopped.
We’ll never stop quarreling over the blizzards of facts deployed by conspiracy theorists, but Walker has succeeded in proving at least one thing: We’ve all got a little paranoia in us. Even me.
A nation of truthers
A new history of political paranoia argues that it's a fundamental part of our national character
By Laura Miller
“It was a paranoid time,” writes Jesse Walker after recounting an elaborate, half-forgotten conspiracy theory from the 19th century. (It involved “the Slave Power,” which some 19th-century Americans believed had been responsible for poisoning several officials who had in fact died of natural causes.) “In America,” he adds, “it is always a paranoid time.”
That’s the core argument of “The United States of Paranoia: A Conspiracy,” a new cultural history by the Reason magazine editor. Walker’s book is a riposte of sorts to the most famous treatment of America’s suspicious fantasies, Richard Hofstadter’s “The Paranoid Style in American Politics,” an essay first published in 1964 and oft cited since. Walker calls Hofstadter’s essay “flawed but fascinating,” and gives Hofstadter credit for the canny observation that the people who battle conspiracies have a tendency to form organizations and initiatives that eerily resemble those of their alleged foes. (Joe McCarthy, meet Joseph Stalin; you two guys have a lot in common.) But where Walker feels Hofstadter went wrong is in his assertion that “political paranoia is ‘the preferred style only of minority movements’” and that the style has “a greater affinity for bad causes than good.”
Au contraire, says Walker. “Educated elites have conspiracy theories, too” and the nation’s long history of “moral panics” illustrates the ways that “influential social institutions” — from the government to churches and political parties to the press — engage in paranoid thinking, sometimes with lethal results. “When I say virtually everyone is capable of paranoid thinking,” Walker writes, “I really do mean everyone, including you, me and the founding fathers … It is even possible to be paranoid about paranoids.” He then proceeds, in lively and often witty fashion, to prove it. Some of what Walker has to say will be familiar, but few readers are likely to get to the end of the book without having cherished notions challenged.
“The United States of Paranoia” divides conspiracy theory into five modes: the Enemy Outside, the Enemy Within, the Enemy Below, the Enemy Above and the Benevolent Conspiracy. These categories tend to ooze into and around each other: For example, the early colonists’ belief that Native Americans worshiped or were in league with the Christian Devil fed the furor of the witch hunts of the late 1600s. (For a fuller development of how the crisis in Salem can be seen as a manifestation of anxieties about the Indian Wars, see Mary Beth Norton’s excellent “In the Devil’s Snare.”) Walker’s labels are mostly self-explanatory, but, FYI, the Benevolent Conspiracy encompasses notions of secret societies, like the Rosicrucians, or unseen forces, like wise aliens or angels, who supposedly guide or protect humanity.
Walker would need six more volumes to provide a comprehensive account of the various conspiratorial beliefs that have seized Americans, so don’t expect to find every crank hoedown accounted for here. Many of these beliefs, he notes, have some basis in fact: While it’s highly unlikely that “night doctors” prowled cities looking for African-Americans to kidnap, kill and dissect (a widespread rumor in early 20th-century black communities), it is certainly a documented truth that the white medical establishment, being a white-dominated establishment, had a cavalier attitude toward its black patients and sometimes used them inhumanly, as in the infamous Tuskegee Experiment of 1932 to 1972. White people really have conspired against blacks and lied about it, which is one reason why conspiracy theories about everything from AIDS to fast food have flourished in African-American communities.
Walker regards conspiracy theory as a form of “folklore,” because “it says something true about the anxieties and experiences of the people who believe and repeat it, even if it says nothing true about the objects of the theory itself.” Perhaps the rich and powerful are not giant space lizards in disguise (as noted kook David Icke — not covered in this book — claims), but they do seem to treat the rest of us with a coldblooded detachment that’s rather reptilian, so the theory has its resonance. For this and other reasons, Walker states, “I’m not out to espouse or debunk any particular conspiracy theories,” but rather to tease out what they reveal about our collective psyche.
This is a tricky brief because, as Walker himself admits, some conspiracy theories — such as the activities of the FBI’s COINTELPRO program to investigate “anti-American” groups in the 1960s and ’70s — are documented, while some of the undocumented ones are more credible than others. “It would be absurd,” he writes, “to deny that conspiracies can be real … The world is filled with plots both petty and grand, though never as enormous as the ancient cabals described in the most baroque conspiracy literature.”
This book is part Greatest Hits — you find discussions of JFK, Watergate, the Freemasons, birthers and the Satanic ritual abuse panic of the 1980s — and a few lesser-known oddities, like a subculture that believes “certain digital time displays, particularly 11:11, might be messages from another planet.” For the connoisseur, an entire chapter on John Todd — a lecturer on the evangelical circuit during the late 1970s who revealed the “secrets” of the vast witchcraft-practicing conspiracy from which he’d defected — offers a delicious farrago of crackpotiana. According to Todd, the Illuminati (aka the Council on Foreign Relations, aka the Rothschild family) had everything from Standard Oil to the John Birch Society to the ACLU dancing from its puppet strings. Ayn Rand, a mistress of Philippe Rothschild, wrote, at his order, a novel, “Atlas Shrugged,” which was mostly read by “Communists.” The Denny’s logo is really the symbol for the “eightfold path of what a witch must master to be a powerful witch” and Elton John “has never written a song that was not written in witch language.” Also, Todd claimed to have personally seen a copy of the Necronomicon — which was, in case you didn’t know, an inspiration for the Book of Mormon (the religious text, not the Broadway musical).
Some of the most piquant chapters in the book chronicle the many ironic conspiracies devised in the postwar years, from a faith called Discordianism, invented by a couple of high school pals who “shared an affection for crackpots, a distaste for religion and a fondness for pranks,” to Robert Anton Wilson’s “Illuminatus!” trilogy and the Church of the SubGenius, as well as other satirical or semi-satirical tricksters who frolicked through the pages of the alternative press during the 1960s and ’70s. These wags planted bogus “clues” and sent letters to outfits like the Christian Anti-Communist Crusade, professing to be the Illuminati and admitting, “We’ve taken over the Rock Music business.” The hoaxers were astonished at and chastened by how readily their mischief was incorporated into existing conspiracy theories. Even when the parodic element was recognized, believers viewed it as a clever disguise for the truth.
Then some of the pranksters succumbed to paranoid fantasies themselves, like Kerry Thornley, a co-inventor of Discordianism, and even, temporarily, Paul Krassner, editor of the counterculture magazine the Realist. Krassner, who called himself an “investigative satirist,” fell down a rabbit hole when his research into the Church of Scientology brought him under the sway of Mae Brussell, a California woman who thought virtually every aspect of modern life was part of a government plot to discredit the New Left. He started to think he was being followed and clicked out SOS messages on his ballpoint pen. “I had wanted to expose the dangers of Scientology,” the astute Krassner wrote later, “but instead I joined a cult of conspiracy … I thought that what I published was so important that I wanted to be persecuted, in order to validate the work. In the process, I became attached to conspiracy.”
As Walker sees it, our brains are predisposed to see patterns in random data and to apply stories to explain them, which is why conspiracy theory can be so contagious. Although conspiracies do exist, we need to be vigilant against our propensity to find them whether they are there or not. The most sensible outlook would appear to be that of Robert Anton Wilson, who concluded that “powerful people” could well be “engaged in criminal plots” but who found it unlikely that “the conspirators were capable of carrying out those plots competently.” Or, I would add, of covering them up effectively. It’s the ineptness of human beings in executing elaborate schemes and then shutting up about it afterward that makes me skeptical of almost all conspiracy theories. Besides, if the U.S. government was masterful enough to engineer the 9/11 attacks, why couldn’t it also plant some WMD in Iraq?
But there I go, again, debunking. The problem with trying to interpret conspiracy theories as folklore or mythic archetype, without weighing in on their real-world veracity, is that veracity is at the very core of conspiracy theory. Apart from wags like Wilson, the main thing its adherents most care about is the status of their beliefs as fact. Conspiracy theory is an argument about how the world really is. It can only tell us “something true about the anxieties and experiences of the people who believe and repeat it” if it is an invention of their minds and emotions instead. That I believe there’s a LensCrafters on the northeast corner of the intersection of Sixth Avenue and West 13th Street in Manhattan reveals little about my anxieties or experiences (beyond my experience of walking past it a lot) because there really is a LensCrafters there. That’s just a fact.
On the other hand, my beliefs about the racism and militancy of the American militia groups of the 1990s were, until I read “United States of Paranoia,” largely colored by what Walker believes to be the paranoid biases of law enforcement, government and the press. Facts can shift that. I didn’t know about the many connections between those groups and black nationalist and hip-hop figures who shared the militias’ opposition to “globalism, federal power and paramilitary policing.” I didn’t know that nearly half of the Branch Davidians killed in the Waco siege of 1993 were people of color. I didn’t know that authorities were alerted to the violent plans or racist doings of several right-wing fringe outfits by militias who found out about them and thought they ought to be stopped.
We’ll never stop quarreling over the blizzards of facts deployed by conspiracy theorists, but Walker has succeeded in proving at least one thing: We’ve all got a little paranoia in us. Even me.
Coincidences
Coincidences have always fascinated me. . . FLH
Chasing Coincidences
Statistics: Why it’s hard to recognize the unlikely.
By Amir D. Aczel
Whenever I fly, I like to talk to the person sitting next to me. Once in a while, I find that we know at least one person in common. If you are like me, perhaps coincidences such as this happen in your life as well.
The most unusual coincidence in my life took place when I flew from Boston, my home, to Chicago to meet Scott Isenberg, the new editor assigned to revise a statistics textbook I had authored a few years earlier. We were having dinner at a restaurant overlooking Lake Michigan, and Scott began to talk nostalgically about the orange groves that graced his neighborhood in a small town in California, where he grew up. I recalled that my wife, Debra, who is also from California, used to talk about orange groves as well. We both smiled and continued our conversation—after all, the state has 38 million people. But every remark he made about his childhood abode reminded me of something that my wife had told me. As we continued to notice more of these coincidences, I told him Debra’s name and he literally jumped out of his chair. It turned out that they had been friends in high school. You might think, what is the probability of such a rare event? It may be one in many millions.
The simple question might be “why do such unlikely coincidences occur in our lives?” But the real question is how to define the unlikely. You know that a situation is uncommon just from experience. But even the concept of “uncommon” assumes that like events in the category are common. How do we identify the other events to which we can compare this coincidence? If you can identify other events as likely, then you can calculate the mathematical probability of this particular event as exceptional.
Probabilities are defined as relative measures in something called the “sample space,” which is the set of all possible outcomes of an experiment—such as drawing a card out of a well-shuffled deck, rolling a fair die, or spinning a roulette wheel. We generally assume that every elementary outcome of the experiment (any given card or any of the possible numbers, in the case of dice or roulette) has an equal likelihood, although the theory can handle sample spaces with varying likelihoods as well. If we can define a sample space in a real-world situation that may not involve a game of chance, then we can measure probabilities through this sample space.
In its essence, the idea of coincidences could be explained (somewhat simplistically) using a deck of cards. Drawing the ace of spades out of a well-shuffled deck of 52 cards is a relatively rare event: Its probability is only 1 in 52. We compute it using the mathematical rule that divides the size of the event, one card (if we’re talking about drawing any ace, this would be a size of four), by the size of the sample space for drawing a card out of a deck, which is 52, the total number of cards.
But if every day of your life you draw a card out of a deck, you can be sure to see the ace of spades sometimes. In fact, you expect this to happen roughly once in 52 draws. It is the fact that cards can be drawn repeatedly out of a deck (with reshuffling after every draw) that makes rare events show up.
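To make that arithmetic concrete, here is a minimal Python sketch (the draw counts are illustrative, not figures from the article): a single draw hits the ace of spades with probability 1/52, and over n independent draws with reshuffling the chance of seeing it at least once is 1 - (51/52)^n, which a quick simulation confirms.

import random

DECK_SIZE = 52          # standard deck, reshuffled before every draw
ACE_OF_SPADES = 0       # label one of the 52 cards as the ace of spades

def p_at_least_once(n_draws, p_single=1 / DECK_SIZE):
    """Exact chance of seeing the rare card at least once in n independent draws."""
    return 1 - (1 - p_single) ** n_draws

def simulate(n_draws, trials=20_000):
    """Monte Carlo check: fraction of simulated runs that see the card at least once."""
    hits = sum(
        any(random.randrange(DECK_SIZE) == ACE_OF_SPADES for _ in range(n_draws))
        for _ in range(trials)
    )
    return hits / trials

if __name__ == "__main__":
    for days in (1, 52, 365):   # one draw, a "deck's worth" of days, a year of daily draws
        print(days, round(p_at_least_once(days), 3), round(simulate(days), 3))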
This is essentially what happens in our lives. We are exposed to possible events all the time: some of them probable, but many of them highly improbable. Each rare event—by itself—is unlikely. But by the mere act of living, we constantly draw cards out of decks. Because something must happen when a card is drawn, so to speak, the highly improbable does appear from time to time.
It is the repetitiveness of the experiment that makes the improbable take place. The catch is that you can’t tell beforehand which of a very large set of improbable events will transpire. The fact that one out of many possible rare outcomes does happen should not surprise us because of the number of possibilities for extraordinary events to occur. The probabilities of these singly unlikely happenings compound statistically, so that the chance of at least one of many highly improbable events occurring becomes quite high.
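A short sketch of that compounding, with purely hypothetical numbers (the count of possible coincidences, their individual odds, and the exposure period are all assumptions for illustration): even when each event is vanishingly unlikely on any given day, the chance that at least one of them eventually happens grows toward certainty.

# All numbers below are illustrative assumptions, not data from the article.
N_POSSIBLE_COINCIDENCES = 1_000   # distinct rare coincidences you are "exposed" to each day
P_EACH_PER_DAY = 1e-6             # chance of any single one of them on a given day

for years in (1, 10, 30):
    exposures = N_POSSIBLE_COINCIDENCES * 365 * years
    # Assuming independence: P(at least one) = 1 - P(none of them ever happens).
    p_at_least_one = 1 - (1 - P_EACH_PER_DAY) ** exposures
    print(f"{years:>2} years: P(at least one rare coincidence) is about {p_at_least_one:.3f}")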
So if Scott and Debra had not been friends in high school, I could have found out at some point in my life that my father—rather than my wife—was the friend of the father of the person sitting next to me on a transatlantic flight. Or that my sister took piano lessons from the mother of my new neighbor who’d just moved here from another state. All of these are rare events, but we are exposed to so many possibilities for them to occur that, even though they are rare, some of them have to happen.
Such an event has a tiny probability of occurring only if we specify beforehand that this is what will happen. If I went to Chicago expecting Scott to know my wife, its occurrence would be an event of fantastically small probability. Within the possible occurrence of millions of other coincidences in my life, it shouldn’t shock me that I did once observe a very unlikely coincidence.
Coincidences and their analysis have led to important academic research in all areas where probability plays a role. Persi Diaconis, professor of statistics at Stanford University, describes extremely unlikely coincidences as embodying the “blade of grass paradox.” If you were to stand in a meadow and reach down to touch a blade of grass, there are millions of grass blades that you might touch. But you will, in fact, touch one of them. The a priori fact that the blade you touch will be any particular one has an extremely tiny probability, but such an occurrence must take place if you are going to touch a blade of grass.
Mathematically, the sample space (in this case, a field of grass) is made up of many elementary outcomes, which are the particulars of a sample space—a single card, in the card-drawing example, or a blade of grass in Diaconis’ paradox. Elementary outcomes can then be classed into larger events. Drawing an ace is the event made up of the four aces, so the event has four possibilities out of the sample space of 52. While the relative size of each event determines its probability, philosophically we may look at an experiment as being made up of many elementary outcomes, all of them equally likely.
This means that we assume that any card has just as good a chance of being picked as any other, and so does every blade of grass in a meadow. Thus the knowledge that one elementary outcome must happen, should make us realize that the unlikely and the likely both can take place. It’s a matter of frequency. Events that contain many elementary outcomes are more likely than those with few of them: Drawing any ace is four times as likely as drawing the ace of clubs.
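As a worked check of that frequency point, under the article's equal-likelihood assumption: an event's probability is the number of elementary outcomes it contains divided by the size of the sample space, so drawing any ace is exactly four times as likely as drawing the ace of clubs.

from fractions import Fraction

SAMPLE_SPACE = 52                            # equally likely elementary outcomes (cards)
p_ace_of_clubs = Fraction(1, SAMPLE_SPACE)   # event containing one elementary outcome
p_any_ace = Fraction(4, SAMPLE_SPACE)        # event containing four elementary outcomes

print(p_ace_of_clubs, p_any_ace, p_any_ace / p_ace_of_clubs)   # 1/52 1/13 4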
The devil is in the details of how we interpret what we see in life. And here, psychology—more so than mathematics or logic—plays a key role. We tend to remember coincidences such as the one I experienced with my editor Scott and conveniently forget the thousands of times we may have met someone and had a conversation finding absolutely nothing in common. We remember the time we rushed through security, ran to the plane, and made it just in time for the airplane’s door to shut behind us, and forget all the times we sat waiting for hours in airport terminals. And we also seem to be hardwired to exaggerate the chance events in our lives—because they provide us with good cocktail party stories. Psychological factors can well mask the probabilistic reality. All these factors, mathematical, interpretational, and psychological, affect how we view and understand the rare events in our personal lives.
We also need to identify the correct sample space, and there is no obvious, unique way of doing this. In probability theory, we usually assume that every elementary outcome is equally likely. So whom do we include in such an analysis when trying to understand a coincidence such as mine? Would it be all Americans? All Americans within certain professions? All Americans within certain socioeconomic classes? In the case of a coincidence on a flight, you could exclude all Americans who don’t fly, or don’t fly often—but here, the coincidence did not include people who fly (it was only I who flew). Since there may be no “correct” way to identify a sample space in many cases involving rare events, the occurrence of such startling coincidences in everyday life may well remain a mystery.
Amir D. Aczel is author of a dozen nonfiction books on the subjects of science and mathematics, most of which have appeared on various bestseller lists in the United States and abroad. He has appeared on more than 50 television programs and has published science articles in Scientific American, The New York Times, and others. He is a Guggenheim Fellow and a research fellow in the history of science at Boston University.
On Big Data
Is Big Data an Economic Big Dud?
By JAMES GLANZ
Published: August 17, 2013
IF pencil marks on some colossal doorjamb could measure the growth of the Internet, they would probably be tracking the amount of data sloshing through the public network that spans the planet. Christened by the World Economic Forum as “the new oil” and “a new asset class,” these vast loads of data have been likened to transformative innovations like the steam locomotive, electricity grids, steel, air-conditioning and the radio.
Graphic: “Surf’s Up”
The astounding rate of growth would make any parent proud. There were 30 billion gigabytes of video, e-mails, Web transactions and business-to-business analytics in 2005. The total is expected to reach more than 20 times that figure in 2013, with off-the-charts increases to follow in the years ahead, according to Cisco, the networking giant.
How much data is that? Cisco estimates that in 2012, some two trillion minutes of video alone traversed the Internet every month. That translates to over a million years per week of everything from video selfies and nannycams to Netflix downloads and “Battlestar Galactica” episodes.
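A rough back-of-the-envelope check of that conversion (a sketch; the weeks-per-month average is my assumption, the traffic figure is the one quoted): two trillion minutes a month comes out to on the order of a million years of viewing per week.

# Rough check of the video-traffic conversion; weeks-per-month is an assumed average.
minutes_per_month = 2e12                 # ~two trillion minutes of video per month (as quoted)
weeks_per_month = 365 / 7 / 12           # about 4.35 weeks in an average month
minutes_in_a_year = 60 * 24 * 365        # minutes of viewing in one year

years_of_video_per_week = minutes_per_month / weeks_per_month / minutes_in_a_year
print(f"Roughly {years_of_video_per_week:,.0f} years of video traverse the Internet each week")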
What is sometimes referred to as the Internet’s first wave — say, from the 1990s until around 2005 — brought completely new services like e-mail, the Web, online search and eventually broadband. For its next act, the industry has pinned its hopes, and its colossal public relations machine, on the power of Big Data itself to supercharge the economy.
There is just one tiny problem: the economy is, at best, in the doldrums and has stayed there during the latest surge in Web traffic. The rate of productivity growth, whose steady rise from the 1970s well into the 2000s has been credited to earlier phases in the computer and Internet revolutions, has actually fallen. The overall economic trends are complex, but an argument could be made that the slowdown began around 2005 — just when Big Data began to make its appearance.
Those factors have some economists questioning whether Big Data will ever have the impact of the first Internet wave, let alone the industrial revolutions of past centuries. One theory holds that the Big Data industry is thriving more by cannibalizing existing businesses in the competition for customers than by creating fundamentally new opportunities.
In some cases, online companies like Amazon and eBay are fighting among themselves for customers. But in others — here is where the cannibals enter — the companies are eating up traditional advertising, media, music and retailing businesses, said Joel Waldfogel, an economist at the University of Minnesota who has studied the phenomenon.
“One falls, one rises — it’s pretty clear the digital kind is a substitute to the physical kind,” he said. “So it would be crazy to count the whole rise in digital as a net addition to the economy.”
Robert J. Gordon, a professor of economics at Northwestern University, said comparing Big Data to oil was promotional nonsense. “Gasoline made from oil made possible a transportation revolution as cars replaced horses and as commercial air transportation replaced railroads,” he said. “If anybody thinks that personal data are comparable to real oil and real vehicles, they don’t appreciate the realities of the last century.”
Other economists believe that Big Data’s economic punch is just a few years away, as engineers trained in data manipulation make their way through college and as data-driven start-ups begin hiring. And of course the recession could be masking the impact of the data revolution in ways economists don’t yet grasp. Still, some suspect that in the end our current framework for understanding Big Data and “the cloud” could be a mirage.
“I think it’s conceivable that the data era will be a bust for the things people expect it to be useful for,” said Scott Wallsten, a senior fellow at the Technology Policy Institute and the Georgetown Center for Business and Public Policy. Some entirely new use will have to turn up for data to fulfill its economic potential, he added.
There is no disputing that a wide spectrum of businesses, from e-marketers to pharmaceutical companies, are now using huge amounts of data as part of their everyday business.
Josh Marks is the chief executive of one such company, masFlight, which helps airlines use enormous data sets to reduce fuel consumption and improve overall performance. Although his first mission is to help clients compete with other airlines for customers, Mr. Marks believes that efficiencies like those his company is chasing should eventually expand the global economy.
For now, though, he acknowledges that most of the raw data flowing across the Web has limited economic value: far more useful is specialized data in the hands of analysts with a deep understanding of specific industries. “The promises that are made around the ability to manipulate these very large data sets in real time are overselling what they can do today,” Mr. Marks said.
Some economists argue that it is often difficult to estimate the true value of new technologies, and that Big Data may already be delivering benefits that are uncounted in official economic statistics. Cat videos and television programs on Hulu, for example, produce pleasure for Web surfers — so shouldn’t economists find a way to value such intangible activity, whether or not it moves the needle of the gross domestic product?
In addition, infrastructure investments often take years to pay off in a big way, said Shane Greenstein, an economist at Northwestern University. He cited high-speed Internet connections laid down in the late 1990s that have driven profits only recently. But he noted that in contrast to the Internet’s first wave, which created services like the Web and e-mail, the impact of the second wave — the Big Data revolution — is harder to discern above the noise of broader economic activity.
“It could be just time delay, or it could be that the value just isn’t there,” said Mr. Greenstein, who has studied the competitive success of online businesses in media, advertising and retailing.
Perhaps surprisingly, the parallel most tightly embraced by digital futurists — the rise of the electricity grid — is largely dismissed by those who have studied the history of the subject. The idea is that a ubiquitous Internet will make data and “cloud” computing available anywhere, like electricity through a socket.
The numerical comparisons are tantalizing. As illustrated in “The Electric City,” by Harold L. Platt, the booming quantity and adoption rates of electricity flowing on the Chicago grid in the late 19th and early 20th centuries instantly bring to mind those charts showing data growth today.
Despite those similarities, Mr. Platt, a professor emeritus of history at Loyola University Chicago, said it was unlikely that the revolutions unleashed in manufacturing, domestic life, transportation and high and low society by electricity could ever be matched by the data era. “I’d be hard pressed to quickly draw comparisons,” he said.
But even as Mr. Platt, 68, spoke by cellphone from Chicago, fragments of today’s inescapable data flood found him as he received messages from his grown children. “I have to text them or else they won’t answer me back,” Mr. Platt said gamely. “I’m going with the flow.”
James Glanz is an investigative reporter for The New York Times.
Saturday, August 17, 2013
Thurston Clarke - JFK's Last 100 Days
In the coming days there promises to be a plethora of books on President John F. Kennedy as we move toward the 50th anniversary of his assassination on November 22, 1963. I thoroughly enjoyed this account of his last 100 days. This book was a genuine pleasure to read. I will be adding more to this post in the next few days.
This book is clearly admiring of JFK, but it is not total hagiography. The author is clear about Kennedy's philandering and his mendacity regarding his health. The general theme is that JFK was getting his act together in his last days and moving the country forward. It is pleasant to think this was so.
Makes us wish more than ever that we knew what would have happened in the country and the world if he had lived.
The author makes it clear that JFK had little use for Lyndon Johnson. Would he have dumped LBJ from the ticket in 1964? We will never know, of course, for the available evidence is mixed.
The book begins in August of 1963 with the death of infant Patrick, the Kennedys' second-born son. Patrick died shortly after being born prematurely. This tragedy hung over both Jack and Jackie in the President's last days.
One thing that struck me reading this book is that it seemed JFK left DC every weekend. Did the President of the United States never work on weekends back then? :)
The biggest takeaway is that the author thinks that had JFK lived, the '64 civil rights bill would still have become law. We will never know, but from what I know, LBJ got that bill pushed thru Congress. I have my doubts that it would have passed if JFK were still the President. Ditto the voting rights act and Medicare. It's chilling to think that it might have taken Johnson becoming President under the worst of circumstances to achieve these landmark progressive measures.
The book cites many instances of people warning JFK not to go to Dallas. The President didn't want to go, but felt he had to in order to bridge the schism in Lone Star politics between Sen. Yarborough and Gov. Connally. Good politics dictated a trip to Texas. He was even warned to visit Texas, yes, but to avoid Dallas, to no avail. The motorcade could have followed a route that avoided driving slowly past the infamous Texas School Book Depository, but it didn't happen. History turned on a particular motorcade route.
JFK was perhaps the most intellectually curious President we've had. Only TR comes to mind to rival him in this regard. The author stresses that Kennedy could get bored quickly and would doodle in meetings when he became bored and his mind wandered.
This President was greatly influenced by history. He was always mindful of what his place in history might be. JFK was perhaps our most historically minded President. Only Jefferson comes to mind to rival him in this regard.
This President's most enduring legacy is his handling of the Cuban missile crisis of 1962, a harrowing crisis that really could have led to nuclear war. If for nothing else, JFK deserves a high place in world history for guiding us thru that unique moment in world events. Ditto the Berlin crisis in 1961.
Indeed, JFK came into office seeking to lower tensions between the US and the USSR and further the cause of world peace. He achieved his aim with the nuclear test ban treaty, an unmitigated success.
The Republican Con Game
Republicans Against Reality
By PAUL KRUGMAN
Published: August 4, 2013
Last week House Republicans voted for the 40th time to repeal Obamacare. Like the previous 39 votes, this action will have no effect whatsoever. But it was a stand-in for what Republicans really want to do: repeal reality, and the laws of arithmetic in particular. The sad truth is that the modern G.O.P. is lost in fantasy, unable to participate in actual governing.
Just to be clear, I’m not talking about policy substance. I may believe that Republicans have their priorities all wrong, but that’s not the issue here. Instead, I’m talking about their apparent inability to accept very basic reality constraints, like the fact that you can’t cut overall spending without cutting spending on particular programs, or the fact that voting to repeal legislation doesn’t change the law when the other party controls the Senate and the White House.
Am I exaggerating? Consider what went down in Congress last week.
First, House leaders had to cancel planned voting on a transportation bill, because not enough representatives were willing to vote for the bill’s steep spending cuts. Now, just a few months ago House Republicans approved an extreme austerity budget, mandating severe overall cuts in federal spending — and each specific bill will have to involve large cuts in order to meet that target. But it turned out that a significant number of representatives, while willing to vote for huge spending cuts as long as there weren’t any specifics, balked at the details. Don’t cut you, don’t cut me, cut that fellow behind the tree.
Then House leaders announced plans to hold a vote on doubling the amount of cuts from the food stamp program — a demand that is likely to sink the already struggling effort to agree with the Senate on a farm bill.
Then they held the pointless vote on Obamacare, apparently just to make themselves feel better. (It’s curious how comforting they find the idea of denying health care to millions of Americans.) And then they went home for recess, even though the end of the fiscal year is looming and hardly any of the legislation needed to run the federal government has passed.
In other words, Republicans, confronted with the responsibilities of governing, essentially threw a tantrum, then ran off to sulk.
How did the G.O.P. get to this point? On budget issues, the proximate source of the party’s troubles lies in the decision to turn the formulation of fiscal policy over to a con man. Representative Paul Ryan, the chairman of the House Budget Committee, has always been a magic-asterisk kind of guy — someone who makes big claims about having a plan to slash deficits but refuses to spell out any of the all-important details. Back in 2011 the Congressional Budget Office, in evaluating one of Mr. Ryan’s plans, came close to open sarcasm; it described the extreme spending cuts Mr. Ryan was assuming, then remarked, tersely, “No proposals were specified that would generate that path.”
What’s happening now is that the G.O.P. is trying to convert Mr. Ryan’s big talk into actual legislation — and is finding, unsurprisingly, that it can’t be done. Yet Republicans aren’t willing to face up to that reality. Instead, they’re just running away.
When it comes to fiscal policy, then, Republicans have fallen victim to their own con game. And I would argue that something similar explains how the party lost its way, not just on fiscal policy, but on everything.
Think of it this way: For a long time the Republican establishment got its way by playing a con game with the party’s base. Voters would be mobilized as soldiers in an ideological crusade, fired up by warnings that liberals were going to turn the country over to gay married terrorists, not to mention taking your hard-earned dollars and giving them to Those People. Then, once the election was over, the establishment would get on with its real priorities — deregulation and lower taxes on the wealthy.
At this point, however, the establishment has lost control. Meanwhile, base voters actually believe the stories they were told — for example, that the government is spending vast sums on things that are a complete waste or at any rate don’t do anything for people like them. (Don’t let the government get its hands on Medicare!) And the party establishment can’t get the base to accept fiscal or political reality without, in effect, admitting to those base voters that they were lied to.
The result is what we see now in the House: a party that, as I said, seems unable to participate in even the most basic processes of governing.
What makes this frightening is that Republicans do, in fact, have a majority in the House, so America can’t be governed at all unless a sufficient number of those House Republicans are willing to face reality. And that quorum of reasonable Republicans may not exist.
This article has been revised to reflect the following correction:
Correction: August 9, 2013
An earlier version of this column misstated the House Republican plan on food stamps. It will double the amount of planned cuts, not halve the benefits.
My Talking Points
My talking points for today. 1) At least I don't live in Texas. 2) It's not a good time to be walking like an Egyptian. 3) Call me Speedoo but my real name is Mr. Earl. 4) All the gold in California is in a bank in the middle of Beverly Hills in somebody else's name. 5) Everything's made for love.
I've talked enough. It's somebody else's turn now.
Deborah Davis Weeks, Jane Moore Patton and 2 others like this.
Diane Bystrom: You must have had that cup of joe you were talking about.
Pat Abernethy Murphree: Yep, you people in the SE are flooding and we had a tiny bit of rain for the first time in months here in TX!
Fred Hudson: As long as the good Lord's willin and the creeks don't rise we'll be fine in the SE. As for Texas, well, I guess Gov. Perry and the legislature has everything under control.
Learned Ignorance
In this era of open sources where anybody with a computer can keep informed if he desires, it is amazing how much ignorance there is, and we all know that 99% of the ignorance is Republican.
Moment of Truthiness
By PAUL KRUGMAN
Published: August 15, 2013
We all know how democracy is supposed to work. Politicians are supposed to campaign on the issues, and an informed public is supposed to cast its votes based on those issues, with some allowance for the politicians’ perceived character and competence.
We also all know that the reality falls far short of the ideal. Voters are often misinformed, and politicians aren’t reliably truthful. Still, we like to imagine that voters generally get it right in the end, and that politicians are eventually held accountable for what they do.
But is even this modified, more realistic vision of democracy in action still relevant? Or has our political system been so degraded by misinformation and disinformation that it can no longer function?
Well, consider the case of the budget deficit — an issue that dominated Washington discussion for almost three years, although it has recently receded.
You probably won’t be surprised to hear that voters are poorly informed about the deficit. But you may be surprised by just how misinformed.
In a well-known paper with a discouraging title, “It Feels Like We’re Thinking,” the political scientists Christopher Achen and Larry Bartels reported on a 1996 survey that asked voters whether the budget deficit had increased or decreased under President Clinton. In fact, the deficit was down sharply, but a plurality of voters — and a majority of Republicans — believed that it had gone up.
I wondered on my blog what a similar survey would show today, with the deficit falling even faster than it did in the 1990s. Ask and ye shall receive: Hal Varian, the chief economist of Google, offered to run a Google Consumer Survey — a service the company normally sells to market researchers — on the question. So we asked whether the deficit has gone up or down since January 2010. And the results were even worse than in 1996: A majority of those who replied said the deficit has gone up, with more than 40 percent saying that it has gone up a lot. Only 12 percent answered correctly that it has gone down a lot.
Am I saying that voters are stupid? Not at all. People have lives, jobs, children to raise. They’re not going to sit down with Congressional Budget Office reports. Instead, they rely on what they hear from authority figures. The problem is that much of what they hear is misleading if not outright false.
The outright falsehoods, you won’t be surprised to learn, tend to be politically motivated. In those 1996 data, Republicans were much more likely than Democrats to hold false views about the deficit, and the same must surely be true today. After all, Republicans made a lot of political hay over a supposedly runaway deficit early in the Obama administration, and they have maintained the same rhetoric even as the deficit has plunged. Thus Eric Cantor, the second-ranking Republican in the House, declared on Fox News that we have a “growing deficit,” while Senator Rand Paul told Bloomberg Businessweek that we’re running “a trillion-dollar deficit every year.”
Do people like Mr. Cantor or Mr. Paul know that what they’re saying isn’t true? Do they care? Probably not. In Stephen Colbert’s famous formulation, claims about runaway deficits may not be true, but they have truthiness, and that’s all that matters.
Still, aren’t there umpires for this sort of thing — trusted, nonpartisan authorities who can and will call out purveyors of falsehood? Once upon a time, I think, there were. But these days the partisan divide runs very deep, and even those who try to play umpire seem afraid to call out falsehood. Incredibly, the fact-checking site PolitiFact rated Mr. Cantor’s flatly false statement as “half true.”
Now, Washington still does have some “wise men,” people who are treated with special deference by the news media. But when it comes to the issue of the deficit, the supposed wise men turn out to be part of the problem. People like Alan Simpson and Erskine Bowles, the co-chairmen of President Obama’s deficit commission, did a lot to feed public anxiety about the deficit when it was high. Their report was ominously titled “The Moment of Truth.” So have they changed their tune as the deficit has come down? No — so it’s no surprise that the narrative of runaway deficits remains even though the budget reality has completely changed.
Put it all together, and it’s a discouraging picture. We have an ill-informed or misinformed electorate, politicians who gleefully add to the misinformation and watchdogs who are afraid to bark. And to the extent that there are widely respected, not-too-partisan players, they seem to be fostering, not fixing, the public’s false impressions.
So what should we be doing? Keep pounding away at the truth, I guess, and hope it breaks through. But it’s hard not to wonder how this system is supposed to work.
Thursday, August 15, 2013
The Ideal English Major
July 29, 2013
Melinda Beck for The Chronicle Review
By Mark Edmundson
Soon college students all over America will be trundling to their advisers' offices to choose a major. In this moment of financial insecurity, students are naturally drawn to economics, business, and the hard sciences. But students ought to resist the temptation of those purportedly money-ensuring options and even of history and philosophy, marvelous though they may be. All students—and I mean all—ought to think seriously about majoring in English. Becoming an English major means pursuing the most important subject of all—being a human being.
An English major is much more than 32 or 36 credits including a course in Shakespeare, a course on writing before 1800, and a three-part survey of English and American lit. That's the outer form of the endeavor. It's what's inside that matters. It's the character-forming—or (dare I say?) soul-making—dimension of the pursuit that counts. And what is that precisely? Who is the English major in his ideal form? What does the English major have, what does he want, and what does he in the long run hope to become?
The English major is, first of all, a reader. She's got a book pup-tented in front of her nose many hours a day; her Kindle glows softly late into the night. But there are readers and there are readers. There are people who read to anesthetize themselves—they read to induce a vivid, continuous, and risk-free daydream. They read for the same reason that people grab a glass of chardonnay—to put a light buzz on. The English major reads because, as rich as the one life he has may be, one life is not enough. He reads not to see the world through the eyes of other people but effectively to become other people. What is it like to be John Milton, Jane Austen, Chinua Achebe? What is it like to be them at their best, at the top of their games?
English majors want the joy of seeing the world through the eyes of people who—let us admit it—are more sensitive, more articulate, shrewder, sharper, more alive than they themselves are. The experience of merging minds and hearts with Proust or James or Austen makes you see that there is more to the world than you had ever imagined. You see that life is bigger, sweeter, more tragic and intense—more alive with meaning than you had thought.
Real reading is reincarnation. There is no other way to put it. It is being born again into a higher form of consciousness than we ourselves possess. When we walk the streets of Manhattan with Walt Whitman or contemplate our hopes for eternity with Emily Dickinson, we are reborn into more ample and generous minds. "Life piled on life / Were all too little," says Tennyson's "Ulysses," and he is right. Given the ragged magnificence of the world, who would wish to live only once? The English major lives many times through the astounding transportive magic of words and the welcoming power of his receptive imagination. The economics major? In all probability he lives but once. If the English major has enough energy and openness of heart, he lives not once but hundreds of times. Not all books are worth being reincarnated into, to be sure—but those that are win Keats's sweet phrase: "a joy forever."
The economics major lives in facts and graphs and diagrams and projections. Fair enough. But the English major lives elsewhere. Remember the tale of that hoary patriarchal fish that David Foster Wallace made famous? The ancient swimmer swishes his slow bulk by a group of young carp suspended in the shallows. "How's the water?" the ancient asks. The carp keep their poise, like figures in a child's mobile, but say not a word. The old fish gone, one carp turns to another and says, "What the hell is water?"
The English major knows that the water we humans swim in is not any material entity. Our native habitat is language, words, and the English major swims through them with the old fin's enlivening awareness. But all of us, as the carp's remark suggests, live in a different relation to language. I'll put it a little tendentiously: Some of us speak, others are spoken. "Language speaks man," Heidegger famously said. To which I want to reply, Not all men, not all women: not by a long shot. Did language speak Shakespeare? Did language speak Spenser? Milton, Chaucer, Woolf, Emerson? No, not even close.
What does it mean to be spoken by language? It means to be a vehicle for expression and not a shaper of words. It means to rely on clichés and preformulated expressions. It means to be a channeler, of ad-speak, sports jargon, and the latest psychological babble. You sound not like a living man or a woman but like something much closer to a machine, trying to pass for human. You never know how you feel or what you want in life because the words at your disposal are someone else's and don't represent who you are and what you want. You don't and can't know yourself. You don't and can't know the world.
The businessman prattles about excellence, leadership, partnerships, and productivity. The athlete drones on about the game plan, the coach, one play at a time, and the inestimable blessing of having teammates who make it all possible. The politician pontificates about unity, opportunity, national greatness, and what's in it for the middle class. When such people talk, they are not so much human beings as tape loops.
The essayist John Jeremiah Sullivan catches this sort of sensibility in its extreme form in an essay about reality TV shows. There, verbal channeling reaches an almost unimaginable degree of intensity: "big mouths spewing fantastic catchphrase fountains of impenetrable self-justification." Yeah, that's about it.
The English major at her best isn't used by language; she uses it. She bends it, inflects it with irony, and lets hyperbole bloom like a firework flower when the time's right. She knows that language isn't there merely to represent the world but to interpret it. Language lets her say how she feels.
The English major believes in talk and writing and knows that any worthwhile event in life requires commentary and analysis in giant proportion. She believes that the uncommented-on life is not worth living. Then, of course, there is the commentary on the comments. There must be, as Eliot says, a hundred visions and revisions before the taking of the toast and tea—and a few after as well.
But I sometimes think that the English major's most habitual feeling about the linguistic solution in which she swims isn't practical at all. What she feels about language most of the time is wonder and gratitude. For language is a stupendous gift. It's been bequeathed to us by all of the foregoing generations. It is the creation of great souls like Shakespeare and Chaucer to be sure. But language is also the creation of salesmen and jive talkers, quacks and mountebanks, hookers and heroic warriors. We spend our lives, knowingly or not, trying to say something impeccably. We long to put the best words in the best order. (That, Coleridge said, is all that poetry really comes down to.) And when we do, we are on the lip of adding something to the language. We've perhaps made a contribution, however small, to what the critic R.P. Blackmur called the stock of available reality. And when we do, we've lived for a moment with the immortals. Poetry has been called the Olympics of language.
I love Wordsworth and Shakespeare and Donne. But I like it when a fellow pickup b-ball player points to a nervous guy skittering off to the bathroom just as the game's about to start: "He's taking a chicken pee." Yup—hit it on the head. I like it when, in the incomparable song "Juicy," Biggie Smalls describes coming up in life by letting us know that once "Birthdays was the worst days / Now we sip champagne when we thirs-tay." (And to advertise his sudden erotic ascent: "Honeys play me close like butter play toast.")
Language, a great poem in and of itself, is all around us. We live in the lap of enormous wonder, but how rarely do most of us look up and smile in gratitude and pleasure? The English major does that all the time.
The English major: in love with language and in love with life—or at least hungry for as much life as he can hold. But there's something else, too. The English major immerses himself in books and revels in language for a purpose. You might even call it a high purpose, if you're disposed to such talk. (I sometimes am.)
The English major wants to use what he knows about language and what he's learning from books as a way to confront the hardest of questions. He uses these things to try to figure out how to live. His life is an open-ended work in progress, and it's never quite done, at least until he is. For to the English major, the questions of life are never closed. There's always another book to read; there's always another perspective to add. He might think that he knows what's what as to love and marriage and the raising of children. But he's never quite sure. He takes tips from the wise and the almost wise that he confronts in books and sometimes (if he's lucky) in life. He measures them and sifts them and brings them to the court of his own experience. (There is a creative reading as well as a creative writing, Emerson said.)
He's always ready to change his mind. Darwin on nature, or Wordsworth? Freud on love, or Percy Bysshe Shelley? Blake on sex, or Arthur Schopenhauer? Or perhaps none of the above. He doesn't give up his view easily, but it's nonetheless always up for debate and open for change. He's an unfinished guy, she's an unfinished woman. Which can be embarrassing and discomfiting from time to time, when he's with the knowing ones, the certain ones: those who are, often in all too many ways, finished.
Love for language, hunger for life, openness and a quest for truth: Those are the qualities of my English major in the ideal form. But of course now we're talking about more than a mere academic major. We're talking about a way of life. We're talking about a way of living that places inquiry into how to live in the world—what to be, how to act, how to move through time—at its center.
What we're talking about is a path to becoming a human being, or at least a better sort of human being than one was at the start. An English major? To me an English major is someone who has decided, against all kinds of pious, prudent advice and all kinds of fears and resistances, to major, quite simply, in becoming a person. Once you've passed that particular course of study—or at least made some significant progress on your way—then maybe you're ready to take up something else.
100 Days
Kennedy, and What Might Have Been: ‘JFK’s Last Hundred Days,’ by Thurston Clarke
By MICHIKO KAKUTANI
Published: August 12, 2013
As the 50th anniversary this November of the assassination of John F. Kennedy looms on the horizon, the debates over his legacy and presidency continue: a procession of “what ifs” and “might have beens,” accompanied by contradictory arguments, and informed and not-so-informed speculation. Would Kennedy have avoided Lyndon B. Johnson’s tragic escalation of the war in Vietnam? Would he have found a way to propel his stalled tax-cut bill and civil rights legislation through Congress and start a war on poverty, or was Johnson able to achieve these historic goals only through a combination of his bare-knuckled, tactical knowledge of Congress; his personal relationships on Capitol Hill; and his ability to use the momentum of sentiment generated by Kennedy’s death?
JFK’S LAST HUNDRED DAYS
The Transformation of a Man and the Emergence of a Great President
By Thurston Clarke
Illustrated. 432 pages. The Penguin Press. $29.95.
Photo: Thurston Clarke (Ellen Warner).
Photo: John F. Kennedy in Washington State during his Western conservation tour in 1963 (Cecil Stoughton/White House, via John F. Kennedy Presidential Library and Museum, Boston).
Several schools of argument have arisen. The former Kennedy speechwriter Theodore C. Sorensen and the aide Arthur M. Schlesinger Jr. focused on Kennedy’s record and the promise of his vision, creating a sort of bildungsroman portrait of the president, as learning and growing on the job.
Debunkers like Garry Wills and Seymour M. Hersh, by contrast, focused on the dark side of Camelot, suggesting that what they saw as Kennedy’s moral shortcomings and recklessness endangered the nation. More judicious and substantive accounts have been provided by Richard Reeves (“President Kennedy: Profile of Power”) and Robert Dallek (“An Unfinished Life: John F. Kennedy, 1917-1963”).
Thurston Clarke’s patchy and often reductive new book, “JFK’s Last Hundred Days,” draws heavily on the Dallek and Reeves books, while attempting to advance variations on arguments made by Sorensen and Schlesinger. Mr. Clarke contends that during that crucial period Kennedy was “finally beginning to realize his potential as a man and a president”; just as “ambition and realpolitik had characterized his congressional career and early White House years, morality and emotion tempered his ambitions during his last hundred days.”
Mr. Clarke also argues that during those days, Kennedy began to show his wife, Jacqueline, “the marriage they might have had,” arguing that the death of their premature infant, Patrick, in August 1963 had brought them closer together, and that he seemed to have curtailed his womanizing.
In Mr. Clarke’s view, two speeches the president gave in June 1963 — one proposing negotiations with Moscow to draft a nuclear test ban treaty, the other declaring that “race has no place in American life or law” — represented a turning point in his life, when he went from sailing with the winds of political expediency to embracing principle, as he described some of his heroes doing in “Profiles in Courage.”
Mr. Clarke made a similar argument about Robert F. Kennedy in his powerful 2008 book, “The Last Campaign,” writing that Robert appeared to begin that campaign as a homage to his brother but came into his own, speaking with an inspirational intensity and rawness rarely seen in politics about poverty, racial injustice and the country’s unhealed wounds. Others, too, have observed that the quick-tempered, hard-boiled Bobby — who’d worked for Senator Joseph McCarthy’s notorious Senate Permanent Subcommittee on Investigations in the 1950s and who’d been a tough enforcer on Jack’s 1960 campaign — became a more introspective, empathetic man after his brother’s assassination; grief and a passion for fighting for the poor had changed him.
This new book, though, lacks the visceral immediacy of “The Last Campaign,” and Mr. Clarke is less persuasive making a case for Jack Kennedy’s transformation in the last months of his life.
The idea of transformation is deeply appealing: we live in a culture that prizes reinvention and second acts. With John F. Kennedy, however, it’s difficult to make a case for dramatic change or to suggest that in June 1963 “he finally began to be more Irish than Harvard, governing from the heart as well as the head.”
It’s difficult partly because, as Mr. Clarke points out, Kennedy was “one of the most complicated and enigmatic men ever to occupy the White House”: a man who compartmentalized different aspects of his life and who frequently said and did contradictory things. His most essential quality, the literary critic Alfred Kazin is quoted as saying, was “that of the man who is always making and remaking himself.”
Kennedy’s opinions, too, could appear to mutate swiftly, and could be read in numerous ways. Much of the debate over what Kennedy would have eventually done about Vietnam — find a way to extricate the United States or listen to the same hard-liners who would help persuade Johnson to escalate American involvement — stems from wildly divergent remarks he made on the subject, remarks subject to a variety of interpretations.
Mr. Clarke says that Kennedy delivered a response to the CBS anchor Walter Cronkite in early September 1963 that was calculated to “prepare Americans for the possibility that the war might be unwinnable.” In the final analysis, Kennedy said of the South Vietnamese government: “It is their war. They are the ones who have to win or lose it.”
A week later, during an interview with David Brinkley and Chet Huntley of NBC News, Kennedy declared that he believed in the domino theory (which held that if South Vietnam fell, the rest of Southeast Asia would go Communist, too), then concluded: “I think we should stay. We should use our influence in as effective a way as we can, but we should not withdraw.” Mr. Clarke contends that this statement “bore no more resemblance to his real intentions than Roosevelt’s pledge not to involve America in the Second World War did to his,” adding, “Kennedy wanted to placate hawks in the Pentagon and Congress, just as Roosevelt had wanted to placate the isolationists.”
In his 2003 biography, Mr. Dallek wrote that Kennedy’s actions and statements “are suggestive of a carefully managed stand-down from the sort of involvement that occurred under L.B.J.,” but also noted that “no one can prove, of course, what Kennedy would have” actually done.
Mr. Clarke gives us an often vivid portrait of Kennedy as an immensely complex human being: by turns detached and charismatic, a hard-nosed pol and a closet romantic, cautious in his decision making but reckless in his womanizing. His book, however, lacks the granular detail and sober, appraising eye of Mr. Dallek’s volume. Too often, Mr. Clarke seems to be cherry-picking details and anecdotes that support his overarching thesis — that Kennedy began to hit his stride in his last 100 days, starting to emerge as “a great president” — rather than carefully assessing the historical record.
Mr. Clarke focuses, speculatively, on what Kennedy planned to do, rather than on what he achieved, writing that, among other things, the president “intended to travel to Moscow for a summit meeting with Khrushchev; launch a secret dialogue with Castro; explore the possibility of establishing a relationship with China; withdraw a thousand advisers from Vietnam by the end of 1963 and remove more during 1964; settle the cold war; end the threat of a nuclear war; launch an attack on poverty; pass his tax cut, civil rights and immigration bills; preside over the most robust, full-employment economy in American history; and continue marrying poetry to power and inspiring the young.”
Mr. Clarke plays down, even dismisses, Johnson’s extraordinary legislative mojo in getting Kennedy’s stalled initiatives passed, making the debatable assertion that Kennedy “would have succeeded in getting a civil rights bill through Congress, but perhaps not until after the election” of 1964. He also writes that the Great Society, Johnson’s domestic legislative program, “was largely a compendium of Kennedy’s bills and initiatives.”
Such efforts by Mr. Clarke to inflate Kennedy’s achievements distract from his actual accomplishments and influence, and they also make this intermittently interesting volume feel like a sentimental work of hagiography.