Sunday, April 29, 2012
The Civil War: The Southern Defense of Slavery
--------------------------------------------------------------------------------
April 27, 2012, 12:30 pm
The South, the War and ‘Christian Slavery’
By THOM BASSETT
In the minds of many Southerners, the capture of New Orleans on April 25, 1862, by Union forces was more than simply a troubling military loss. It also raised the disturbing possibility that divine punishment was being inflicted on a spiritually wayward and sinful Confederacy.
The loss of the South’s most important port and largest city had followed on the heels of the loss of Tennessee’s Fort Henry and Fort Donelson in February and the ignominious retreat from Shiloh in early April. These setbacks, after the virtually uninterrupted Southern successes of 1861, caused many across the Confederacy to wonder, in the words of the South Carolina diarist Pauline DeCaradeuc Heyward, if “these reversals and terrible humiliations … come from Him to humble our hearts and remind us of our total helplessness without His aid.”
Such thinking was in fact typical of mid-19th-century America. With varying degrees of sophistication and conviction, Americans believed that the fates of individuals and nations unfolded in accordance with an unshakeable divine plan; all events, large and small, reflected God’s will and were expressions of his favor, testing or judgment. Of course, some, like the Confederate Edward Porter Alexander, would say of the conflict that “Providence did not care a pin about it.” But most Northerners and Southerners struggled throughout the Civil War to discern the purposes and intentions of their God.
While countless Union soldiers and northern civilians depended on theological narratives to sustain them, a providential view of history particularly influenced how Southerners reacted to and interpreted the events of the war. After all, the preamble to the Confederate constitution, unlike the federal one it replaced, explicitly invoked “the favor and guidance of Almighty God.” They were, Southerners believed, a people chosen by God to manifest His will on earth. “We are working out a great thought of God,” declared the South Carolina Episcopal theologian James Warley Miles, “namely the higher development of Humanity in its capacity for Constitutional Liberty.”
Miles held, though, that divine mandate extended beyond simply the Confederate interpretation of states’ rights, and that Southerners were bound by the Bible to seek more than merely “a selfish independence.” The Confederacy must “exhibit to the world that supremest effort of humanity” in creating and defending a society built upon obedience to biblical prescriptions regarding slavery, a society “sanctified by the divine spirit of Christianity.” In short, as the Episcopal Church in Virginia stated soon after the war began, Southerners were fighting “a Revolution, ecclesiastical as well as civil.” This would be a revolution that aimed to establish nothing less than, in the words of one Georgia woman, “the final and universal spread of Gospel civilization.”
This “Gospel civilization,” many believed, didn’t just permit slavery — it required it. Christians across the Confederacy were convinced that they were called not only to perpetuate slavery but also to “perfect” it. And they understood the Bible to provide clear moral guidelines on how to properly practice it. The Old Testament patriarchs owned slaves, Jewish law clearly assumed its permissibility and the Apostle Paul’s New Testament letters repeatedly compelled slaves to be obedient and loyal to their masters. Above all, as Southerners never tired of pointing out to their abolitionist foes, the Gospels fail to record any condemnation of the practice by Jesus Christ.
There is consequently a fascinating, if unsettling, paradox in the efforts of slaveholders to fulfill what they considered divinely imposed duties toward their slaves. Southern Christians believed that the Bible imposed on masters a host of obligations to their slaves. Most fundamentally, masters were to view slaves as full members of their own households and as fellow brothers and sisters in the Lord. Therefore, as the South Carolina Methodist Conference declared before the war, masters sinned against their slaves by “excessive labor, extreme punishment, withholding necessary food and clothing, neglect in sickness or old age, and the like.”
Moreover, masters were not to let economic considerations govern treatment of their slaves. Religious leaders implored slaveholders to acknowledge that marriage and the family were divinely ordained and that, as a result, they must not separate husbands from wives or parents from children, even when it was financially advantageous to do so. (Almost no legislative action, however, resulted from these pleas; only the force of conscience would determine whether these biblical prescriptions were honored.) Many Southern Protestants advocated the repeal of laws banning slave literacy, so that slaves could read the Bible as a means to securing their eternal salvation. Before the war these practices were primarily justified as biblical mandates. Not coincidentally, though, they were also held out as a means to secure happier, more productive slaves, and to defang religious objections to the practice of slavery itself.
But beginning in the spring of 1862 and continuing even past the end of the war, the theological significance of “Christian slavery” changed. Southern pastors and theologians combined a providential view of history with their understanding of what was biblically required of slaveholders to conclude that widespread failure to engage in “Christian slavery” was a main cause of divine favor’s being withdrawn from God’s own chosen people. In other words, they blamed the fall of New Orleans on the excesses of slave owners — though never on slavery itself.
To be sure, these were not the only sins thought to bring down retribution on the Confederacy. “Blasphemy, Sabbath-breaking, selfishness,” the Lutheran Church of South Carolina warned, also brought God’s punishment in the form of battlefield setbacks, along with “avarice, hardness of heart, unbelief, and many other evils.” This was of course a well-thumbed catalog of sins preachers in both the North and the South had railed against for decades. But as the fighting wore on and Confederate fortunes darkened, Southerners grew increasingly afraid it was their treatment of slaves that caused God to turn against them.
For example, even while he was convinced God ultimately would vindicate the Confederacy, the influential Baptist minister Isaac Taylor Tichenor spoke for many Southerners when he addressed the Alabama General Assembly in 1863. “We have failed to discharge our duties to our slaves,” he charged. “Marriage is a divine institution, and yet marriage exists among our slaves dependent upon the will of the master,” leaving God’s command perversely “subject to the passion, avarice, or caprice of their owners.”
Similarly, the theologian and college president John Leadley Dagg saw Confederate setbacks as “fatherly chastisements, designed for our profit.” Nevertheless, he was insistent during the war that the failure to protect slave marriages “is only part of the general evil. We have not labored, in every possible way, to promote the welfare, for time and eternity of our slave population, as of dependent and helpless immortals whom God has placed in our power and in our mercy.” In September 1862 Bishop Stephen Elliott warned Southerners that the “great revolution through which we are passing certainly turns upon the point of slavery, and our future destiny is bound up with it. As we deal with it, so shall we prosper or suffer.”
A year after the war ended Dagg would insist that the Confederacy’s defeat was due to white Southerners’ failure to take proper care of their black charges. The war was “a scourge of God,” according to Bishop John McGill of Richmond, Va., inflicted on the Confederacy for its failure to respect slave marriages and protect slave families.
Not all Southerners agreed. Some, like the South Carolinian Louis Blanding, simply dismissed all efforts to explain God’s purposes in allowing the destruction of the South as “vague scholastic playthings, fit for the keen edge of discussion and of no earthly account.” Others, like the Tennessee diarist Eliza Fain, were mystified as to why God would permit the end of slavery when the Bible so clearly justifies it; still, she concluded, “we cannot know now, but he does and this is all we deserve to know.”
Many Southerners, though, came to embrace the interpretation of their history suggested by Elliott and made explicit by the Reverend J.C. Mitchell. “Read the annals of other nations,” the Alabaman admonished, “and see what destroyed them. It was not foreign force, but internal evil.” After the war, then, for countless chastened white Southern Christians, the evil that provoked the Lord to destroy their nation was the myriad wrongs committed against the slaves they had kept. Vanishingly few asked whether their true sin might be claiming to own those whom the Bible called their brothers and sisters in Christ.
Saturday, April 28, 2012
Cosmology and Philosophy
Lawrence M. Krauss, Director of the Origins Project at Arizona State University; Author, 'A Universe From Nothing'
The Consolation of Philosophy
Posted: 04/27/2012 5:49 pm
As a result of my most recent book, A Universe from Nothing, I participated in a wide-ranging and in-depth interview for The Atlantic on questions ranging from the nature of nothing to the best way to encourage people to learn about the fascinating new results in cosmology. The interview was based on the transcript of a recorded conversation and was hard-hitting (and, from my point of view, the interviewer was impressive in his depth). But my friend Dan Dennett recently wrote to me to say that it has been interpreted by a number of his colleagues and readers (probably because it included some verbal off-the-cuff remarks, rather than carefully crafted written responses) as implying a blanket condemnation of philosophy as a discipline, something I had not intended.
Out of respect for Dan and those whom I may have unjustly offended, and because the relationship between physics and philosophy seems to be an area which has drawn some attention of late, I thought I would take the opportunity to write down, as coherently as possible, my own views on several of these issues, as a physicist and cosmologist. As I should also make clear (and as numerous individuals have not hesitated to comment upon already), I am not a philosopher, nor do I claim to be an expert on philosophy. Because of a lifetime of activity in the field of theoretical physics, ranging from particle physics to general relativity to astrophysics, I do claim however to have some expertise in the impact of philosophy on my own field. In any case, the level of my knowledge, and ignorance, will undoubtedly become clearer in what follows.
As both a general reader and as someone who is interested in ideas and culture, I have great respect for and have learned a great deal from a number of individuals who currently classify themselves as philosophers. Of course as a young person I read the classical philosophers, ranging from Plato to Descartes, but as an adult I have gained insights into the implications of brain functioning and developments in evolutionary psychology for understanding human behavior from colleagues such as Dan Dennett and Pat Churchland. I have been forced to re-examine my own attitudes towards various ethical issues, from the treatment of animals to euthanasia, by the cogent and thoughtful writing of Peter Singer. And reading the work of my friend A.C. Grayling has immeasurably heightened my understanding and appreciation of the human experience.
What I find common and so stimulating about the philosophical efforts of these intellectual colleagues is the way they thoughtfully reflect on human knowledge, amassed from empirical explorations in areas ranging from science to history, to clarify issues that are relevant to making decisions about how to function more effectively and happily as an individual, and as a member of a society.
For a practicing physicist, however, the situation is somewhat different. There, I and most of the colleagues with whom I have discussed this matter have found that philosophical speculations about physics and the nature of science are not particularly useful, and have had little or no impact upon progress in my field. Even in several areas associated with what one can rightfully call the philosophy of science, I have found the reflections of physicists to be more useful. For example, on the nature of science and the scientific method, I have found the insights offered by scientists who have chosen to write concretely about their experience and reflections, from Jacob Bronowski, to Richard Feynman, to Francis Crick, to Werner Heisenberg, Albert Einstein, and Sir James Jeans, to have provided me with a better practical guide than the work of even the most significant philosophical writers of whom I am aware, such as Karl Popper and Thomas Kuhn. I admit that this could primarily reflect my own philosophical limitations, but I suspect this experience is more common than not among my scientific colleagues.
The one area of physics that has probably sparked the most 'philosophical' interest in recent times is the 'measurement' problem in quantum mechanics. How one moves from the remarkable and completely non-intuitive microscopic world where quantum mechanical indeterminacy reigns supreme and particles are doing many apparently inconsistent things at the same time, and are not localized in space or time, to the ordered classical world of our experience where baseballs and cannonballs have well-defined trajectories, is extremely subtle and complicated and the issues involved have probably not been resolved to the satisfaction of all practitioners in the field. And when one tries to apply the rules of quantum mechanics to an entire universe, in which a separation between observer and observed is not possible, the situation becomes even murkier.
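To make the puzzle concrete, here is the standard textbook statement of it (my illustration, not Krauss's): a two-state quantum system can occupy a superposition, yet any single measurement yields one definite outcome, with probabilities given by the Born rule:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1,
\]
\[
P(0) = |\alpha|^{2}, \qquad P(1) = |\beta|^{2}.
\]

The measurement problem is the question of how, when, and why the superposition on the left gives way to a single outcome governed by the probabilities on the right; the question is especially vexed when the "system" is the entire universe and there is no outside observer to do the measuring.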
However, even here, the most useful progress has been made, again in my experience, by physicists. The work of individuals such as Jim Hartle, Murray Gell-Mann, Yakir Aharonov, Asher Peres, John Bell and others like them, who have done careful calculations associated with quantum measurement, has led to great progress in our appreciation of the subtle and confusing issues of translating an underlying quantum reality into the classical world we observe. There have been people whom one can classify as philosophers who have contributed usefully to this discussion, such as Abner Shimony, but when they have, they have been essentially doing physics, and have published in physics journals (Shimony's work as a physicist is the work I am aware of). As far as the physical universe is concerned, mathematics and experiment, the tools of theoretical and experimental physics, appear to be the only effective ways to address questions of principle.
Which brings me full circle to the question of nothing, and my own comments regarding the progress of philosophy in that regard. When it comes to the real operational issues that govern our understanding of physical reality, ontological definitions of classical philosophers are, in my opinion, sterile. Moreover, arguments based on authority, be it Aristotle, or Leibniz, are irrelevant. In science, there are no authorities, and appeal to quotes from brilliant scholars who lived before we knew the Earth orbited the Sun, or that space can be curved, or that dark matter or dark energy exist do not generally inform our current understanding of nature. Empirical explorations ultimately change our understanding of which questions are important and fruitful and which are not.
For a scientist, the fascination normally associated with the classically phrased question "why is there something rather than nothing?" is really contained in a specific operational question. That question can be phrased as follows: How can a universe full of galaxies and stars, and planets and people, including philosophers, arise naturally from an initial condition in which none of these objects -- no particles, no space, and perhaps no time -- may have existed? Put more succinctly perhaps: Why is there 'stuff', instead of empty space? Why is there space at all? There may be other ontological questions one can imagine, but I think these are the 'miracles' of creation that are so non-intuitive and remarkable, and they are also the 'miracles' about which physics, spurred by amazing discoveries, has provided new insights and changed the playing field of our knowledge. That we can even have plausible answers to these questions is worth celebrating and sharing more broadly.
In this regard, there is a class of philosophers, some theologically inspired, who object to the very fact that scientists might presume to address any version of this fundamental ontological issue. Recently one review of my book by such a philosopher, which I think motivated the questions in the Atlantic interview, argued that one particular version of the nothing described by modern physics was not relevant. Even more surprisingly, this author claimed with apparent authority (surprising because the author apparently has some background in physics) something that is simply wrong: that the laws of physics can never dynamically determine which particles and fields exist, whether space itself exists, or more generally what the nature of existence might be. But that is precisely what is possible in the context of modern quantum field theory in curved spacetime, where a phenomenon called 'spontaneous symmetry breaking' can determine dynamically which forces manifest themselves on large scales, which particles exist as stable states, and whether space itself can grow exponentially or not. Within the context of quantum gravity the same is presumably true for which sorts of universes can appear and persist. Within the context of string theory, a similar phenomenon might ultimately determine (indeed, if the theory is ever to become predictive, it must determine) why universes might spontaneously arise with 4 large spacetime dimensions and not 5 or 6. One cannot tell from the review whether the author actually read the book (since no mention of the relevant cosmology is made) or simply misunderstood it.
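For readers who want to see what "spontaneous symmetry breaking" means in the simplest case, here is the standard textbook example (my sketch, not drawn from the book or the review): a scalar field whose potential is symmetric under \(\phi \to -\phi\), but whose stable states are not:

\[
V(\phi) = -\tfrac{1}{2}\,\mu^{2}\phi^{2} + \tfrac{1}{4}\,\lambda\,\phi^{4}, \qquad \mu^{2},\ \lambda > 0.
\]

Setting \(V'(\phi) = -\mu^{2}\phi + \lambda\phi^{3} = 0\) gives stable minima at

\[
\phi = \pm\,\frac{\mu}{\sqrt{\lambda}},
\]

so the field ends up in one of two asymmetric vacua even though the underlying law is symmetric. This is the sense in which dynamics, rather than an a priori stipulation, determines which stable configurations, and hence which particles and forces, actually appear.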
Theologians and both Christian and Muslim apologists have unfortunately since picked up on the ill-conceived claims of that review to argue that physics can therefore never really address the most profound 'theological' questions regarding our existence. (To be fair, I regret sometimes lumping all philosophers in with theologians, because theology, aside from those parts that involve true historical or linguistic scholarship, is not a credible field of modern scholarship.) It may be true that we can never fully resolve the infinite regression of 'why questions' that results whenever one assumes, a priori, that our universe must have some pre-ordained purpose. Or, to frame things in a more theological fashion: 'Why is our Universe necessary rather than contingent?'
One answer to this latter question can come from physics. If all possibilities -- all universes with all laws -- can arise dynamically, and if anything that is not forbidden must arise, then nothing and something must both exist, and we will of necessity find ourselves amidst something. A universe like ours is, in this context, guaranteed to arise dynamically, and we are here because we could not ask the question if our universe weren't here. It is in this sense that I argued that the seemingly profound question of why there is something rather than nothing might actually be no more profound than asking why some flowers are red or some are blue. I was surprised that this very claim was turned around by the reviewer as if it somehow invalidated this possible physical resolution of the something-versus-nothing conundrum.
Instead, sticking firmly to the classical ontological definition of nothing as "the absence of anything" -- whatever this means -- so essential to theological, and some subset of philosophical, intransigence, strikes me as essentially sterile, backward, useless and annoying. If "something" is a physical quantity, to be determined by experiment, then so is 'nothing'. It may be that even an eternal multiverse in which all universes and laws of nature arise dynamically will still leave open some 'why' questions, and therefore never fully satisfy theologians and some philosophers. But focusing on that issue and ignoring the remarkable progress we can make toward answering perhaps the most miraculous aspect of the something-from-nothing question -- understanding why there is 'stuff' and not empty space, why there is space at all, and how both stuff and space and even the forces we measure could arise from no stuff and no space -- is, in my opinion, impotent and useless. It was in that sense -- comparing the classical ontological claim about the nature of some abstract nothing with the physical insights about this subject that have since developed -- that I made the provocative, and perhaps inappropriately broad, statement that this sort of philosophical speculation has not led to any progress over the centuries.
What I tried to do in my writing on this subject is to define carefully and precisely what scientists operationally mean by nothing, and to differentiate among what we know, what is merely plausible, what we might be able to probe in the future, and what we cannot. The rest is, to me, just noise.
So, to those philosophers I may have unjustly offended by seemingly blanket statements about the field, I apologize. I value your intelligent conversation and the insights of anyone who thinks carefully about our universe and who is willing to guide their thinking based on the evidence of reality. To those who wish to impose their definition of reality abstractly, independent of emerging empirical knowledge and the changing questions that go with it, and call that either philosophy or theology, I would say this: Please go on talking to each other, and let the rest of us get on with the goal of learning more about nature.
A Review of Ron Rash's "The Cove"
Fiction
Saturday, Apr 28, 2012 12:00 PM UTC
“The Cove”: A mysterious skull
A new novel begins with a shocking discovery that takes us back to love and life in the South during World War I
By Katherine A. Powers, Barnes & Noble Review
This article appears courtesy of The Barnes & Noble Review.
Ron Rash’s atmospheric, strangely uncomplicated novel, “The Cove,” begins with a scene of melancholy and abandonment, the promise of obliteration, and a shocking discovery. It is 1953 and a man called Parton, a scout for the Tennessee Valley Authority, is investigating a remote parcel of land in North Carolina’s Appalachia for inhabitants who will have to be evicted in advance of the valley’s inundation. In a small notch — from which the book takes its title — over which looms a light-exterminating, anvil-shaped cliff, he finds a deserted farm. Pasture fenced by sagging barbed wire, a collapsed barn, a cabin and two wells are the desolate relicts of past life and labor. The general doominess of the setting is further enhanced by an ash tree decked in charms against evil forces, dead American chestnut trees (victims of the plague that wiped them out across the land), and the memory of the now extinct Carolina parakeet. Parton, thirsty, manages to winch up a bucket of water from one of the wells — and with it a human skull.
I give little away in revealing this, as it occurs on page 4; it takes another 243 pages and a step back to the late summer and autumn of 1918 to discover the skull’s owner. It is then, during the last months of World War I, that the story takes place. At its heart is Laurel, a young woman afflicted with a large birthmark. She is shunned by the residents of the nearest town, Mars Hill, who believe that the cove is cursed and that she herself is a witch. Both her parents are dead, and with occasional help from a neighbor, she survived the previous summer alone on the farm while her brother, Hank, was away fighting in France. He has returned, absent a hand but resolutely capable and preparing for marriage.
In passage after passage, Rash describes life and work on the farm in its dailiness — the preparation of meals, tending to chores, mending clothes, setting fence poles, pulling wire — creating a sense of order and industry that would seem to promise future happiness and prosperity. But as the initial scene of desolation and death promises the reverse, an air of menace and foreboding pervades the story. And, indeed, like the waters that will inundate the farm decades later, powerful, destructive forces are gathering outside the cove.
On one of her forays to do her laundry in a stream away from the farm, Laurel hears and secretly observes a young man resting in a makeshift camp, playing a flute; days later she finds him near death, stung by a swarm of wasps. She brings him home; he recovers and produces a piece of paper saying that his name is Walter and that he cannot speak or read or write. As we — unlike Laurel or Hank — have already learned that a man has escaped from what turns out to be an internment camp for Germans, we get the picture. Walter won’t speak, but he will help with the farm, and this he does handily, capturing Hank’s admiration and gratitude — and Laurel’s heart.
All the while, anti-German hysteria is escalating in Mars Hill, a volatile temper encouraged by one Sgt. Chauncey Feith, a preposterous character ripped from a handbook of one-dimensional villains. Vainglorious, opportunistic and cowardly, he is a jingo, a sneak and a bully. The son of a politically connected banker, he has been deployed as the town’s recruitment officer, thus avoiding the perils of the battlefield. He has gone about this zealously, congratulating himself at every turn for sending young men off to the war and priding himself on being an “unsung hero, because you couldn’t go around telling people that any man can hold a rifle and stand in a trench but only a select few could do what a general or commodore or recruiter did.” That’s Chauncey Feith for you — believe it or not.
If Walter were to show up at Mars Hill and be recognized, there is no question that he would be strung up as a Hun. Meanwhile life and love go on at the farm. Walter helps Hank in sinking a second well, and the description of digging and lining it deep, deep in the earth is wonderfully potent. Indeed, Rash’s material detail, depiction of work and evocation of place — of nature, woods and stream, the play of light and the oppressive dark of the monstrous cliff — are truly splendid. Still, between the threat of a lynching and scenes from the cove, a vacuum yawns, and into it flows one simple question stripped of complexity: Whose skull? Or, put another way, happy ending or sad? The answer, when it comes, seems perfectly arbitrary.
Sunday, April 22, 2012
What Money Can't Buy
FROM Michael Sandel
What Money Can't Buy: The Skyboxification of American Life
Posted: 04/20/2012 8:42 pm
We live at a time when almost everything can be bought and sold. Over the past three decades, markets -- and market values -- have come to govern our lives as never before. We did not arrive at this condition through any deliberate choice. It is almost as if it came upon us.
As the cold war ended, markets and market thinking enjoyed unrivaled prestige, understandably so. No other mechanism for organizing the production and distribution of goods had proven as successful at generating affluence and prosperity. And yet, even as growing numbers of countries around the world embraced market mechanisms in the operation of their economies, something else was happening. Market values were coming to play a greater and greater role in social life. Today, the logic of buying and selling no longer applies to material goods alone but increasingly governs the whole of life. We have drifted from having a market economy to being a market society.
And while economists often assume that markets are inert, that they do not affect the goods they exchange, this is untrue. Markets leave their mark. Sometimes, market values crowd out nonmarket norms.
Of course, people disagree about the norms appropriate to many of the domains that markets have invaded -- family life, friendship, sex, procreation, health, education, nature, art, citizenship, sports, and the way we contend with the prospect of death. But that's the point: once we see that markets and commerce change the character of the goods they touch, we have to ask where markets belong -- and where they don't. And we can't answer this question without deliberating about the meaning and purpose of goods, and the values that should govern them.
Such deliberations touch, unavoidably, on competing conceptions of the good life. This is terrain on which we sometimes fear to tread. For fear of disagreement, we hesitate to bring our moral and spiritual convictions into the public square. But shrinking from these questions does not leave them undecided. It simply means that markets will decide them for us. This is the lesson of the last three decades. The era of market triumphalism has coincided with a time when public discourse has been largely empty of moral and spiritual substance. Our only hope of keeping markets in their place is to deliberate openly and publicly about the meaning of the goods and social practices we prize.
In addition to debating the meaning of this or that good, we also need to ask a bigger question, about the kind of society in which we wish to live. As naming rights and municipal marketing appropriate the common world, they diminish its public character. Beyond the damage it does to particular goods, commercialism erodes commonality. The more things money can buy, the fewer the occasions when people from different walks of life encounter one another. We see this when we go to a baseball game and gaze up at the skyboxes, or down from them, as the case may be. The disappearance of the class-mixing experiment once found at the ballpark represents a loss not only for those looking up but also for those looking down.
Something similar has been happening throughout our society. At a time of rising inequality, the marketization of everything means that people of affluence and people of modest means lead increasingly separate lives. We live and work and shop and play in different places. Our children go to different schools. You might call it the skyboxification of American life. It's not good for democracy, nor is it a satisfying way to live.
Democracy does not require perfect equality, but it does require that citizens share in a common life. What matters is that people of different backgrounds and social positions encounter one another, and bump up against one another, in the course of everyday life. For this is how we learn to negotiate and abide our differences, and how we come to care for the common good.
And so, the question of markets is really a question about how we want to live together. Do we want a society where everything is up for sale? Or are there certain moral and civic goods that markets do not honor and money cannot buy?
Saturday, April 21, 2012
New Books
Yesterday I bought the new Ron Rash novel and found sportswriter Jim Murray's autobiography. I look forward to reading them both.
Friday, April 20, 2012
The Henry Aaron Biography
Howard Bryant – The Last Hero: A Life of Henry Aaron
The book starts out with Wilcox County, Alabama, one of the country’s biggest slave trade centers before the Civil War. Herbert Aaron, Henry’s father, left Wilcox County for Mobile to give himself a better shot at life. There was no reason to stay in Wilcox County. The author talks of the original Henry Aaron, our protagonist’s paternal grandfather. Little is said of his mother’s origins.
Henry’s mother Stella wanted him to go to college, but her son was not academically oriented. As a matter of fact, he did not finish his senior year of high school.
He wanted to play baseball. He had no Plan B for his life. A man named Ed Scott, who was connected to a Negro league team called the Indianapolis Clowns, talked Stella into letting Henry give professional baseball a try. Henry told his mother he could come back and finish high school if he didn’t make it in baseball. His father Herbert was on board with baseball. This was in 1951. P. 35
In the first quarter of the 20th century, more blacks were lynched in the state of Florida than in any other Deep South state. P. 36
Hard to believe that in the same general time period, growing up in Mobile were Henry Aaron, Willie McCovey, Billy Williams, and Tommy Agee, all great major league baseball players.
During his career, Aaron would be paired with and compared to Mays and Clemente (a fellow right fielder) and ultimately Babe Ruth, but his role model was always Jackie Robinson. P. 167
Robinson may have been his role model, but his idol was Musial. P. 179
In winning the 1957 World Series, the Braves were fighting the mystique of the New York Yankees. As that Series began, the nation was focused on events in Little Rock, Arkansas, and the integration of Central High School. The leader of the Braves was Eddie Mathews, at that time 200 homers ahead of Ruth at the comparable point in his career, and it was believed that Mathews had the best chance to catch The Bambino. Henry Aaron was only 23 years old.
Henry Aaron obviously had an internal rivalry with Willie Mays. His “fire glowed with the sight of Mays in the other dugout.” P. 245
Henry Aaron liked Westerns.
He encountered James Baldwin at the height of the civil rights movement in the 1960s. From Baldwin he realized that the time to wait for equal rights was over. Aaron was to the left of his black contemporaries. P. 273-74
The recently deceased Furman Bisher wrote a demeaning article on Aaron for The Saturday Evening Post, and as late as 2008 wrote that Aaron is a nice man but is “easily led.” Ouch! P. 279
Henry Aaron paid proper respect to Willie Mays, but The Say Hey Kid never returned the favor. The self-absorbed Mays was a great baseball player but is a jerk as a human being, unable to credit anyone but himself.
It seems that Henry Aaron has carried a lot of bitterness over a lot of things over the years, including his induction into the Hall of Fame in 1982. P. 461
After two years back in Milwaukee, Aaron returned to Atlanta and has had success in the business world. Good for him, but you get the impression that he still carries a lot of bitterness.
Sunday, April 15, 2012
The Consequences of Inequality
Sunday, Apr 15, 2012 7:58 AM CDT
Economy killers: Inequality and GOP ignorance
By Paul Krugman and Robin Wells
America emerged from the Great Depression and the Second World War with a much more equal distribution of income than it had in the 1920s; our society became middle-class in a way it hadn’t been before. This new, more equal society persisted for 30 years. But then we began pulling apart, with huge income gains for those with already high incomes. As the Congressional Budget Office has documented, the 1 percent — the group implicitly singled out in the slogan “We are the 99 percent” — saw its real income nearly quadruple between 1979 and 2007, dwarfing the very modest gains of ordinary Americans. Other evidence shows that within the 1 percent, the richest 0.1 percent and the richest 0.01 percent saw even larger gains.
By 2007, America was about as unequal as it had been on the eve of the Great Depression — and sure enough, just after hitting this milestone, we plunged into the worst slump since the Depression. This probably wasn’t a coincidence, although economists are still working on trying to understand the linkages between inequality and vulnerability to economic crisis.
Here, however, we want to focus on a different question: Why has the response to the crisis been so inadequate? Before the financial crisis struck, we think it’s fair to say that most economists imagined that even if such a crisis were to happen, there would be a quick and effective policy response. In 2003 Robert Lucas, the Nobel laureate and then-president of the American Economic Association, urged the profession to turn its attention away from recessions to issues of longer-term growth. Why? Because, he declared, the “central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.”
Yet when a real depression arrived — and what we are experiencing is indeed a depression, although not as bad as the Great Depression — policy failed to rise to the occasion. Yes, the banking system was bailed out. But job-creation efforts were grossly inadequate from the start — and far from responding to the predictable failure of the initial stimulus to produce a dramatic turnaround with further action, our political system turned its back on the unemployed. Between bitterly divisive politics that blocked just about every initiative from President Obama, and a bizarre shift of focus away from unemployment to budget deficits despite record-low borrowing costs, we have ended up repeating many of the mistakes that perpetuated the Great Depression.
Nor, by the way, were economists much help. Instead of offering a clear consensus, they produced a cacophony of views, with many conservative economists, in our view, allowing their political allegiance to dominate their professional competence. Distinguished economists made arguments against effective action that were evident nonsense to anyone who had taken Econ 101 and understood it. Among those behaving badly, by the way, was none other than Robert Lucas, the same economist who had declared just a few years before that the problem of preventing depressions was solved.
So how did we end up in this state? How did America become a nation that could not rise to the biggest economic challenge in three generations, a nation in which scorched-earth politics and politicized economics created policy paralysis?
We suggest it was the inequality that did it. Soaring inequality is at the root of our polarized politics, which made us unable to act together in the face of crisis. And because rising incomes at the top have also brought rising power to the wealthiest, our nation’s intellectual life has been warped, with too many economists co-opted into defending economic doctrines that were convenient for the wealthy despite being indefensible on logical and empirical grounds.
Let’s talk first about the link between inequality and polarization.
* * *
Our understanding of American political economy has been strongly influenced by the work of the political scientists Keith Poole, Howard Rosenthal and Nolan McCarty. Poole, Rosenthal and McCarty use congressional roll-call votes to produce a sort of “map” of political positions, in which both individual bills and individual politicians are assigned locations in an abstract issues space. The details are a bit complex, but the bottom line is that American politics is pretty much one-dimensional: Once you’ve determined where a politician lies on a left-right spectrum, you can predict his or her votes with a high degree of accuracy. You can also see how far apart the two parties’ members are on the left-right spectrum — that is, how polarized congressional politics is.
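As a rough illustration of how a one-dimensional map can be pulled out of roll-call data, here is a minimal Python sketch. It is not the actual DW-NOMINATE procedure (which fits a full spatial utility model); it simply simulates legislators on a left-right line, generates yea/nay votes from per-bill cutpoints, and recovers the positions from the leading singular vector of the centered vote matrix.

import numpy as np

rng = np.random.default_rng(0)
n_legislators, n_bills = 20, 50

# Toy data: true left-right positions and one cutpoint per bill.
ideology = rng.uniform(-1, 1, size=n_legislators)
cutpoints = rng.uniform(-1, 1, size=n_bills)

# A legislator votes yea (1) when she sits to the right of the bill's cutpoint.
votes = (ideology[:, None] > cutpoints[None, :]).astype(float)

# Center each bill's column and take the leading singular vector:
# that vector serves as an estimated one-dimensional score per legislator.
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, 0] * S[0]

# With cleanly one-dimensional data the recovered scores line up with the
# true positions (up to sign), which is the one-dimensionality claim above.
print(f"correlation with true positions: {abs(np.corrcoef(scores, ideology)[0, 1]):.2f}")

On toy data like this the correlation is nearly perfect; the empirical finding of Poole, Rosenthal and McCarty is that real congressional roll calls behave almost as one-dimensionally.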
It’s not surprising that the parties have moved ever further apart since the 1970s. There used to be substantial overlap: There were moderate and even liberal Republicans, like New York’s Jacob Javits, and there were conservative Democrats. Today the parties are totally disjointed, with the most conservative Democrat to the left of the most liberal Republican, and the two parties’ centers of gravity very far apart.
What’s more surprising is the fact that the relatively nonpolarized politics of the post-war generation is a relatively recent phenomenon — before the war, and especially before the Great Depression, politics was almost as polarized as it is now. And the track of polarization closely follows the track of income inequality, with the degree of polarization closely correlated over time with the share of total income going to the top 1 percent.
Why does higher inequality seem to produce greater political polarization? Crucially, the widening gap between the parties has reflected Republicans moving right, not Democrats moving left. This pops out of the Poole-Rosenthal-McCarty numbers, but it’s also obvious from the history of various policy proposals. The Obama health care plan, to take an obvious example, was originally a Republican plan, in fact a plan devised by the Heritage Foundation. Now the GOP denounces it as socialism.
The most likely explanation of the relationship between inequality and polarization is that the increased income and wealth of a small minority has, in effect, bought the allegiance of a major political party. Republicans are encouraged and empowered to take positions far to the right of where they were a generation ago, because the financial power of the beneficiaries of their positions both provides an electoral advantage in terms of campaign funding and provides a sort of safety net for individual politicians, who can count on being supported in various ways even if they lose an election.
Whatever the precise channels of influence, the result is a political environment in which Mitch McConnell, the leading Republican in the Senate, felt it was perfectly okay to declare before the 2010 midterm elections that his main goal, if the GOP won control, would be to incapacitate the president of the United States: “The single most important thing we want to achieve is for President Obama to be a one-term president.”
Needless to say, this is not an environment conducive to effective anti-depression policy, especially given the way Senate rules allow a cohesive minority to block much action. We know that the Obama administration expected to win strong bipartisan support for its stimulus plan, and that it also believed that it could go back for more if events proved this necessary. In fact, it took desperate maneuvering to get sixty votes even in the first round, and there was no question of getting more later.
In sum, extreme income inequality led to extreme political polarization, and this greatly hampered the policy response to the crisis. Even if we had entered the crisis in a state of intellectual clarity — with major political players at least grasping the nature of the crisis and the real policy options — the intensity of political conflict would have made it hard to mount an effective response.
In reality, of course, we did not enter the crisis in a state of clarity. To a remarkable extent, politicians — and, sad to say, many well-known economists — reacted to the crisis as if the Great Depression had never happened. Leading politicians gave speeches that could have come straight out of the mouth of Herbert Hoover; famous economists reinvented fallacies that one thought had been refuted in the mid-1930s. Why?
The answer, we would suggest, also runs back to inequality.
* * *
It’s clear that the financial crisis of 2008 was made possible in part by the systematic way in which financial regulation had been dismantled over the previous three decades. In retrospect, in fact, the era from the 1970s to 2008 was marked by a series of deregulation-induced crises, including the hugely expensive savings and loan crisis; it’s remarkable that the ideology of deregulation nonetheless went from strength to strength.
It seems likely that this persistence despite repeated disaster had a lot to do with rising inequality, with the causation running in both directions. On one side, the explosive growth of the financial sector was a major source of soaring incomes at the very top of the income distribution. On the other side, the fact that the very rich were the prime beneficiaries of deregulation meant that as this group gained power — simply because of its rising wealth — the push for deregulation intensified.
These impacts of inequality on ideology did not end in 2008. In an important sense, the rightward drift of ideas, both driven by and driving rising income concentration at the top, left us incapacitated in the face of crisis.
In 2008 we suddenly found ourselves living in a Keynesian world — that is, a world that very much had the features John Maynard Keynes focused on in his 1936 magnum opus, “The General Theory of Employment, Interest and Money.” By that we mean that we found ourselves in a world in which lack of sufficient demand had become the key economic problem, and in which narrow technocratic solutions, like cuts in the Federal Reserve’s interest rate target, were not adequate to that situation. To deal effectively with the crisis, we needed more activist government policies, in the form both of temporary spending to support employment and efforts to reduce the overhang of mortgage debt.
One might think that these solutions could still be considered technocratic, and separated from the broader question of income distribution. Keynes himself described his theory as “moderately conservative in its implications,” consistent with an economy run on the principles of private enterprise. From the beginning, however, political conservatives — and especially those most concerned with defending the position of the wealthy — have fiercely opposed Keynesian ideas.
And we mean fiercely. Although Paul Samuelson’s textbook “Economics: An Introductory Analysis” is widely credited with bringing Keynesian economics to American colleges in the 1940s, it was actually the second entry; a previous book, by the Canadian economist Lorie Tarshis, was effectively blackballed by right-wing opposition, including an organized campaign that successfully induced many universities to drop it. Later, in his “God and Man at Yale,” William F. Buckley Jr. would direct much of his ire at the university for allowing the teaching of Keynesian economics.
The tradition continues through the years. In 2005 the right-wing magazine Human Events listed Keynes’s “General Theory” among the 10 most harmful books of the 19th and 20th centuries, right up there with “Mein Kampf” and “Das Kapital.”
Why such animus against a book with a “moderately conservative” message? Part of the answer seems to be that even though the government intervention called for by Keynesian economics is modest and targeted, conservatives have always seen it as the thin edge of the wedge: concede that the government can play a useful role in fighting slumps, and the next thing you know we’ll be living under socialism. The rhetorical amalgamation of Keynesianism with central planning and radical redistribution — although explicitly denied by Keynes himself, who declared that “there are valuable human activities which require the motive of money-making and the environment of private wealth-ownership for their full fruition” — is almost universal on the right.
There is also the motive suggested by Keynes’s contemporary Michał Kalecki in a classic 1943 essay:
We shall deal first with the reluctance of the “captains of industry” to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of “sound finance” is to make the level of employment dependent on the state of confidence.
This sounded a bit extreme to us the first time we read it, but it now seems all too plausible. These days you can see the “confidence” argument being deployed all the time. For example, here is how Mort Zuckerman began a 2010 op-ed in the Financial Times, aimed at dissuading President Obama from taking any kind of populist line:
The growing tension between the Obama administration and business is a cause for national concern. The president has lost the confidence of employers, whose worries over taxes and the increased costs of new regulation are holding back investment and growth. The government must appreciate that confidence is an imperative if business is to invest, take risks and put the millions of unemployed back to productive work.
There was and is, in fact, no evidence that “worries over taxes and the increased costs of new regulation” are playing any significant role in holding the economy back. Kalecki’s point, however, was that arguments like this would fall completely flat if there was widespread public acceptance of the notion that Keynesian policies could create jobs. So there is a special animus against direct government job-creation policies, above and beyond the generalized fear that Keynesian ideas might legitimize government intervention in general.
Put these motives together, and you can see why writers and institutions with close ties to the upper tail of the income distribution have been consistently hostile to Keynesian ideas. That has not changed over the 75 years since Keynes wrote the “General Theory.” What has changed, however, is the wealth and hence influence of that upper tail. These days, conservatives have moved far to the right even of Milton Friedman, who at least conceded that monetary policy could be an effective tool for stabilizing the economy. Views that were on the political fringe 40 years ago are now part of the received doctrine of one of our two major political parties.
A touchier subject is the extent to which the vested interest of the 1 percent, or better yet the 0.1 percent, has colored the discussion among academic economists. But surely that influence must have been there: if nothing else, the preferences of university donors, the availability of fellowships and lucrative consulting contracts, and so on must have encouraged the profession not just to turn away from Keynesian ideas but to forget much that had been learned in the 1930s and ’40s.
In the debate over responses to the Great Recession and its aftermath, it has been shocking to see so many highly credentialed economists making not just elementary conceptual errors but old elementary conceptual errors — the same errors Keynes took on three generations ago. For example, one thought that nobody in the modern economics profession would repeat the mistakes of the infamous “Treasury view,” under which any increase in government spending necessarily crowds out an equal amount of private spending, no matter what the economic conditions might be. Yet in 2009, exactly that fallacy was expounded by distinguished professors at the University of Chicago.
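The fallacy is easy to state in one identity (a compressed sketch of the logic, not the authors' own formulation). The Treasury view implicitly treats total output \(Y\) as fixed, so in the national-income identity

\[
Y = C + I + G
\]

any increase in government spending \(G\) must be offset one-for-one by a fall in private spending \(C + I\). The Keynesian point is that with idle resources \(Y\) itself can rise, so that

\[
\Delta Y = \Delta C + \Delta I + \Delta G > 0,
\]

and an increase in \(G\) need not crowd out an equal amount of private spending.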
Again, our point is that the dramatic rise in the incomes of the very affluent left us ill prepared to deal with the current crisis. We arrived at a Keynesian crisis demanding a Keynesian solution — but Keynesian ideas had been driven out of the national discourse, in large part because they were politically inconvenient for the increasingly empowered 1 percent.
In summary, then, the role of rising inequality in creating the economic crisis of 2008 is debatable; it probably did play an important role, if nothing else by encouraging the financial deregulation that set the stage for crisis. What seems very clear to us, however, is that rising inequality played a central role in causing an ineffective response once crisis hit. Inequality bred a polarized political system, in which the right went all out to block any and all efforts by a modestly liberal president to do something about job creation. And rising inequality also gave rise to what we have called a Dark Age of macroeconomics, in which hard-won insights about how depressions happen and what to do about them were driven out of the national discourse, even in academic circles.
This implies, we believe, that the issue of inequality and the problem of economic recovery are not as separate as a purely economic analysis might suggest. We’re not going to have a good macroeconomic policy again unless inequality, and its distorting effect on policy debate, can be curbed.
Saturday, April 14, 2012
Howard Bryant - The Last Hero: A Life of Henry Aaron
I seem to be on this run of reading sports biographies. The best ones are baseball biographies. This is the sport I understand best. This is a good one so far. I was always partial to Henry Aaron, thinking he was just as good as Mantle and Mays but didn't get the recognition he deserved even after breaking Ruth's record. More later.
Wednesday, April 11, 2012
Conservatism and Race
Conservatism is inherently racist for the reasons outlined below.
The Right's Racist Problem
Timothy Noah
April 11, 2012
National Review has jettisoned another writer for associating himself with racism. Robert Weissberg, an occasional contributor to the magazine's Phi Beta Cons blog, will no longer do so, Rich Lowry has declared, because he "participated in an American Renaissance conference where he delivered a noxious talk about the future of white nationalism." Whoops! This comes hard on the heels of John Derbyshire's dismissal, also for racially offensive commentary. By the logic of newsmagazine commentary, one more racist at National Review will give us a trend. But I'm not sure we really need one more. Ann Coulter was dismissed years ago for following up a column expressing religious intolerance toward Muslims with one making a snarky reference to "swarthy males." A decade before that, William F. Buckley fired the late Joe Sobran on grounds of anti-Semitism. Indeed, National Review has a laudable tradition, going back to its founding, of dissociating itself from racists (even though some of its own early editorial comment about race was fairly reprehensible). So, to some extent, does the right in general. Remember when Sen. Trent Lott made his fatuous comment, in 2002, praising Sen. Strom Thurmond's 1948 Dixiecrat campaign for president? He lost the Republican leadership over that. It wasn't liberals who demanded he step down; we liberals were only too happy to see such an obviously unacceptable character retain a leadership position in the GOP. He was jettisoned by fellow Republicans, who were extremely sensitive to any accusation that the GOP was racist.
The question is why this should be so frequently necessary. Conservatism isn't an inherently racist belief system, and most conservatives are no more racist than most liberals are. But it is true that if you're a racist you're likely to gravitate toward conservatism, and toward the Republican party, for certain fairly obvious reasons. Its modern resurgence after the 1964 Civil Rights Bill (passed with more Republican support than Democratic, though opposed by that year's Republican presidential nominee) was fueled by southern white migration from the Democratic party. The Republican party's small-government philosophy has limited federal interference with discriminatory practices at the state and local level, and with racial bigotry generally. The Republican majority on the Supreme Court will likely soon abolish affirmative action altogether. Charles Murray, who two decades ago published a book arguing that blacks were intellectually inferior to whites, is beloved by conservative commentators ("Arguably the most consequential social scientist alive"--Jonah Goldberg) even as the larger social science community regards Murray as a crackpot. The Republican party's tax policies favor rich "job creators," who tend disproportionately to be white, and its opposition to the welfare state--initially to cash transfers, then to non-cash assistance like food stamps, and finally even to unemployment benefits--tends to harm lower-income people, who tend disproportionately to be black. The Republican party's criminal justice policies have put an appallingly high proportion of black men in jail, often for petty drug offenses. Republicans tend to favor the death penalty, which leads to execution of a disproportionate number of blacks. Southern Republicans struggle to suppress a dewy-eyed sentimentality for the Confederacy, even though their party was founded in opposition to slavery, and even though its greatest leader, Abraham Lincoln, fought a war against Confederate secession. None of these affinities or policies is inherently racist. But taken together, they're going to be a lot more attractive to racists than the liberal policies of the Democratic party.
Dismissing the occasional racist is one way to deal with the problem. Reconsidering aspects of its ideology that repel African Americans and other minorities and attract allies with toxic views on race would be another. But that won't happen anytime soon.
Monday, April 9, 2012
Conservatism = Low Effort Thinking
April 9, 2012
Conservative Politics, 'Low-Effort' Thinking Linked In New Study
The Huffington Post | By David Freeman
Posted: 04/9/2012 10:44 am | Updated: 04/9/2012 2:16 pm
As The Huffington Post reported in February, a study published in the journal "Psychological Science" showed that children who score low on intelligence tests gravitate toward socially conservative political views in adulthood--perhaps because conservative ideologies stress "structure and order" that make it easier to understand a complicated world.
Ouch.
And now there's the new study linking conservative ideologies to "low-effort" thinking.
"People endorse conservative ideology more when they have to give a first or fast response," the study's lead author, University of Arkansas psychologist Dr. Scott Eidelman, said in a written statement released by the university.
Does the finding suggest that conservatives are lazy thinkers?
"Not quite," Dr. Eidelman told The Huffington Post in an email. "Our research shows that low-effort thought promotes political conservatism, not that political conservatives use low-effort thinking."
For the study, a team of psychologists led by Dr. Eidelman asked people about their political viewpoints in a bar and in a laboratory setting.
Bar patrons were asked about social issues before blowing into a Breathalyzer. As it turned out, the political viewpoints of patrons with high blood alcohol levels were more likely to be conservative than were those of patrons whose blood alcohol levels were low.
But it wasn't just the alcohol talking, according to the statement. When the researchers conducted similar interviews in the lab, they found that people who were asked to evaluate political ideas quickly or while distracted were more likely to express conservative viewpoints.
"Keeping people from thinking too much...or just asking them to deliberate or consider information in a cursory manner can impact people's political attitudes, and in a way that consistently promotes political conservatism," Dr. Eidelman said in the email.
The study was published online in the journal "Personality and Social Psychology Bulletin."
What do you think? Are conservatives less intelligent than liberals--or more intelligent? And is conservatism a matter of lazy thinking?
Sunday, April 8, 2012
Origin of Speciousness
April 8, 2012, 5:18 pm
Origin of Speciousness
I was unhappy with President Obama’s decision to call Republicans “social Darwinists” — not because I thought it was wrong, but because I wondered how many voters would get his point. How many people know who Herbert Spencer was?
It turns out, however, that right-wing intellectuals are furious, because … well, it’s a bit puzzling. One complaint is that some 19th-century social Darwinists were racists; well, lots of 19th-century people in general were racists, and racism is not the core of the doctrine. The other is that modern conservatives don’t literally want to see poor people die; so?
As Jonathan Chait says in the linked piece above, the real defining characteristic of social Darwinism is the notion that harsh inequality is both necessary and right. And that’s absolutely what today’s right believes — which is the point all the faux outrage about the Darwinist label is meant to obscure.
Obama vs. Romney: A Stark Choice
Linda Hirshman, Sunday, Apr 8, 2012, 8:40 AM CDT
Obama v. Romney: The philosopher candidates
Not since the New Deal have two candidates embodied the parties' philosophical divide as much as Romney and Obama
By Linda Hirshman
It’s the ideology, stupid. Democrats think Americans should learn to play well with others. Republicans do not.
This should not be news. A revived conservative movement dating back to the Fifties has succeeded in locking the Republicans into the philosophy of possessive individualism that was always a strand in the American story. In turn, they have portrayed the Democrats as the collectivist Other. This election the Democrats, or at least their presidential candidate, Barack Obama, has decided he can no longer duck the fight. In a fiery speech to the American Society of Newspaper Editors last week Obama put it baldly: “This is not just another run of the mill political debate. I’ve said it’s the defining issue of our time and I believe it. What leaders in both parties have traditionally understood is that the [Democratic government programs] aren’t part of some scheme to redistribute wealth from one group to another. They are expressions of the fact that we are one nation.”
Republicans, Obama charged, are engaged in an attempt to impose a “radical” change of vision on the country – “thinly veiled social Darwinism” that allows us to “think only about ourselves” and not about “our fellow citizens with whom we share a community.”
After decades of Democratic evasion, from Mike Dukakis’ “it’s about competence” to the new Democrats’ “reinventing government,” Obama chose a great moment to join the fight between individualism and community at last. This cannot be happenstance. The Republicans have chosen the perfect symbol for the Democrats to confront – the stunningly wealthy scion of a privileged family who made his fortune in the most individualistic of all capitalist enterprises – leveraged buyouts.
Mitt Romney lost no time in taking up the ideological challenge. Just days after Obama laid down the gauntlet, Romney agreed that this was the philosophy election: “This November, we will face a defining decision. Our choice will not be one of party or personality. This election will be about principle.”
And the principle is that “if you worked hard, and took some risks, that there was the opportunity to build a better life for your family and for the next generation.”
Let the great race begin: “Instead of picking winners and losers with taxpayer dollars, I will make sure that every entrepreneur gets a fair shot and that every business plays by the same rules. I will create an environment where our businesses and workers can compete and win.” And not only will the strongest Americans win, the race is wide open: “I will welcome the best and the brightest to our shores.”
Americans should be grateful to both candidates for finally giving us the choice (not the echo) as the original conservative revivalist, Barry Goldwater, presciently put it so long ago. Smart men, they both understand that it’s ideology all the way down. As we philosophers have always known (I love saying that in an election piece), political contests are ultimately about what each side counts as evidence and what the evidence tells them about what it means to be a good person.
What can people know? The Democratic vision recognizes that people can feel the pain of others. “Who are these Americans?” Obama asks, invoking that capacity. “Many are someone’s grandparents who, without Medicaid, won’t be able to afford nursing home care. Many are poor children. Some are middle-class families who have children with autism or Down’s Syndrome. Some are kids with disabilities so severe that they require 24-hour care. These are the people who count on Medicaid.” The Republican vision does not see pain; it sees only “success.” Romney’s characters are “risk takers,” “winners” (and “losers”) in “races” won, presumably, by the fittest, who then raise up . . . their families.
People who can see the pain of others and the risks that cannot be controlled by individual effort make one kind of politics. People who see success and think that people are responsible for protecting themselves and their families make a different kind of politics.
When the debate that Obama and Romney have revived first arose in Western politics centuries ago, people had not made their own politics for a long time. They lived under the hereditary rule of various kings and princes. So they argued mostly about what kinds of societies they should make, being human. But Americans have been governing themselves for more than two hundred years. It is an unchallenged assumption of American politics that Americans are basically the good guys. So the debate is shaping up over what kind of societies Americans do make, being American. Barack Obama called it a matter of patriotism, not human nature.
Viewed from 2012, Obama has the better argument, as a society of some empathy, collective risk management and opportunity creation has been the norm at least since the Progressive movement of the early twentieth century and certainly since Franklin Roosevelt’s New Deal in 1933. Predictably, then, Obama calls the current Republican ideological crusade “an attempt to impose a radical vision on our country.” Not surprisingly, Romney harks back to a mistier America, where his “grandfather” pulled himself up by his bootstraps and started the Romney family success drama.
To the Republican ideology, American politics since the New Deal at least is a usurpation, not a genuine expression of American character. Not coincidentally, constitutional lawyers around the country are holding their collective breath waiting to see if an ideologically conservative majority of the Supreme Court will reverse the health care law and roll back the understanding of the constitution that made the New Deal possible as a first step in restoring the old order.
The election is the second.
New Deal analogies are rampant these days. This is not hyperbole. Not since the election of 1932 has the direction of America’s future been so starkly contested. Let the wild rumpus begin.
Saturday, April 7, 2012
The Babe Ruth Biography (2)
Finally, I think Babe Ruth would have been fun to be around, but I wouldn't want him as a friend based on what I read in the Leigh Montville biography. He would be too much trouble. His take on bowling late in life is interesting. He wasn't interested in his score; all he cared about was simply knocking down pins. He would bowl and bowl and bowl just to add up his total pinfall. Very strange, if you ask me!
A Universe from Nothing? (2)
By Lawrence M. Krauss
April 1, 2012
The illusion of purpose and design is perhaps the most pervasive illusion about nature that science has to confront on a daily basis. Everywhere we look, it appears that the world was designed so that we could flourish.
The position of the Earth around the sun, the presence of organic materials and water and a warm climate — all make life on our planet possible. Yet, with perhaps 100 billion solar systems in our galaxy alone, with ubiquitous water, carbon and hydrogen, it isn't surprising that these conditions would arise somewhere. And as to the diversity of life on Earth — as Darwin described more than 150 years ago and experiments ever since have validated — natural selection in evolving life forms can establish both diversity and order without any governing plan.
As a cosmologist, a scientist who studies the origin and evolution of the universe, I am painfully aware that our illusions nonetheless reflect a deep human need to assume that the existence of the Earth, of life and of the universe and the laws that govern it require something more profound. For many, to live in a universe that may have no purpose, and no creator, is unthinkable.
But science has taught us to think the unthinkable. Because when nature is the guide — rather than a priori prejudices, hopes, fears or desires — we are forced out of our comfort zone. One by one, pillars of classical logic have fallen by the wayside as science progressed in the 20th century, from Einstein's realization that measurements of space and time were not absolute but observer-dependent, to quantum mechanics, which not only put fundamental limits on what we can empirically know but also demonstrated that elementary particles and the atoms they form are doing a million seemingly impossible things at once.
And so it is that the 21st century has brought new revolutions and new revelations on a cosmic scale. Our picture of the universe has probably changed more in the lifetime of an octogenarian today than in all of human history. Eighty-seven years ago, as far as we knew, the universe consisted of a single galaxy, our Milky Way, surrounded by an eternal, static, empty void. Now we know that there are more than 100 billion galaxies in the observable universe, which began with the Big Bang 13.7 billion years ago. In its earliest moments, everything we now see as our universe — and much more — was contained in a volume smaller than the size of a single atom.
And so we continue to be surprised. We are like the early mapmakers redrawing the picture of the globe even as new continents were discovered. And just as those mapmakers confronted the realization that the Earth was not flat, we must confront facts that change what have seemed to be basic and fundamental concepts. Even our idea of nothingness has been altered.
We now know that most of the energy in the observable universe can be found not within galaxies but outside them, in otherwise empty space, which, for reasons we still cannot fathom, "weighs" something. But the use of the word "weight" is perhaps misleading because the energy of empty space is gravitationally repulsive. It pushes distant galaxies away from us at an ever-faster rate. Eventually they will recede faster than light and will be unobservable.
This has changed our vision of the future, which is now far bleaker. The longer we wait, the less of the universe we will be able to see. In hundreds of billions of years astronomers on some distant planet circling a distant star (Earth and our sun will be long gone) will observe the cosmos and find it much like our flawed vision at the turn of the last century: a single galaxy immersed in a seemingly endless dark, empty, static universe.
Out of this radically new image of the universe at large scale have also come new ideas about physics at a small scale. The Large Hadron Collider has given tantalizing hints that the origin of mass, and therefore of all that we can see, is a kind of cosmic accident. Experiments in the collider bolster evidence of the existence of the "Higgs field," which apparently just happened to form throughout space in our universe; it is only because all elementary particles interact with this field that they have the mass we observe today.
Most surprising of all, combining the ideas of general relativity and quantum mechanics, we can understand how it is possible that the entire universe, matter, radiation and even space itself could arise spontaneously out of nothing, without explicit divine intervention. Quantum mechanics' Heisenberg uncertainty principle expands what can possibly occur undetected in otherwise empty space. If gravity too is governed by quantum mechanics, then even whole new universes can spontaneously appear and disappear, which means our own universe may not be unique but instead part of a "multiverse."
As particle physics revolutionizes the concepts of "something" (elementary particles and the forces that bind them) and "nothing" (the dynamics of empty space or even the absence of space), the famous question, "Why is there something rather than nothing?" is also revolutionized. Even the very laws of physics we depend on may be a cosmic accident, with different laws in different universes, which further alters how we might connect something with nothing. Asking why we live in a universe of something rather than nothing may be no more meaningful than asking why some flowers are red and others blue.
Perhaps most remarkable of all, not only is it now plausible, in a scientific sense, that our universe came from nothing, but if we ask what properties a universe created from nothing would have, it appears that these properties resemble precisely the universe we live in.
Does all of this prove that our universe and the laws that govern it arose spontaneously without divine guidance or purpose? No, but it means it is possible.
And that possibility need not imply that our own lives are devoid of meaning. Instead of divine purpose, the meaning in our lives can arise from what we make of ourselves, from our relationships and our institutions, from the achievements of the human mind.
Imagining living in a universe without purpose may prepare us to better face reality head on. I cannot see that this is such a bad thing. Living in a strange and remarkable universe that is the way it is, independent of our desires and hopes, is far more satisfying for me than living in a fairy-tale universe invented to justify our existence.
Lawrence M. Krauss is director of the Origins Project at Arizona State University. His newest book is "A Universe From Nothing."
Copyright © 2012, Los Angeles Times
April 1, 2012
The illusion of purpose and design is perhaps the most pervasive illusion about nature that science has to confront on a daily basis. Everywhere we look, it appears that the world was designed so that we could flourish.
The position of the Earth around the sun, the presence of organic materials and water and a warm climate — all make life on our planet possible. Yet, with perhaps 100 billion solar systems in our galaxy alone, with ubiquitous water, carbon and hydrogen, it isn't surprising that these conditions would arise somewhere. And as to the diversity of life on Earth — as Darwin described more than 150 years ago and experiments ever since have validated — natural selection in evolving life forms can establish both diversity and order without any governing plan.
As a cosmologist, a scientist who studies the origin and evolution of the universe, I am painfully aware that our illusions nonetheless reflect a deep human need to assume that the existence of the Earth, of life and of the universe and the laws that govern it require something more profound. For many, to live in a universe that may have no purpose, and no creator, is unthinkable.
But science has taught us to think the unthinkable. Because when nature is the guide — rather than a priori prejudices, hopes, fears or desires — we are forced out of our comfort zone. One by one, pillars of classical logic have fallen by the wayside as science progressed in the 20th century, from Einstein's realization that measurements of space and time were not absolute but observer-dependent, to quantum mechanics, which not only put fundamental limits on what we can empirically know but also demonstrated that elementary particles and the atoms they form are doing a million seemingly impossible things at once.
And so it is that the 21st century has brought new revolutions and new revelations on a cosmic scale. Our picture of the universe has probably changed more in the lifetime of an octogenarian today than in all of human history. Eighty-seven years ago, as far as we knew, the universe consisted of a single galaxy, our Milky Way, surrounded by an eternal, static, empty void. Now we know that there are more than 100 billion galaxies in the observable universe, which began with the Big Bang 13.7 billion years ago. In its earliest moments, everything we now see as our universe — and much more — was contained in a volume smaller than the size of a single atom.
And so we continue to be surprised. We are like the early mapmakers redrawing the picture of the globe even as new continents were discovered. And just as those mapmakers confronted the realization that the Earth was not flat, we must confront facts that change what have seemed to be basic and fundamental concepts. Even our idea of nothingness has been altered.
We now know that most of the energy in the observable universe can be found not within galaxies but outside them, in otherwise empty space, which, for reasons we still cannot fathom, "weighs" something. But the use of the word "weight" is perhaps misleading because the energy of empty space is gravitationally repulsive. It pushes distant galaxies away from us at an ever-faster rate. Eventually they will recede faster than light and will be unobservable.
This has changed our vision of the future, which is now far bleaker. The longer we wait, the less of the universe we will be able to see. In hundreds of billions of years astronomers on some distant planet circling a distant star (Earth and our sun will be long gone) will observe the cosmos and find it much like our flawed vision at the turn of the last century: a single galaxy immersed in a seemingly endless dark, empty, static universe.
Out of this radically new image of the universe at large scale have also come new ideas about physics at a small scale. The Large Hadron Collider has given tantalizing hints that the origin of mass, and therefore of all that we can see, is a kind of cosmic accident. Experiments in the collider bolster evidence of the existence of the "Higgs field," which apparently just happened to form throughout space in our universe; it is only because all elementary particles interact with this field that they have the mass we observe today.
Most surprising of all, combining the ideas of general relativity and quantum mechanics, we can understand how it is possible that the entire universe, matter, radiation and even space itself could arise spontaneously out of nothing, without explicit divine intervention. Quantum mechanics' Heisenberg uncertainty principle expands what can possibly occur undetected in otherwise empty space. If gravity too is governed by quantum mechanics, then even whole new universes can spontaneously appear and disappear, which means our own universe may not be unique but instead part of a "multiverse."
As particle physics revolutionizes the concepts of "something" (elementary particles and the forces that bind them) and "nothing" (the dynamics of empty space or even the absence of space), the famous question, "Why is there something rather than nothing?" is also revolutionized. Even the very laws of physics we depend on may be a cosmic accident, with different laws in different universes, which further alters how we might connect something with nothing. Asking why we live in a universe of something rather than nothing may be no more meaningful than asking why some flowers are red and others blue.
Perhaps most remarkable of all, not only is it now plausible, in a scientific sense, that our universe came from nothing, but if we ask what properties a universe created from nothing would have, it appears that these properties resemble precisely the universe we live in.
Does all of this prove that our universe and the laws that govern it arose spontaneously without divine guidance or purpose? No, but it means it is possible.
And that possibility need not imply that our own lives are devoid of meaning. Instead of divine purpose, the meaning in our lives can arise from what we make of ourselves, from our relationships and our institutions, from the achievements of the human mind.
Imagining living in a universe without purpose may prepare us to better face reality head on. I cannot see that this is such a bad thing. Living in a strange and remarkable universe that is the way it is, independent of our desires and hopes, is far more satisfying for me than living in a fairy-tale universe invented to justify our existence.
Lawrence M. Krauss is director of the Origins Project at Arizona State University. His newest book is "A Universe From Nothing."
Copyright © 2012, Los Angeles Times
Friday, April 6, 2012
The Social Darwinist Republicans
by Jonathan Chait
One thing worth making clear at the outset: "social Darwinism" doesn't imply agreement with Darwin himself.
It’s Okay — Call Republicans Social Darwinists
By Jonathan Chait
Three days after President Obama used the term “social Darwinism,” the phrase continues to rankle conservatives. David Brooks, Geoffrey Norman and Mitt Romney adviser Greg Mankiw all find the phrase (which I defended) to be a vicious smear.
Part of the problem here is that Brooks and Mankiw each have a different idea of what social Darwinism means. Mankiw defines it as a belief “that the strongest or fittest should survive and flourish in society, while the weak and unfit should be allowed to die.” Brooks describes it as “a 19th-century philosophy that held, in part, that Aryans and Northern Europeans are racially superior to brown and Mediterranean peoples.”
Neither definition describes the philosophy of Paul Ryan and the Republican budget. But neither really captures the meaning of the term “social Darwinist,” either. I managed to track down and look over my old copy of Richard Hofstadter’s “Social Darwinism in American Thought,” and it describes a fairly wide range of right-wing thought. But the main guiding principle is a defense of the free market as a moral arbiter, rather than merely a tool for creating wealth. Just as natural selection allows better-adapted species to thrive and poorly adapted ones to die out, the free market rewards talent and hard work and punishes laziness or lack of talent, in a perfect or near-perfect way.
He quotes William Graham Sumner, who wrote, “'the strong’ and ‘the weak’ are terms which admit of no definition unless they are made equivalent to the industrious and the idle, the frugal and the extravagant.” Conservatives have been echoing that logic repeatedly the last few years. And their embrace of health care as an earned privilege rather than a right actually comes perilously close to endorsing the more radical versions of social Darwinism.
Now, it is true that some social Darwinists took the Darwinian model to its full, literal implications. Others took the idea and applied it in racist ways, and to relations between nations. But not all of them did, and this was not the essence of their belief system. The essence was a more figurative translation of the principles of natural selection onto the workings of the marketplace, justifying it as a system that rewarded virtue and punished vice. That principle is pretty closely echoed by Mankiw, who writes:
People should get what they deserve. A person who contributes more to society deserves a higher income that reflects those greater contributions. Society permits him that higher income not just to incentivize him, as it does according to utilitarian theory, but because that income is rightfully his.
Mankiw objects to my using this quote to describe his views, because he also allows for the possibility of some transfers from rich to poor. But of course, many of the social Darwinists endorsed charity for the poor, which is obviously incompatible with the belief that the poor should die off for the good of society.
As I wrote before, “social Darwinism” is a contested term. In its most important ways I think it describes the philosophy of the current Republican Party pretty well.
The Babe Ruth Biography
Leigh Montville – The Big Bam
This is the definitive biography of Babe Ruth for our time.
Little is known of George Herman Ruth Jr.’s early years. The main thing we know is that his home life must not have been good: he was taken to live at the St. Mary’s Industrial School for Boys at the age of 7. Why did his parents take him to a place for orphans and wayward boys? According to this book, we don’t know.
Luckily for the Babe, he was sent to a place where sports was big. He had ample opportunity to play baseball, and play and play and play he did.
Early in life at this school for boys, his racial heritage was questioned. Did Babe Ruth have a black heritage? Was he a black man? This is the biggest revelation in this book. I had no idea this was an issue in his life. The question, if it is a legitimate question, has never been answered conclusively.
The author uses the word “fog” where little is known about Ruth’s life. His life, particularly his early life, is like a jigsaw puzzle with many pieces missing. Those missing pieces will likely always be missing. The fog rolls in at many points in Ruth’s life.
He started as a pitcher, and a good one, and played some outfield for the Red Sox. But when he was sold to the Yankees in 1920, in the most famous transaction in baseball history (for the Red Sox, the beginning of “The Curse of the Bambino”), he became a full-time right fielder.
There was his glorious career with the Yankees. He was a great baseball player, maybe the best ever to play the game, certainly the most famous baseball player of all time. Yet he was less than a sterling human being. He was hyperactive. According to this book, he had to be active all the time. He couldn’t sit still. He was not a good husband and father. He was estranged from his wife Helen when she perished in a fire in 1929. He was rarely around his daughter Dorothy. He married a second time, to a woman named Claire, and was still married to her when he died in 1948. He was much closer to Claire’s daughter than to his own natural daughter. Great baseball player, no doubt; less than a great human being.
After he retired from baseball following the 1935 season, Ruth tried his best to get a managing job. He was briefly the first base coach of the Dodgers, but a managerial job never came his way. Nobody in baseball wanted to hire the great Babe Ruth as manager. Was he not smart enough? Was he not stable enough? Was it racial? I had never heard that suggestion before.
He was considered a big kid, a spoiled kid, all of his life. Because he was Babe Ruth, he played by his own rules and got away with it. At least he was open-handed with his money, unlike Joe DiMaggio. Yes, he was a spoiled brat, but more likeable than Joe D. P. 120
All of his life there were questions about his racial heritage. Before reading this book, I had no idea this was the case. This suspicion might have kept him from getting a managerial job, for it is obvious he would have taken whatever was offered him.
“The question of race would linger. Was the Babe, by legal definition, a black man? He had heard the bad words for as long as he played. He had been handed the wrongful stereotype that would be attached to the black athlete---the natural talent, abilities transmitted by the touch of God, not acquired through industry and intelligence. He never had a chance to manage a team. So many of the pieces fit. If not a black man, he had been touched by the prejudices against the black man.
The truth? The fog settled in for one last time.” P. 365
He did not work a regular job after retiring from baseball. He played a lot of golf, hunted, fished, and even took up bowling!
Babe Ruth died of cancer on August 16, 1948, at age 53, passing away in his sleep. Elvis Presley would die on the same date, August 16, twenty-nine years later.
“The fascination with career and life continues. He is a bombastic, sloppy hero from our bombastic, sloppy history, origins undetermined, a folk tale of American success. His moon face is as recognizable today as it was when he stared out at Tom Zachary on a certain September afternoon in 1927.” P. 367
Sunday, April 1, 2012
A Quantum Theory of Mitt Romney
By DAVID JAVERBAUM
Published: March 31, 2012
THE recent remark by Mitt Romney’s senior adviser Eric Fehrnstrom that upon clinching the Republican nomination Mr. Romney could change his political views “like an Etch A Sketch” has already become notorious. The comment seemed all too apt, an apparent admission by a campaign insider of two widely held suspicions about Mitt Romney: that he is a) utterly devoid of any ideological convictions and b) filled with aluminum powder.
[Figure 1: The famous “Schrödinger’s candidate” scenario. For as long as Mitt Romney remains in this box, he is both a moderate and a conservative.]
The comparison may have been unfortunate, but Mr. Fehrnstrom’s impulse to analogize is understandable. Metaphors like these, inexact as they are, are the only way the layman can begin to grasp the strange phantom world that underpins the very fabric of not only the Romney campaign but also of Mitt Romney in general. For we have entered the age of quantum politics; and Mitt Romney is the first quantum politician.
A bit of context. Before Mitt Romney, those seeking the presidency operated under the laws of so-called classical politics, laws still followed by traditional campaigners like Newt Gingrich. Under these Newtonian principles, a candidate’s position on an issue tends to stay at rest until an outside force — the Tea Party, say, or a six-figure credit line at Tiffany — compels him to alter his stance, at a speed commensurate with the size of the force (usually large) and in inverse proportion to the depth of his beliefs (invariably negligible). This alteration, framed as a positive by the candidate, then provokes an equal but opposite reaction among his rivals.
But the Romney candidacy represents literally a quantum leap forward. It is governed by rules that are bizarre and appear to go against everyday experience and common sense. To be honest, even people like Mr. Fehrnstrom who are experts in Mitt Romney’s reality, or “Romneality,” seem bewildered by its implications; and any person who tells you he or she truly “understands” Mitt Romney is either lying or a corporation.
Nevertheless, close and repeated study of his campaign in real-world situations has yielded a standard model that has proved eerily accurate in predicting Mitt Romney’s behavior in debate after debate, speech after speech, awkward look-at-me-I’m-a-regular-guy moment after awkward look-at-me-I’m-a-regular-guy moment, and every other event in his face-time continuum.
The basic concepts behind this model are:
Complementarity. In much the same way that light is both a particle and a wave, Mitt Romney is both a moderate and a conservative, depending on the situation (Fig. 1). It is not that he is one or the other; it is not that he is one and then the other. He is both at the same time.
Probability. Mitt Romney’s political viewpoints can be expressed only in terms of likelihood, not certainty. While some views are obviously far less likely than others, no view can be thought of as absolutely impossible. Thus, for instance, there is at any given moment a nonzero chance that Mitt Romney supports child slavery.
Uncertainty. Frustrating as it may be, the rules of quantum campaigning dictate that no human being can ever simultaneously know both what Mitt Romney’s current position is and where that position will be at some future date. This is known as the “principle uncertainty principle.”
Entanglement. It doesn’t matter whether it’s a proton, neutron or Mormon: the act of observing cannot be separated from the outcome of the observation. By asking Mitt Romney how he feels about an issue, you unavoidably affect how he feels about it. More precisely, Mitt Romney will feel every possible way about an issue until the moment he is asked about it, at which point the many feelings decohere into the single answer most likely to please the asker.
Noncausality. The Romney campaign often violates, and even reverses, the law of cause and effect. For example, ordinarily the cause of getting the most votes leads to the effect of being considered the most electable candidate. But in the case of Mitt Romney, the cause of being considered the most electable candidate actually produces the effect of getting the most votes.
Duality. Many conservatives believe the existence of Mitt Romney allows for the possibility of the spontaneous creation of an “anti-Romney” (Fig. 2) that leaps into existence and annihilates Mitt Romney. (However, the science behind this is somewhat suspect, as it is financed by Rick Santorum, for whom science itself is suspect.)
What does all this bode for the general election? By this point it won’t surprise you to learn the answer is, “We don’t know.” Because according to the latest theories, the “Mitt Romney” who seems poised to be the Republican nominee is but one of countless Mitt Romneys, each occupying his own cosmos, each supporting a different platform, each being compared to a different beloved children’s toy but all of them equally real, all of them equally valid and all of them running for president at the same time, in their own alternative Romnealities, somewhere in the vast Romniverse.
And all of them losing to Barack Obama.