In
the immediate aftermath of World War II, a wide range of thinkers, both
secular and religious, struggled to make sense of the profound evil of
war, particularly Nazi Germany and the Holocaust. One such effort, “The
Authoritarian Personality” by Theodor Adorno and three co-authors,
opened up a whole new field of political psychology—initially a small
niche within the broader field of social psychology—which developed
fitfully over the years, but became an increasingly robust subject area
in the 1980s and ’90s, fleshing out a number of distinct areas of cognitive
processing in which liberals and conservatives differed from one
another. Liberal/conservative differences were not the sole concern of
this field, but they did appear repeatedly across a growing range of
different sorts of measures, including the inclination to justify the
existing social order, whatever it might be, an insight developed by
John Jost, starting in the 1990s, under the rubric of “system
justification theory.”
The
field of political psychology gained increased visibility in the 2000s
as conservative Republicans controlled the White House and Congress
simultaneously for the first time in nearly half a century, and took
the nation in an increasingly divisive direction. Most notably, John
Dean’s 2006 bestseller, “Conservatives Without Conscience,” popularized
two of the more striking developments of the 1980s and 90s, the
constructs of right-wing authoritarianism and social dominance
orientation. A few years before that, a purely academic paper,
“Political Conservatism as Motivated Social Cognition,” by Jost and
three other prominent researchers in the field, caused a brief spasm of
political reaction which led some in Congress to talk of defunding the
entire field.
But as the Bush era ended, and Barack Obama’s
rhetoric of transcending right/left differences captured the national
imagination, an echo of that sentiment appeared in the field of political
psychology as well. Known as “moral foundations theory,” most closely
associated with psychologist Jonathan Haidt, and popularized in his book
“The Righteous Mind,” it argued that a too-narrow focus on
concerns of fairness and care/harm avoidance had diminished researchers’
appreciation for the full range of moral concerns, especially a
particular subset of distinct concerns which conservatives appear to
value more than liberals do. To restore balance to the field,
researchers would have to broaden their horizons—and even, Haidt argued, engage
in affirmative action to recruit conservatives into the field of
political psychology. This was, in effect, an argument invoking liberal
values—fairness, inclusion, openness to new ideas, etc.—and using them
to criticize or even attack what was characterized as a liberal
orthodoxy, or even a church-like, closed-minded tribal moral community.
Yet,
to some, these arguments seemed to gloss over, or even outright
dismiss, a wide body of data, not dogma, from decades of previous
research. While people were willing to consider new information, and new
perspectives, there was a reluctance to throw out the baby with the
bathwater, as it were. In the most nitty-gritty sense, the question came
down to this: Was the rhetorical framing of the moral foundations
argument actually congruent with the detailed empirical findings in the
field? Or did it serve more to blur important distinctions that were
solidly grounded in rigorous observation?
Recently, a number of
studies have raised questions about moral foundations theory in
precisely these terms—are the moral foundations more congenial to
conservatives actually reflective of non-moral or even immoral
tendencies which have already been extensively studied? Late last year, a
paper co-authored by Jost—“Another Look at Moral Foundations
Theory”—built on these earlier studies to make the strongest case yet
along these lines. To gain a better understanding of the field as a
whole, moral foundations theory as a challenge within it, the problems
that theory is now confronting, and what sort of resolution—and new
frontiers—may lie ahead for the field, Salon spoke with John Jost. In
the end, he suggested, moral foundations theory and system justification
theory may end up looking surprisingly similar to one another, rather
than being radically at odds.
You’re best known for your
work developing system justification theory, followed by your broader
work on developing an integrated account of political ideology. You
recently co-authored a paper “Another Look at Moral Foundations Theory,”
which I want to focus on, but in order to do so coherently, I thought
it best to begin by first asking you about your own work, and that of
others you’ve helped integrate, before turning to moral foundations
theory generally, and this critical paper in particular.
So,
with that in mind as a game plan, could you briefly explain what system
justification theory is all about, how it was that you became
interested in the subject matter, and why others should be interested in
it as well?
When I was a graduate student in social
psychology at Yale back in the 1990s, I began to wonder about a set of
seemingly unrelated phenomena that were all counterintuitive in some way
and in need of explanation. So I asked: Why do people stay in abusive
relationships, why do women feel that they are entitled to lower
salaries than men, and why do African American children come to think
that white dolls are more attractive and desirable? Why do people blame
victims of injustice and why do victims of injustice sometimes blame
themselves? Why is it so difficult for unions and other organizations to
get people to stand up for themselves, and why do we find personal and
social change to be so difficult, even painful? Of course, not everyone
exhibits these patterns of behavior at all times, but many people do,
and it seemed to me that these phenomena were not well explained by
existing theories in social science.
And so it occurred to me that
there might be a common denominator—at the level of social
psychology—in these seemingly disparate situations. Perhaps human beings
are in some fairly subtle way prone to accept, defend, justify, and
rationalize existing social arrangements and to resist attempts to
change the status quo, however well-meaning those attempts may be. In
other words, we may be motivated, to varying degrees, to justify the
social systems on which we depend, to see them as relatively good, fair,
legitimate, desirable, and so on.
This did not strike me as
implausible, given that social psychologists had already demonstrated
that we are often motivated to defend and justify ourselves and the
social groups to which we belong. Most of us believe that we are better
drivers than the average person and more fair, too, and many of us
believe that our schools or sports teams or companies are better than
their rivals and competitors. Why should we not also want to believe
that the social, economic, and political institutions that are familiar
to us are, all things considered, better than the alternatives? To
believe otherwise is at least somewhat painful, insofar as it would force
us to confront the possibility that our lives and those of others around
us may be subject to capriciousness, exploitation, discrimination,
injustice, and that things could be different, better—but they are not.
In
2003, a paper you co-authored, “Political Conservatism as Motivated
Social Cognition,” caused quite a stir politically—there were even brief
rumblings in Congress to cut off all research funding, not just for you,
but for an entire broad field of research, though you managed to quell
those rumblings in a subsequent Washington Post op-ed. That paper might
well be called the tip of the iceberg of a whole body of work you’ve
helped draw together and have continued to work on since then. So, first of
all, what was that paper about?
We wanted to understand
the relationship, if any, between psychological conservatism—the mental
forces that contribute to resistance to change—and political
conservatism as an ideology or a social movement. My colleagues and I
conducted a quantitative, meta-analytic review of nearly fifty years of
research conducted in 12 different countries and involving over 22,000
research participants or individual cases. We found 88 studies that had
investigated correlations between personality characteristics and
various psychological needs, motives, and tendencies, on one hand, and
political attitudes and opinions, on the other.
And what did it show?
We
found pretty clear and consistent correlations between psychological
motives to reduce and manage uncertainty and threat—as measured with
standard psychometric scales used to gauge personal needs for order,
structure, and closure, intolerance of ambiguity, cognitive simplicity
vs. complexity, death anxiety, perceptions of a dangerous world,
etc.—and identification with and endorsement of politically conservative
(vs. liberal) opinions, leaders, parties, and policies.
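For readers curious about the mechanics behind such a review: a meta-analysis of this kind pools correlation coefficients from many studies into a single weighted estimate. The following is a minimal illustrative sketch in Python of the standard Fisher r-to-z pooling approach; the study values are invented placeholders, not effect sizes from the 2003 paper.

```python
# Minimal sketch of pooling correlations across studies via Fisher's r-to-z
# transformation and inverse-variance weighting. All numbers are made up.
import math

# (correlation r, sample size n) for a handful of hypothetical studies
studies = [(0.30, 120), (0.22, 300), (0.41, 85), (0.18, 450)]

def fisher_z(r):
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a pooled z to a correlation."""
    return math.tanh(z)

# Each study's z has variance roughly 1 / (n - 3), so weight by (n - 3)
weights = [n - 3 for _, n in studies]
zs = [fisher_z(r) for r, _ in studies]
pooled_z = sum(w * z for w, z in zip(weights, zs)) / sum(weights)

print("Pooled correlation:", round(inverse_fisher_z(pooled_z), 3))
```

The weighting simply gives larger studies more influence on the pooled estimate, which is the basic logic behind combining 88 studies into one summary figure.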
How did politicians misunderstand the paper, and how did you respond?
I
suspect that there were some honest misunderstandings as well as some
other kinds. One issue is that many people seem to assume that whatever
psychologists are studying must be considered (by the researchers, at
least) abnormal or pathological. But that is simply untrue. Social,
cognitive, developmental, personality, and political psychologists are
all far more likely to study attitudes and behaviors that are normal,
ordinary, and mundane. We are primarily interested in understanding the
dynamics of everyday life. In any case, none of the variables that my
colleagues and I investigated had anything to do with psychopathology;
we were looking at variability in normal ranges within the population
and whether specific psychological characteristics were correlated with
political opinions. We tried to point some of these things out,
encouraging people to read beyond the title, and emphasizing that there
are advantages as well as disadvantages to being high vs. low on the
need for cognitive closure, cognitive complexity, sensitivity to threat,
and so on.
How has that paper been built on since?
I
am gratified and amazed at how many research teams all over the world
have taken our ideas and refined, extended, and otherwise built upon
them over the last decade. To begin with, a number of studies have
confirmed that political conservatism and right-wing orientation are
associated with various measures of system justification. And public
opinion research involving nationally representative samples from all
over the world establishes that the two core value dimensions that we
proposed to separate the right from the left—traditionalism (or
resistance to change) and acceptance of inequality—are indeed correlated
with one another, and they are generally (but not always) associated
with system justification, conservatism, and right-wing orientation.
Since
2003, numerous studies have replicated the correlations we observed
between epistemic motives, including personal needs for order,
structure, and closure and resistance to change, acceptance of
inequality, system justification, conservatism, and right-wing
orientation. Several studies find that liberals score higher than conservatives
on the need for cognition, which captures the individual’s chronic
tendency to enjoy effortful forms of thinking. This finding is
potentially important because individuals who score lower on the need
for cognition favor quick, intuitive, heuristic processing of new
information, whereas those who score higher are more likely to engage in
more elaborate, systematic processing (what Daniel Kahneman refers to
as System 1 and System 2 thinking, respectively). The relationship
between epistemic motivation and political orientation has also been
explored in research on nonverbal behavior and neurocognitive structure
and functioning.
Various labs have also replicated the
correlations we observed between existential motives, including
attention and sensitivity to dangerous and threatening stimuli, and
resistance to change, acceptance of inequality, and conservatism.
Ingenious experiments have demonstrated that temporary activation of
epistemic needs to reduce uncertainty or to attain a sense of control or
closure increases the appeal of system justification, conservatism, and
right-wing orientation. Experiments have demonstrated that temporary
activation of existential needs to manage threat and anxiety likewise
increases the appeal of system justification, conservatism, and
right-wing orientation, all other things being equal. These experiments
are especially valuable because they identify causal relationships
between psychological motives and political orientation.
Progress
has also been made in understanding connections between personality
characteristics and political orientation. In terms of “Big Five”
personality traits, studies involving students and nationally
representative samples of adults tell exactly the same story: Openness
to new experiences is positively associated with a liberal orientation,
whereas Conscientiousness (especially the need for order) is positively
associated with conservative orientation. In a few longitudinal studies,
childhood measures of intolerance of ambiguity, uncertainty, and
complexity as well as sensitivity to fear, threat, and danger have been
found to predict conservative orientation later in life. Finally, we
have observed that throughout North America and Western Europe,
conservatives report being happier and more satisfied than liberals, and
this difference is partially (but not completely) explained by system
justification and the acceptance of inequality as legitimate. As we
suspected many years ago, there appears to be an emotional or hedonic
cost to seeing the system as unjust and in need of significant change.
“Moral
foundations theory” has gotten a lot of popular press, as well as
serious attention in the research community, but for those not familiar
with it, could you give us a brief description, and then say something
about why it is problematic on its face (particularly in light of the
research discussed above)?
The basic idea is that there
are five or six innate (evolutionarily prepared) bases for human “moral”
judgment and behavior, namely fairness (which moral foundations
theorists understand largely in terms of reciprocity), avoidance of
harm, ingroup loyalty, obedience to authority, and the enforcement of
purity standards. My main problem is that sometimes moral foundations
theorists write descriptively as if these are purely subjective
considerations—that people think and act as if morality requires us to
obey authority, be loyal to the group, and so on. I have no problem with
that descriptive claim—although this is surely only a small subset of
the things that people might think are morally relevant—as long as we
acknowledge that people could be wrong when they think and act as if
these are inherently moral considerations.
At other times,
however, moral foundations theorists write prescriptively, as if these
“foundations” should be given equal weight, objectively speaking, that
all of them should be considered virtues, and that anyone who rejects
any of them is ignoring an important part of what it means to be a moral
human being. I and others have pointed out that many of the worst
atrocities in human history have been committed not merely in the name
of group loyalty, obedience to authority, and the enforcement of purity
standards, but because of a faithful application of these principles.
For 24 centuries, Western philosophers have concluded that treating
people fairly and minimizing harm should, when it comes to morality,
trump group loyalty, deference to authority, and purification. In many
cases, behaving ethically requires impartiality and disobedience and the
overcoming of gut-level reactions that may lead us toward nepotism,
deference, and acting on the basis of disgust and other emotional
intuitions. It may be difficult to overcome these things, but isn’t this
what morality requires of us?
There have been a number of initial critical studies published, which you cite in this new paper. What have they shown?
Part
of the problem is that moral foundations theorists framed their work,
for rhetorical purposes, in strong contrast to other research in social
and political psychology, including work that I’ve been associated with.
But this was unnecessary from the start and, in retrospect, entirely
misleading. They basically said: “Past work suggests that conservatism
is motivated by psychological needs to reduce uncertainty and threat and
that it is associated with authoritarianism and social dominance, but
we say that it is motivated by genuinely moral—not immoral or
amoral—concerns for group loyalty, obedience to authority, and purity.”
This has turned out to be a false juxtaposition on many levels.
First,
researchers in England and the Netherlands demonstrated that threat
sensitivity is in fact associated with group loyalty, obedience to
authority, and purity. For instance, perceptions of a dangerous world
predict the endorsement of these three values, but not the endorsement
of fairness or harm avoidance. Second, a few research teams in the U.S.
and New Zealand discovered that authoritarianism and social dominance
orientation were positively associated with the moral valuation of
ingroup, authority, and purity but not with the valuation of fairness
and avoidance of harm. Psychologically speaking, the three so-called
“binding foundations” look quite different from the two more humanistic
ones.
What haven’t these earlier studies tackled that you wanted to address? And why was this important?
These
other studies suggested that there was a reasonably close connection
between authoritarianism and the endorsement of ingroup, authority, and
purity concerns, but they did not investigate the possibility that
individual differences in authoritarianism and social dominance
orientation could explain, in a statistical sense, why conservatives
value ingroup, authority, and purity significantly more than liberals do
and—just as important, but often glossed over in the literature on
moral foundations theory—why liberals value fairness and the avoidance
of harm significantly more than conservatives do.
How did
you go about tackling these unanswered questions? What did you find and
how did it compare with what you might have expected?
There
was a graduate student named Matthew Kugler (who was then studying at
Princeton) who attended a friendly debate about moral foundations theory
that I participated in and, after hearing my remarks, decided to see
whether the differences between liberals and conservatives in terms of
moral intuitions would disappear after statistically adjusting for
authoritarianism and social dominance orientation. He conducted a few
studies and found that they did, and then he contacted me, and we ended up
collaborating on this research, collecting additional data using newer
measures developed by moral foundations theorists as well as measures of
outgroup hostility.
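For a concrete picture of what “statistically adjusting” means here: the idea is to compare the liberal/conservative difference in endorsement of the binding foundations before and after adding right-wing authoritarianism (RWA) and social dominance orientation (SDO) as covariates in a regression. The sketch below uses simulated data and hypothetical variable names; it is not the dataset or analysis from the Kugler and Jost studies.

```python
# Illustrative sketch: does the slope on political orientation shrink once
# RWA and SDO are included as covariates? Data are simulated, not real.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
conservatism = rng.normal(size=n)                          # standardized self-placement
rwa = 0.6 * conservatism + rng.normal(scale=0.8, size=n)   # toy relationships
sdo = 0.4 * conservatism + rng.normal(scale=0.9, size=n)
# In these toy data, binding-foundation scores depend on RWA and SDO rather
# than on conservatism directly, so adjustment should shrink the slope.
binding = 0.5 * rwa + 0.3 * sdo + rng.normal(size=n)

df = pd.DataFrame(dict(conservatism=conservatism, rwa=rwa, sdo=sdo, binding=binding))

unadjusted = smf.ols("binding ~ conservatism", data=df).fit()
adjusted = smf.ols("binding ~ conservatism + rwa + sdo", data=df).fit()

print("Unadjusted slope:", round(unadjusted.params["conservatism"], 3))
print("Adjusted slope:  ", round(adjusted.params["conservatism"], 3))
```

If the orientation coefficient drops toward zero in the adjusted model, that is the statistical sense in which RWA and SDO “explain” the ideological difference in moral intuitions.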
What does it mean for moral foundations theory?
To
me, it means that scholars may need to clean up some of the conceptual
confusion in this area of moral psychology, and researchers need to face
up to the fact that some moral intuitions (things that people may think
are morally relevant and may use as a basis for judging others) may
lead people to behave in an unethical, discriminatory manner. But we
need behavioral research, such as studies of actual discrimination, to
see if this is actually the case. So far the evidence is mainly
circumstantial.
And what future research is to come along these lines from you?
One
of my students decided to investigate the relationship between system
justification and its motivational antecedents, on one hand, and the
endorsement of moral foundations, on the other. This work also suggests
that the rhetorical contrast between moral foundations theory and other
research in social psychology was exaggerated. We are finding that, of
the variables we have included, empathy is the best psychological
predictor of endorsing fairness and the avoidance of harm as moral
concerns, whereas the endorsement of group loyalty, obedience to
authority, and purity concerns is indeed linked to epistemic motives to
reduce uncertainty (such as the need for cognitive closure) and
existential motives to reduce threat (such as death anxiety) and to
system justification in the economic domain. So, at a descriptive level,
moral foundations theory is entirely consistent with system
justification theory.
Finally, I’ve only asked some
selective questions, and I’d like to conclude by asking what I always
ask in interviews like this—What’s the most important question that I
didn’t ask? And what’s the answer to it?
Do I think that
social science can help to address some of the problems we face as a
society? Yes, I am holding out hope that it can, at least in the long
run, and hoping that our leaders will come to realize this eventually.
Our
conversation leads me to want to add one more question. Haidt’s basic
argument could be characterized as a combination of anthropology (look at
all the “moral principles” different cultures have advanced) and the
broad equation of morality with the restraint of individual
self-interest and/or desire. Your paper, by bringing to attention the roles
of social dominance orientation (SDO) and right-wing authoritarianism (RWA),
throws into sharp relief a key problem with such a
formulation—one that Southern elites have understood for centuries:
wholly legitimate individual self-interest (and even morality—adequately
feeding and providing a decent future for one’s children, for
example) can easily be overridden by appeals to heinous “moral
concerns,” such as “racial purity” or, more broadly, upholding the
“God-given racial order.”
Yet, Haidt does seem to
have an important point that individualist moral concerns leave
something unsaid about the value of the social dimension of human
experience, which earlier moral traditions have addressed. Do you see
any way forward toward developing a more nuanced account of morality
that benefits from the criticism that harm-avoidance and fairness may be
too narrow a foundation without embracing the sorts of problematic
alternatives put forward so far?
Yes, and there is a long
tradition of theory and research on social justice—going all the way
back to Aristotle—that involves a rich, complex, nuanced analysis of
ethical dilemmas that goes well beyond the assumption that fairness is
simply about positive and negative reciprocity.
Without question,
we are a social species with relational needs and dependencies, and how
we treat other people is fundamental to human life, especially when it
comes to our capacity for cooperation and social organization. When we
are not engaging in some form of rationalization, there are clearly
recognizable standards of procedural justice, distributive justice,
interactional justice, and so on. Even within the domain of distributive
justice—which has to do with the allocation of benefits and burdens in
society—there are distinct principles of equity, equality, and need, and
in some situations these principles may be in conflict or
contradiction.
How to reconcile or integrate these various
principles in theory and practice is no simple matter, and this, it
seems to me, is what we should focus on working out. We should also
focus on solving other dilemmas, such as how to integrate utilitarian,
deontological, virtue-theoretical, and social contractualist forms of
moral reasoning, because each of these—in my view—has some legitimate
claim on our attention as moral agents.