Monday, May 4, 2026


How the Supreme Court Demolished the Voting Rights Act

For two decades, the conservative Justices worked to eliminate a bulwark of the civil-rights era.

Photograph by Annabelle Gordon / Bloomberg / Getty

For decades, the Supreme Court has steadily worked to transform the concept of discrimination based on race, from the civil-rights-era vision that the government has an obligation to remedy and prevent racial discrimination to a view that the legal and moral wrong is to see race at all and make any decisions in consideration of it. As Chief Justice John Roberts put it in a 2007 ruling that disallowed a race-conscious measure to address de-facto segregation in public schools, “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” On Wednesday, the Court issued its long-awaited decision in Louisiana v. Callais, a case about drawing electoral districts that embodied the clash between those two viewpoints. In Justice Samuel Alito’s opinion for the six-Justice majority, the Court’s idea of racial equality turned out to correspond to a downright dystopian vision of our electoral democracy. The consequence is that the Voting Rights Act of 1965—a landmark statute that was intended to insure racially equal electoral opportunity—has been read out of existence.

Ratified a few years after the Civil War, the Fifteenth Amendment provides that citizens’ right to vote “shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.” The amendment gives Congress the power to enact “appropriate legislation” to enforce it, and—after a century marked by racial violence and intimidation, and myriad schemes, supposedly race-neutral, to disenfranchise Black citizens—Congress used that authority to enact the Voting Rights Act of 1965, to fight against the constant attempts to suppress the Black vote. Section 2 of the V.R.A. initially prohibited states from imposing any rules “to deny or abridge” the right to vote “on account of race or color.” In 1980, in a case in which Black plaintiffs alleged that the city of Mobile, Alabama’s at-large election system for choosing members of a commission diluted Black votes, the Supreme Court interpreted the V.R.A. provision to mean that a state does not violate it unless its action is “motivated by a discriminatory purpose.” In response, Congress changed the statute’s text to what it says today: that states must not implement any electoral rule that “results in a denial or abridgement” of voting rights on account of race.

By Section 2’s terms, a violation can be proved by showing, based on the “totality of the circumstances,” that a state’s electoral processes “are not equally open to participation” by members of a racial group, in that they “have less opportunity than other members of the electorate to participate in the political process and to elect representatives of their choice.” Section 2 explicitly indicates that one of the circumstances to consider in determining a violation is the extent to which members of the racial group “have been elected to office in the State.” It also specifies that racial groups do not have a right to be “elected in numbers equal to their proportion in the population.” In other words, the V.R.A. does not require racially proportional representation, yet it makes clear that equal electoral opportunity means the chance to elect one’s preferred representatives, who may include representatives of one’s racial group. If, for example, a state’s Black voters are concentrated in a city, but the state’s electoral map is drawn to disperse them into multiple districts, where white majorities can be expected to vote against their preferences, Black voters may have no opportunity “to elect representatives of their choice.” Thus, the V.R.A.’s equal-opportunity requirement has meant that a state may be required to create one or more majority-minority districts in order to provide racial minorities a chance to elect their preferred representatives.


For the past forty years, courts have had to acknowledge that Congress in Section 2 meant to address racially discriminatory effects on voting, regardless of discriminatory intent. But, as the Supreme Court became increasingly clear in its view that not being color-blind amounts to racial discrimination, a Catch-22 developed, wherein states’ attempts to avoid violating the V.R.A. on one side might risk a constitutional violation on the other side, with each move resulting in a possible finding of racial discrimination. In 2022, a federal district court found that Louisiana had likely violated Section 2 of the V.R.A. by creating only one majority-Black electoral district in its map drawn after the 2020 census. But when the state then attempted to comply by creating a second majority-Black district, a group of non-Black voters challenged the new map as an allegedly unconstitutional racial gerrymander. A three-judge federal district court held that the map was a racial gerrymander that violates the equal-protection clause of the Fourteenth Amendment. (Federal law provided for direct appeal to the Supreme Court.)

This week, the Supreme Court affirmed that ruling, holding that Louisiana’s map with the second majority-Black district violated the Constitution. The Court called the drawing of the district “racial discrimination” for which the state had no “compelling interest”—because the V.R.A., when “properly interpreted,” the Court concluded, did not require it to exist. (The Court did not say that the first majority-Black district was unconstitutional, but it left little reason to assume that it couldn’t be successfully challenged as well.) The Court reached this decision by narrowing the meaning of Section 2 to what it was before Congress amended the statute in 1982. The Court’s new interpretation is that the only way for a state to violate Section 2 is to intentionally discriminate, despite Congress having made clear through the statutory amendment that Section 2’s concern was discriminatory effect, not intent. Justice Alito justified this reading by asserting that, because the Fifteenth Amendment itself can only be violated by intentional discrimination, Congress would have exceeded its Fifteenth Amendment authority if it had legislated to prohibit “mere disparate impact.”

This drastically diminished interpretation of the V.R.A., combined with the Court’s long-building view that majority-minority districts are forms of “racial discrimination” that the Constitution “almost never permits,” has momentous practical consequences for the electoral system. After Callais, every existing majority-minority district is vulnerable to being deemed an unconstitutional act of racial discrimination. We can immediately expect a cascade of lawsuits challenging states’ districting maps, and some states may rid themselves of majority-minority districts without waiting to be sued. Though Alito did not say majority-minority districts could never be required, the upshot of his opinion is that it will be impossible, except in the rarest circumstances, for a plaintiff to show that a state’s refusal to create—or its elimination of—a majority-minority district has intentionally discriminated based on race.

Justice Alito didn’t stop there. He went on to write what amounts to an instruction manual for Republican-led state legislatures on how best to justify districting practices that have a clear discriminatory effect on Black voters. His unmistakable advice was to use the fact that Black voters tend to vote Democratic to defend drawing a map that severely weakens Black voting power, by framing the districting as having a partisan—rather than a racial—purpose. Alito made the point that, because of the 2019 case Rucho v. Common Cause, which established that federal courts will not hear constitutional challenges to partisan gerrymandering, states are free to draw districts to “achieve partisan advantage,” even to an extreme degree—say, to insure that every electoral district in a state is a lock for a Republican victory. Concern for the rights of Black voters barely broke the surface of Alito’s opinion. He did, however, zealously guard states’ prerogative to gerrymander, warning that “litigants cannot circumvent” Rucho by “dressing their political-gerrymandering claims in racial garb.” Alito’s message to states is: Go ahead and gerrymander the hell out of your electoral districts in ways that effectively eliminate the voting power of racial minorities. The Court has your back.

In a piercing and animated dissenting opinion, Justice Elena Kagan, joined by Justices Sonia Sotomayor and Ketanji Brown Jackson, sized up the Court’s decision as “straight-facedly” holding that “the Voting Rights Act must be brought low to make the world safe for partisan gerrymanders.” What will ensue, she predicted, is the worsening of how “this country’s two major parties compete in a race to the bottom.” Her dissent was not only concerned about partisanship; Kagan warned that the Callais decision threatens the fundamentals of how our constitutional democracy works. It is Congress’s job to make laws, and it did so in the Voting Rights Act. The Court’s job is to interpret the law—not to rewrite a statute that Justices do not like. As Kagan recounted, the Court’s conservative majority “has had its sights set on the Voting Rights Act” since 2013, when it eviscerated Section 5 of the statute, which required jurisdictions with a history of voting discrimination to seek federal preclearance of any new voting rules. And in 2021 the Court required Section 2 plaintiffs challenging burdens on casting ballots to focus on discriminatory intent rather than discriminatory effect, with the result that no Section 2 challenge since then has succeeded. In the Court’s inexorable march “to destroy” the V.R.A., Callais, Kagan wrote, was the final piece in the “now-completed demolition of the Voting Rights Act.” The statute, she continued, “was born of the literal blood of Union soldiers and civil rights marchers. It ushered in awe-inspiring change, bringing this Nation closer to fulfilling the ideals of democracy and racial equality. And it has been repeatedly, and overwhelmingly, reauthorized by the people’s representatives in Congress. Only they have the right to say it is no longer needed—not the Members of this Court.”


Through its long-sought destruction of the Voting Rights Act, the Supreme Court has provided an illustration of how our democracy fails. It has usurped the work of Congress, and effectively vetoed legislation that was intended to achieve the democratic ideals of a multiracial people. Justice Kagan has sometimes been viewed as a strategist rather than a polemicist, who tries to forge agreements with the Court’s more moderate conservative members. But, in this dissent—which may well be remembered as her greatest—she freely displayed her virtuosic incandescence. She concluded, “I dissent because Congress elected otherwise. I dissent because the Court betrays its duty to faithfully implement the great statute Congress wrote. I dissent because the Court’s decision will set back the foundational right Congress granted of racial equality in electoral opportunity. I dissent.” ♦


Two Hundred and Fifty Years of Complicated Commemorations

Donald Trump’s aversion to admitting fault suggests that we will not likely see events that grapple with the nuanced nature of the nation’s history this July 4th.
Photo illustration by Cristiana Couceiro; Source photographs from Getty

Commemorations often tell us as much about the times in which they are being held as they do about the events they are commemorating. The two-hundred-and-fiftieth anniversary of the adoption of the Declaration of Independence, this July 4th, falls at a moment when the nation is being led by a twice-impeached President amid a widely recognized crisis of democracy—last Wednesday, the Supreme Court’s ruling in Louisiana v. Callais further weakened an already enfeebled Voting Rights Act—and in a social climate whose volatility might be measured by acts of political violence. The most dramatic recent example came with the frantic apprehension of an armed man in the hotel where the White House Correspondents’ Association dinner was being held. The alleged gunman fortunately did not reach the ballroom where the President and the other attendees were gathered, but he has been charged with an assassination attempt—the third made against Donald Trump in two years. This is the backdrop against which our recollections of the nation’s origins are taking place.

Commemoration has been a complicated undertaking in this country from the start. On July 4, 1826, President John Quincy Adams decided to forgo making a major speech and instead rode by carriage in a parade to the Capitol, where he listened to celebratory remarks and a reading of the Declaration. He would later find out that two of his predecessors—his father, John Adams, and Thomas Jefferson—had died that day. Once fierce rivals, the two men were responsible for the country’s first peaceful transfer of power between parties, after Jefferson and his Democratic-Republican Party defeated Adams and the Federalists in the election of 1800.

The nation’s centennial, in 1876, arrived in the wake of a civil war that had left some seven hundred thousand Americans dead. The Declaration’s insurrectionist contention—that people, when unjustly provoked, have the right to dissolve their government—hung heavily in a country that had just witnessed the eleven states of the Confederacy make the same argument. That year, President Ulysses S. Grant, who had commanded the Union forces, spoke in his annual address to Congress of the great economic and social progress that the United States had made during its first hundred years. But, though the guns of war were a decade in the past, the nation had not escaped the spectre of conflict. The Presidential election of 1876 pitted the Democrat Samuel J. Tilden against the Republican Rutherford B. Hayes, but disputed returns from Florida, Louisiana, and South Carolina, where there had been violence directed at Black voters, cast the results into doubt. With neither candidate gaining a clear majority in the Electoral College, the election was turned over to a special commission, which declared Hayes the winner. A compromise was then struck, insuring that Hayes could take office in exchange for the Republicans’ promising to cease the federal occupation of the former Confederate states—thereby ending the period known as Reconstruction. Grant was compelled to celebrate the nation’s hundredth anniversary just as its boldest experiment in democracy to date was being dismantled.

The nation’s bicentennial, the only major commemoration in living memory, was framed by the exigencies of the Cold War and by the scarcely healed scars of the Vietnam War, as well as by Watergate and President Richard Nixon’s consequent resignation. President Gerald Ford’s Fourth of July address hit the boilerplate notes of progress and enduring values in a speech so anodyne that his audience might have overlooked the fact that the national festivities were being led by a man who had been elected to neither the Vice-Presidency (Nixon appointed him, in 1973) nor the Presidency (he assumed that office when Nixon resigned). The cumulative lesson of all this recollection is that, in commemorating the past, we may not like all that it calls attention to in the present.

There are other contexts worth considering at this moment. The semiquincentennial inherently underscores the comparative youth of the United States; Italy, Greece, China, and India count their historic legacies in millennia, not centuries. At the same time, the past two hundred and fifty years of popular self-government represent a global milestone: the Declaration of Independence marks the birth of the world’s oldest constitutional democracy. Those divergent truths point to the sober reality that the vast majority of people who have existed on this planet lived under some form of tyranny. The events of July 4, 1776, signalled a partial departure from a miserably well-worn path in human history. In that light, two hundred and fifty years seems like a very long time.


The meaning of the Declaration was not entirely clear at the outset, even to those who wrote or signed it. With the haphazard spelling of the era, the signers refer to themselves in the final paragraph of the document as “Representatives of the united States.” The use of the lowercase with the word “united” suggests that it is serving as an adjective, not as a herald of the new nation’s name. (A subsequent line refers to the confederation as “United Colonies”—there were still things to be ironed out.) Notably, it was not until after the end of the Civil War that American grammar reliably described the United States in the singular: “the United States is” rather than “the United States are.” Other editorial decisions in the founding document were less ambiguous. Jefferson’s first draft contained a hundred-and-sixty-eight-word denunciation of the transatlantic slave trade, which was excised from the final text. A seed of conflict was sown in that moment.

Donald Trump’s personal aversion to admitting fault suggests that we will not likely see commemorations that grapple with the nuanced, complicated nature of the country’s founding and subsequent history this Fourth of July. What is more likely is that, fifty years hence, Americans will observe that this anniversary took place during a polarizing time, but that such circumstances were not unprecedented. One version of the nation’s history anchors itself in the efforts to navigate those tempests, to better the imperfect tools bequeathed to us by imperfect men. This more mature approach to our past recognizes that national greatness does not exist without a simultaneous reckoning with national failure—and that this undertaking, rather than diminishing American standing, is the surest path toward a country where “united” is as much an aspiration as an adjective. ♦

Sunday, May 3, 2026

On Jung


Jung’s Five Pillars of a Good Life

The great Swiss psychoanalyst left us a surprisingly practical guide to being happier.

Illustration by Jan Buchczik

In the world of popular psychology, the work of one giant figure is hard to avoid: Carl Jung, the onetime associate of Sigmund Freud who died more than 60 years ago. If you think you have a complex about something, the Swiss psychiatrist invented that term. Are you an extrovert or an introvert? Those are his coinages, too. Persona, archetype, synchronicity: Jung, Jung, Jung.

When it comes to happiness, though, Jung can seem a bit of a downer. “‘Happiness,’” he wrote, “is such a remarkable reality that there is nobody who does not long for it.” So far, so good. But he does not leave it there: “And yet there is not a single objective criterion which would prove beyond all doubt that this condition necessarily exists.”

Clearly, this observation should not discourage any serious student of happiness. On the contrary, Jung is stating the manifest truth that we cannot lay hold of any blissful end state of pure happiness, because every human life is bound to involve negative emotions, which in fact arose to alert us to threats and keep us safe. Rather, the objective should be progress—or, in the words of Oprah Winfrey, my co-author on our recent book, Build the Life You Want, “happierness.”

If Jung was a happiness skeptic in some sense, however, he was by no means a denialist. In 1960, as he neared the end of his long life, Jung shared his own strategy for realizing that goal of progress. Refined with the aid of modern social science, Jung’s precepts might be just what you’re looking for in your life.

Jung believed that making progress toward happiness was built on five pillars.

1. Good physical and mental health
Jung believed that getting happier required soundness of mind and body. His thesis is supported by plenty of research. For example, the longest-running study of happiness—the Harvard Study of Adult Development—has shown that four of the biggest predictors of a senior citizen’s well-being are not smoking excessively, drinking alcohol moderately if at all, maintaining a healthy body weight, and exercising. Even more important for well-being is good mental health. Indeed, one study from 2013 showed that poor mental health among Britons, Germans, and Australians predicted nearly two to roughly six times as much misery as poor physical health did.

This raises what might seem like a nitpick with Jung’s contention: Good health practices seem not to raise happiness, but rather to lower unhappiness. Today, many emotion researchers have uncovered evidence of a phenomenon that Jung did not conceive of: Negative and positive emotions appear to be separable phenomena and not opposites; well-being requires a focus on each. Furthermore, researchers have identified how activities such as physical exercise can interrupt the cycle of negative emotion during moments of heightened stress, by helping moderate cortisol-hormone levels. I have found in my own work that this helps explain why people with naturally low levels of negative emotion tend to struggle with staying on a regular exercise regimen: They may feel less benefit to their well-being from going to the gym than people naturally higher in negative feelings do.

2. Good personal and intimate relations, such as those of marriage, family, and friendships
The intertwined notions that close relationships are at the heart of well-being and that cultivating them will reliably increase happiness are unambiguously true. Indeed, of the four best life investments for increasing personal satisfaction, two involve family and friendships (the others are in faith or philosophy, and meaningful work; more on these in a moment). And as for marriage, an institution that has taken a beating over recent decades, more and more evidence is piling up from scholars that being wed makes the majority of people happier than they otherwise would be, as the University of Virginia sociologist Brad Wilcox has argued. This research seemed so conclusive to Wilcox that he titled his recent book, simply, Get Married. Jung himself was married to his wife, Emma, for 52 years, until her death at the age of 73.

The Harvard Study of Adult Development comes to one conclusion more definitively than any other. In the words of my Harvard colleague Robert Waldinger, who has directed the project for nearly two decades, and his co-author, Marc Schulz, “Good relationships keep us healthier and happier. Period.” Waldinger’s predecessor running the study, George Vaillant, was just as unequivocal about the evidence: “Happiness is love. Full stop.”

3. Seeing beauty in art and in nature
Jung believed that happiness required one to cultivate an appreciation for beautiful things and experiences. Although this might sound intuitively obvious, the actuality is more complicated.

Long before I focused my scholarly life on happiness, I was dedicated to art and beauty. My earliest memories are of painting with my artist mother; I learned to read music before written language; I made my living as a classical musician from ages 19 to 31. News flash: Artists are generally not the world’s most blissfully satisfied people. In a 1992 study from Britain, researchers found that performing artists reported depression at higher rates than the control group. At some point, I will write a book not on the art of happiness but on the very troublesome happiness of art.

Among nonartists, however, the issue is somewhat simpler and in line with Jung’s thinking. First, a big difference exists between beauty in nature and beauty in art. Specifically, engagement with nature’s beauty is known, across different cultures, to enhance well-being. Second, with aesthetic experience, happiness depends on the artistic mood. For example, experiments have shown that if you listen to happy music on your own, it makes you feel happier; if you listen to sad music while alone, it makes you feel sadder.

4. A reasonable standard of living and satisfactory work
As with physical and mental health, employment and income seem tied more to eliminating unhappiness than to raising happiness. For one thing, scholars have long shown that unemployment is a reliable source of misery: Depressive symptoms typically rise when people, both men and women, are unemployed. This cannot be explained simply by the lack of material and social resources that typically accompanies joblessness; rather, work itself helps protect mental health.

But if we can upgrade “satisfactory work” in Jung’s list to “meaningful work,” then positive gains in happiness do come into play. The two elements that make work meaningful for most people are earned success (a sense of accomplishing something valuable) and service to others. These can be achieved in almost any job.

The relationship between money and happiness is a hotly contested topic; older studies show that well-being tops out at relatively low income levels, but more recent studies show that such contentment continues to rise for much higher incomes. My own assessment of the evidence is that money alone cannot buy happiness, nor can spending money to acquire possessions make one happy; but having the money to pay for experiences with loved ones, to free up time to spend on meaningful activities, and to support good causes does enhance happiness.

5. A philosophical or religious outlook that fosters resilience
Jung argued that a good life requires a way of understanding why things happen the way they do, an ability to zoom out from the tedious quotidian travails of life and put events—including inevitable suffering—into perspective. The son of a pastor, Jung was deeply Christian in his worldview, as his own words published many years ago in The Atlantic make clear: “For it is not that ‘God’ is a myth, but that myth is the revelation of a divine life in man.” He did not insist that his spiritual path was the only one—“I do not imagine that in my reflections,” he wrote, “I have uttered a final truth”—and allowed that even a nonreligious, purely philosophical attitude could do. But everyone, he thought, should have some sense of transcendent belief or higher purpose.