Gordon Wood, now retired from Brown University, is the dean of early American historians. This short book summarizes his views on what he calls "Constitutionalism," that is, the discussion of constitutional issues, whose heyday was the late 1700s and early 1800s.
No doubt Wood does not believe that the War was about protecting slavery. The Somerset decision was barely noticed in the colonies.
Wood writes about a period, the late 18th and early 19th centuries, which was the most fruitful time in our history for the discussion of constitutional issues, led by the debate over approval of the new Constitution.
The author says that when Madison observed that the biggest division at the convention was not between the small states and the large states but between the slave states and the non-slave states, the remark was a "tactical feint" to get the discussion away from small vs. large states. He provides no footnote for this contention. I am dubious. P. 100
The revolution led to antislavery activity. This is news to me. P. 100
The revolution heightened awareness of slavery, so that it could no longer be taken for granted without guilt. The possibility of ending slavery arose during the revolutionary period. It didn't last, of course. P. 102
The revolution led to the end of white indentured servitude. P. 106
The decline in servitude made black slavery more conspicuous. Owning slaves was put on the defensive. P. 107
The author says that little attention was paid to the Somerset decision. P. 108
Nearly everywhere there was a mounting sense during this time that slavery was on its way out. All an illusion; slavery was moving toward its greatest expansion. P. 110
Twenty thousand slaves joined the British side, a great liberation. P. 111
The tragically mistaken belief that slavery was dying was used to justify the Constitution's support of slavery, as in the infamous 3/5 clause. Compromising with slavery was seen as better than dismembering the Union. P. 120
Despite the protections for slavery in the Constitution, the document does not say that slaves are property; the word "slave" is never used, only circumlocutions. P. 121
Wood seems to glory in detailing how the war led Northern states to outlaw slavery, but even though the convention may have thought slavery was on its way out, they were tragically wrong.
The chief factor leading to Philadelphia, according to Wood, was an excess of democracy, especially paper money; Rhode Island is the prime example. Wood seems to say that RI prospered with paper money, with banks issuing the paper. Jefferson and Madison did not understand banks.
Rhode Island, the only state not to attend the Constitutional Convention, was a colony of oddballs with its own unique style of democracy. James Madison despised the state. In the Epilogue, Wood notes that for elitists like Madison, Rhode Island, with its excessive issue of paper money, was a symbol of all that was wrong with the country in the 1780s.
Gordon Wood is professor emeritus at Brown University and author of the Pulitzer Prize-winning book The Radicalism of the American Revolution, as well as Empire of Liberty: A History of the Early Republic, 1789–1815, and dozens of other books and articles on the colonial period, the American Revolution and the early republic.
Q. Let me begin by asking you your initial reaction to the 1619 Project. When did you learn about it?
A. Well, I was surprised when I opened my Sunday New York Times in August and found the magazine containing the project. I had no warning about this. I read the first essay by Nikole Hannah-Jones, which alleges that the Revolution occurred primarily because of the Americans’ desire to save their slaves. She claims the British were on the warpath against the slave trade and slavery and that rebellion was the only hope for American slavery. This made the American Revolution out to be like the Civil War, where the South seceded to save and protect slavery, and that the Americans 70 years earlier revolted to protect their institution of slavery. I just couldn’t believe this.
I was surprised, as many other people were, by the scope of this thing, especially since it’s going to become the basis for high school education and has the authority of the New York Times behind it, and yet it is so wrong in so many ways.
Q. I want to return to the question of slavery and the American Revolution, but first I wanted to follow up, because you said you were not approached. Yet you are certainly one of the foremost authorities on the American Revolution, which the 1619 Project trains much of its fire on.
A. Yes, no one ever approached me. None of the leading scholars of the whole period from the Revolution to the Civil War, as far as I know, have been consulted. I read the Jim McPherson interview and he was just as surprised as I was.
Q. Can you discuss the relationship between the American Revolution and the institution of slavery?
A. One of the things that I have emphasized in my writing is how many southerners and northerners in 1776 thought slavery was on its last legs and that it would naturally die away. You can find quotation after quotation from people seriously thinking that slavery was going to wither away in several decades. Now we know they couldn’t have been more wrong. But they lived with illusions and were so wrong about so many things. We may be living with illusions too. One of the big lessons of history is to realize how the past doesn’t know its future. We know how the story turned out, and we somehow assume they should know what we know, but they don’t, of course. They don’t know their future any more than we know our future, and so many of them thought that slavery would die away, and at first there was considerable evidence that that was indeed the case.
At the time of the Revolution, the Virginians had more slaves than they knew what to do with, so they were eager to end the international slave trade. But the Georgians and the South Carolinians weren’t ready to do that yet. That was one of the compromises that came out of the Constitutional Convention. The Deep South was given 20 years to import more slaves, but most Americans were confident that the despicable transatlantic slave trade was definitely going to end in 1808.
Q. Under the Jefferson administration?
A. Yes, it was set in the Constitution at 20 years, but everyone knew this would be ended because nearly everyone knew that this was a barbaric thing, importing people and so on. Many thought that ending the slave trade would set slavery itself on the road to extinction. Of course, they were wrong.
I think the important point to make about slavery is that it had existed for thousands of years without substantial criticism, and it existed all over the New World. It also existed elsewhere in the world. Western Europe had already more or less done away with slavery. Perhaps there was nothing elsewhere comparable to the plantation slavery that existed in the New World, but slavery was widely prevalent in Africa and Asia. There is still slavery today in the world.
And it existed in all of these places without substantial criticism. Then suddenly in the middle of the 18th century you begin to get some isolated Quakers coming out against it. But it’s the American Revolution that makes it a problem for the world. And the first real anti-slave movement takes place in North America. So this is what’s missed by these essays in the 1619 Project.
Q. The claim made by Nikole Hannah-Jones in the 1619 Project that the Revolution was really about founding a slavocracy seems to be coming from arguments made elsewhere that it was really Great Britain that was the progressive contestant in the conflict, and that the American Revolution was, in fact, a counterrevolution, basically a conspiracy to defend slavery.
A. It’s been argued by some historians, people other than Hannah-Jones, that some planters in colonial Virginia were worried about what the British might do about slavery. Certainly, Dunmore’s proclamation in 1775, which promised the slaves freedom if they joined the Crown’s cause, provoked many hesitant Virginia planters to become patriots. There may have been individuals who were worried about their slaves in 1776, but to see the whole revolution in those terms is to miss the complexity.
In 1776, Britain, despite the Somerset decision, was certainly not the great champion of antislavery that the Project 1619 suggests. Indeed, it is the northern states in 1776 that are the world’s leaders in the antislavery cause. The first anti-slavery meeting in the history of the world takes place in Philadelphia in 1775. That coincidence I think is important. I would have liked to have asked Hannah-Jones, how would she explain the fact that in 1791 in Virginia at the College of William and Mary, the Board of Visitors, the board of trustees, who were big slaveholding planters, awarded an honorary degree to Granville Sharp, who was the leading British abolitionist of the day. That’s the kind of question that should provoke historical curiosity. You ask yourself what were these slaveholding planters thinking? It’s the kind of question, the kind of seeming anomaly, that should provoke a historian into research.
The idea that the Revolution occurred as a means of protecting slavery—I just don’t think there is much evidence for it, and in fact the contrary is more true to what happened. The Revolution unleashed antislavery sentiments that led to the first abolition movements in the history of the world.
Q. In fact, those who claim that the American Revolution was a counterrevolution to protect slavery focus on the timing of the Somerset ruling of 1772, which held that slavery wasn’t supported by English common law, and Dunmore’s promise to free slaves who escape their masters.
A. To go from these few facts to create such an enormous argument is a problem. The Somerset decision was limited to England, where there were very few slaves, and it didn’t apply to the Caribbean. The British don’t get around to freeing the slaves in the West Indies until 1833, and if the Revolution hadn’t occurred, might never have done so then, because all of the southern colonies would have been opposed. So supposing the Americans hadn’t broken away, there would have been a larger number of slaveholders in the greater British world who might have been able to prolong slavery longer than 1833. The West Indies planters were too weak in the end to resist abolition. They did try to, but if they had had all those planters in the South still being part of the British Empire with them, that would have made it more difficult for the British Parliament to move toward abolition.
Q. Hannah-Jones refers to America’s founding documents as its founding myths…
A. Of course, there are great ironies in our history, but the men and the documents transcend their time. That Jefferson, a slaveholding aristocrat, has been—until recently—our spokesman for democracy, declaring that all men are created equal, is probably the greatest irony in American history. But the document he wrote and his confidence in the capacities of ordinary people are real, and not myths.
Jefferson was a very complicated figure. He took a stand against slavery as a young man in Virginia. He spoke out against it. He couldn’t get his colleagues to go along, but he was certainly courageous in voicing his opposition to slavery. Despite his outspokenness on slavery and other enlightened matters, his colleagues respected him enough to keep elevating him to positions in the state. His colleagues could have, as we say today, “cancelled” him if they didn’t have some sympathy for what he was saying.
Q. And after the Revolution?
A. American leaders think slavery is dying, but they couldn’t have been more wrong. Slavery grows stronger after the Revolution, but it’s concentrated in the South. North of the Mason-Dixon line, in every northern state by 1804, slavery is legally put on the road to extinction. Now, there’s certain “grandfathering in,” and so you do have slaves in New Jersey as late as the eve of the Civil War. But in the northern states, the massive movement against slavery was unprecedented in the history of the world. So to somehow turn this around and make the Revolution a means of preserving slavery is strange and contrary to the evidence.
As a result of the Revolution, slavery is confined to the South, and that puts the southern planters on the defensive. For the first time they have to defend the institution. If you go into the colonial records and look at the writings and diary of someone like William Byrd, who’s a very distinguished and learned person—he’s a member of the Royal Society—you’ll find no expressions of guilt whatsoever about slavery. He took his slaveholding for granted. But after the Revolution that’s no longer true. Southerners began to feel this anti-slave pressure now. They react to it by trying to give a positive defense of slavery. They had no need to defend slavery earlier because it was taken for granted as a natural part of a hierarchical society.
We should understand that slavery in the colonial period seemed to be simply the most base status in a whole hierarchy of dependencies and degrees of unfreedom. Indentured servitude was prevalent everywhere. Half the population that came to the colonies in the 18th century came as bonded servants. Servitude, of course, was not slavery, but it was a form of dependency and unfreedom that tended to obscure the uniqueness of racial slavery. Servants were bound over to masters for five or seven years. They couldn’t marry. They couldn’t own property. They belonged to their masters, who could sell them. Servitude was not life-time and was not racially-based, but it was a form of dependency and unfreedom. The Revolution attacked bonded servitude and by 1800 it scarcely existed anywhere in the US.
The elimination of servitude suddenly made slavery more conspicuous than it had been in a world of degrees of unfreedom. The antislavery movements arose out of these circumstances. As far as most northerners were concerned, this most base and despicable form of unfreedom must be eliminated along with all the other forms of unfreedom. These dependencies were simply incompatible with the meaning of the Revolution.
After the Revolution, Virginia had no vested interest in the international slave trade. Quite the contrary. Virginians began to grow wheat in place of tobacco. Washington does this, and he comes to see himself as more a farmer than a planter. He and other farmers begin renting out their slaves to people in Norfolk and Richmond, where they are paid wages. And many people thought that this might be the first step toward the eventual elimination of slavery. These anti-slave sentiments don’t last long in Virginia, but for a moment it seemed that Virginia, which dominated the country as no other state ever has, might abolish slavery as the northern states were doing. In fact, there were lots of manumissions and other anti-slave moves in Virginia in the 1780s.
But the black rebellion in Saint-Domingue—the Haitian Revolution—scares the bejesus out of the southerners. Many of the white Frenchmen fled to North America—to Louisiana, to Charleston, and they brought their fears of slave uprisings with them. Then, with Gabriel's Rebellion in Virginia in 1800, most of the optimism that Virginians had from 1776 to 1790 is gone.
Of course, I think the ultimate turning point for both sections is the Missouri crisis of 1819–1820. Up to that point, both sections lived with illusions. The Missouri crisis causes the scales to fall away from the eyes of both northerners and southerners. Northerners come to realize that the South really intended to perpetuate slavery and extend it into the West. And southerners come to realize that the North is so opposed to slavery that it will attempt to block them from extending it into the West. From that moment on I think the Civil War became inevitable.
Q. There’s the famous quote from Jefferson that the Missouri crisis awakened him like a fire bell in the night and that in it he perceived the death of the union...
A. Right. He’s absolutely panicked by what’s happening, and these last years of his life leading up to 1826 are really quite sad because he’s saying these things. Reading his writings between 1819 and his death in 1826 makes you wince because he so often sounds like a southern fire-eater of the 1850s. Whereas his friend Madison has a much more balanced view of things, Jefferson becomes a furious and frightened defender of the South. He sees a catastrophe in the works, and he can’t do anything about it.
His friend Adams was, of course, opposed to slavery from the beginning, and this is something that Hannah-Jones should have been aware of. John Adams is the leading advocate in the Continental Congress for independence. He’s never been a slaveowner. He hates slavery and he has no vested interest in it. By 1819–1820, however, he more or less takes the view that the Virginians have a serious problem with slavery and they are going to have to work it out for themselves. He’s not going to preach to them. That’s essentially what he says to Jefferson.
By the early nineteenth century, Jefferson had what Annette Gordon-Reed calls “New England envy.” His granddaughter marries a New Englander and moves there, and she tells him how everything’s flourishing in Connecticut. The farms are all neat, clean and green, and there are no slaves. He envies the town meetings of New England, those little ward republics. And he just yearns for something like that for Virginia.
Q. How is it that the American Revolution raises the dignity of labor? Because it seems to me that this concept certainly becomes a burning issue by the time of the Civil War.
A. It’s a good question. Central to the middle class revolution was an unprecedented celebration of work, especially manual labor, including working for money. For centuries going back to the ancient Greeks, work with one’s hands had been held in contempt. Aristotle had said that those who worked with their hands and especially those who worked for money lacked the capacity for virtue. This remained the common view until the American Revolution changed everything.
The northern celebration of work made the slaveholding South seem even more anomalous than it was. Assuming that work was despicable and mean was what justified slavery. Scorn for work and slavery were two sides of the same coin. Now the middle-class northerners—clerks, petty merchants, farmers, etc.—began attacking the leisured gentry as parasites living off the work of others. That was the gist of the writings of William Manning, the obscure Massachusetts farmer, writing in the 1790s. This celebration of work, of course, forced the slaveholding planters to be even more defensive and they began celebrating leisure as the source of high culture in contrast with the money-grubbing North.
Slavery required a culture that held labor in contempt. The North, with its celebration of labor, especially working for money, became even more different from the lazy, slaveholding South. By the 1850s, the two sections, though both American, possessed two different cultures.
Q. In my discussion with James Oakes, he made the point about the emergence of the Democratic Party in the 1820s, that in the North it can’t do what the southern slave owners really want it to do, which is to say slaves are property, but what they do instead is to begin to promote racism.
A. That’s right. When you have a republican society, it’s based on the equality of all citizens, and many whites found that difficult to accept. And they had to justify the segregation and the inferior status of the freed blacks by saying blacks were an inferior race. As I said earlier, in the Colonial period whites didn’t have to mount any racist arguments to justify the lowly status of blacks. In a hierarchical society with many degrees of unfreedom, you don’t bother with trying to explain or justify slavery or the unequal treatment of anyone. Someone like William Byrd never tries to justify slavery. He never argues that blacks are inferior. He doesn’t need to do that because he takes his whole world of inequality and hierarchy for granted. Racism develops in the decades following the Revolution because in a free republican society, whites needed a new justification for keeping blacks in an inferior and segregated place. And it became even more complicated when freed blacks with the suffrage tended to vote for the doomed parties of the Federalists and the Whigs.
Q. The 1619 Project claims basically that nothing has ever gotten any better. That it’s as bad now as it was during slavery, and instead what you’re describing is a very changed world...
Q. You spoke of the “consensus school” on American history before, from the 1950s, that saw the Revolution, I think, as essentially a conservative event. And one of the things that they stressed was that there was no aristocracy, no native aristocracy, in America, but you find, if I recall your argument in The Radicalism of the American Revolution, that though aristocracy was not strong, it was something that was still a powerful factor.
A. There’s no European-type aristocracy, the kind of rich, hereditary aristocracy of the sort that existed in England—great landholders living off the rents of their tenants. But we had an aristocracy of sorts. The southern slaveholding planters certainly came closest to the English model, but even in the more egalitarian North there was an aristocracy of sorts. Men of wealth and distinction, whom we would label elites, sought to make the title of gentleman the mark of a kind of aristocracy. “Gentleman” was a legal distinction, and such gentlemen were treated differently in the society because of that distinction. With the Revolution, all this came under assault.
It’s interesting to look at the debates that occur in the New York ratifying convention in 1788. The leading Anti-Federalist, Melancton Smith, a very smart guy but a middling sort with no college degree, gives the highly educated Alexander Hamilton and Robert Livingston a run for their money. He calls Hamilton and Livingston aristocrats and charges that the proposed Constitution was designed to give more power to the likes of them. Hamilton, who certainly felt superior to Smith, denied he was an aristocrat. There were no aristocrats in America, he said; they existed only in Europe. That kind of concession was multiplied ten thousand-fold in the following decades in the North, and this denial of obvious social superiority in the face of middling criticism persists even today. You see politicians wanting to play down their distinctiveness, their elite status. “I can have a beer with Joe Six-pack,” they say, denying their social superiority. That was already present in the late 1780s. That’s what I mean by radicalism. It’s a middle-class revolution, and it is essentially confined to the North.
Q. You were speaking earlier of the despair of Madison, Adams and Jefferson late in life. And it just occurred to me that they lived to see Martin Van Buren.
A. That’s right. Van Buren is probably the first real politician in America elected to the presidency. Unlike his predecessors, he never did anything great; he never made a great speech, he never wrote a great document, he never won a great battle. He simply was the most politically astute operator that the United States had ever seen. He organized a party in New York that was the basis of his success.
Did you know that the “founding fathers” in the antebellum period are not Jefferson and Madison and Washington and Hamilton? In the antebellum period when most Americans referred to the “founders,” they meant John Smith, William Penn, William Bradford, John Winthrop and so on, the founders of the seventeenth century. There’s a good book on this subject by Wesley Frank Craven [The Legend of the Founding Fathers (1956)].
It’s Lincoln who rescues the eighteenth-century founders for us. From the Civil War on, the “founders” become the ones we celebrate today, the revolutionary leaders. Lincoln makes Jefferson the great hero of America. “All honor to Jefferson,” he says. Only because of the Declaration of Independence. Jefferson didn’t have anything to do with the Constitution, and so Lincoln makes the Declaration the most important document in American history, which I think is true.
Q. For our readership, perhaps you could discuss something of the world-historical significance of the Revolution. Of course, we are under no illusion that it represented a socialist transformation. Yet it was a powerful revolution in its time.
Q. One of the ironies of this Project 1619 is that they are saying the same things about the Declaration of Independence as the fire-eating proponents of slavery said—that it’s a fraud. Meanwhile, abolitionists like Frederick Douglass upheld it and said we’re going to make this “all men are created equal” real.
A. That points up the problem with the whole project. It’s too bad that it’s going out into the schools with the authority of the New York Times behind it. That’s sad because it will color the views of all these youngsters who will receive the message of the 1619 Project.
Fall is here and it feels so good outside this morning. The kind of good I used to feel watching Ali fight, especially when I won a dollar from one of my teachers when Cassius Clay beat Liston the first time. The kind of good I used to feel eating my Mother's fried chicken on Sundays. The kind of good I used to feel after a hot date turned out well. Like the Beatles' fool on the hill, I see the world spinning round on a beautiful fall day like this.
Why won’t the GOP do more to avert so many foreseeable tragedies? Because it is afraid to take on anti-vaxxers and covid deniers, oil and gas interests, and the gun lobby. Due to a combination of extremism and expedience, Republicans are allowing problems to fester at great cost rather than dealing with them at the source.
The series of meetings comes five days before the House plans to take up a roughly $1 trillion package to improve the nation’s infrastructure — and as the prospects of a government shutdown and a breach of the debt ceiling loom.
New bombshells show Trump's coup threat was real and hasn't passed
Tuesday, September 21, 2021
“What’ll we do with ourselves this afternoon?” cried Daisy, “and the day after that, and the next thirty years?”
Decisions, decisions. Leave it to that great existentialist Daisy Buchanan to put things in perspective.
As for me, I'll work till 5 and let tomorrow and the next thirty years take care of themselves. I have my hands full this afternoon with work. I can't see beyond that.
Senate Minority Leader Mitch McConnell (R-Ky.) has insisted Republicans will not answer pleas from the Biden administration to help increase or suspend the debt ceiling.
The White House press secretary said President Biden believes Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, is “patriotic” and has “complete confidence in his leadership.”
Even tech optimists admit that human capacities are limited in comparison with the digital edifice we have built, with potentially grave implications for our health.
“I think that I cannot preserve my health and spirits,” Thoreau wrote in his 1862 essay “Walking,” “unless I spend four hours a day at least—and it is commonly more than that—sauntering through the woods and over the hills and fields, absolutely free from all worldly engagements.” Who nowadays feels absolutely free from worldly engagements? Urban Dictionary has the term “goin’ Walden” as “the act of leaving the electric world and city and retreating to the country to obtain spiritual enlightenment or just to reflect for a while. Derived from the unexplicably LONG classic by Henry David Thoreau.”
A couple of years ago I took a chance to “go Walden” in person, if not in spirit. Brown University had invited me over from Scotland to give a lecture, and the following day I took a train up to Cambridge to stay with a friend. We left on bicycles the following morning. I don’t remember its taking longer than a couple of hours—wide trails, autumnal forests, glints of rivers, and front yards decked with plastic skeletons for Halloween. By the Mill Brook in Concord we stopped for a sandwich and, a few minutes later, there it was: Walden Pond. I’d expected the woods to be busy, but what I hadn’t anticipated was the forest of arms holding smartphones, taking selfies, engaging in video calls.
“Man is an embodied paradox,” wrote Charles Caleb Colton a generation before Thoreau, “a bundle of contradictions.” Why shouldn’t we create digital experiences of somewhere emblematic of analogue living, or sharable files of a place of disconnection? Rough-hewn stone markers, chained together like prisoners, had been arranged in a rectangle on the site of the great man’s cabin. On one of them sat a teenager, arms outstretched, angling his phone while shouting out to his retreating companions: “I don’t think you guys realize how much this place means to me, I mean, privately.” Later, back in Cambridge, I tweeted some pictures of the place myself.
In his 2010 book Hamlet’s BlackBerry, William Powers suggested that society might benefit from the creation of “Walden Zones”—areas of the home where digital technologies are banned in order to encourage more traditional methods of human connection. Given the ubiquity of devices among modern-day pilgrims to Walden, that ambition seems quaint, even futile.
As a working family physician, I’m shown examples every day of the ways in which new digital technologies are Janus-faced, both boon and curse, strengthening opportunities to connect even as they can deepen a sense of isolation. “Virtual” means “almost,” after all, and was hijacked as a descriptor of the digital world because it was once taken for granted that the creations of Silicon Valley aren’t quite real. Many people now are less sure of the distinction between virtual and actual. After more than a year of pandemic lockdowns through which we’ve all been obliged to meet virtually, I hear more and more patients complain that they emerge from sessions online feeling tense, anxious, low in mood, or with a pervasive feeling of unreality. Virtual connections have sustained us through a time when real interactions were too risky, but it’s worth asking: To what extent do those technologies carry risks of their own?
Susan Matt and Luke Fernandez are a married couple, respectively professors of history and of computing at Weber State University. They describe in their book how every morning for eighteen years they’ve breakfasted together in a house looking out over Ogden, Utah, toward the Great Salt Lake. Back in 2002 they’d listen to the radio and take in the view, then a few years later they added a newspaper but kept the radio going. Now they mostly check the news on their phones. In Bored, Lonely, Angry, Stupid: Changing Feelings About Technology, from the Telegraph to Twitter, they write, “We thought we could take it all in: the view, the news, the radio, our conversation. Our attention seemed limitless.”
But they began to wonder what they had traded for this new habit. Slowly, awareness dawned that the Internet was “changing our emotions, our expectations, our behaviors.” It seemed to offer too much information and too often the wrong information. They’d look at their phones with disappointment when their posts didn’t get the hoped-for attention, or relish the “dopamine fix” when they did. The little screens issued a “siren call” offering freedom to vent anger, which was almost always followed by regret and a nagging sense of frustration.
They knew that the unsettled feelings they were experiencing were nothing new: as far back as 1881 a neurologist named George Beard wrote of an epidemic of “American nervousness” caused by the proliferation of telegraph wires, railroad lines, and watches, all heightened by the “stress of electoral politics,” as if democracy itself were another disruptive human technology. Radio sets, too, were once thought of as an insidious threat to mental harmony: in 1930 Clarence Mendell, then dean of Yale, said that he’d prefer students didn’t bring them on campus (“I believe that life is already too complicated and noisy”). The sensationalism and immediacy of television was at first seen as a destroyer of family rituals—and manners.
So how much of the authors’ anxiety can be explained as middle-aged antipathy to change? Matt and Fernandez’s book is a scholarly attempt to track changes in social norms and in human emotions occasioned by advances in technology across a couple of centuries, but it concludes that our twenty-first-century situation is different from those earlier shifts both in the rate of change and in the problems introduced by cybertechnologies. They set out to examine what they see as a “new emotional style” taking shape, and single out six aspects of human experience for scrutiny: narcissism, loneliness, boredom, distraction, cynicism, and anger.
The word “narcissism” was coined in 1898 by Havelock Ellis—before that “vanity” was more often used, almost always pejoratively. For the average nineteenth-century American, to be vain was to sin against God. But by the mid-nineteenth century the increased presence in middle-class homes of mirrors, and then of photographs, made a virtue of self-regard, and by the 1950s anxiety over being seen as vain was giving way to a powerful social and marketing focus on the promotion of “self-esteem.” Instagram hasn’t done away with the shame of narcissism, it has just transformed it. “Contemporary Americans use social media and post selfies to advertise their success and to celebrate themselves, but they do it with anxiety,” write Matt and Fernandez, quoting interviews with young people who fret ceaselessly over how to get more digital affirmation and who describe their constant posting and “liking” as the most potent ways they have at their disposal to feel as if they exist.
Phones, because they are carried everywhere, have the potential to make us obsess over our own image in a way that mirrors or photographs never could. Of all the mental-health side effects of smartphone use, this, to me, is their most pernicious aspect. The creeping obligation to carry one with you at all times—to be constantly available to your friends and colleagues, to scan QR codes in order to complete basic transactions, to keep track of your step count—means that almost everyone, myself included, now carries a pocket panopticon. Narcissus had to find a pool to gaze into; we just pull out our phones.
The authors offer evidence that suggests the experience of loneliness, too, is being molded by the prevalence of smartphones: firstly by shrinking our capacity to appreciate solitude, secondly by unrealistically inflating our expectations about the number of social connections we should have, and thirdly by implying that “sociable fulfillment is easy, always possible, and the norm.” The ethereal and unreliable connections offered by social media are, for many who end up in my clinic, addictive but deeply unsatisfying. Online relationships don’t assuage loneliness but trap their users in a double bind of “How can I feel so bad if I’ve got this many ‘friends’?”—a bind that has been acutely exacerbated by the enforced isolation of the pandemic. That’s before considering the effect on those who don’t have many online or offline friends, and on teenagers who receive bullying messages through their phones or are socially ostracized. Teenagers now take school bullies home with them on their screens.
In clinic I regularly see kids, both popular and unpopular, who’ve developed the habit of scratching themselves with blades or taking small, symbolic overdoses of medication such as painkillers. No matter how hard I try to convince them otherwise, these young people tell me that the pain and catharsis of self-harm is the only reliable way they have to relieve their distress and to feel they’re really living. Rates of deliberate self-harm among ten-to-fourteen-year-old girls, by cutting or self-poisoning, nearly tripled in the US between 2009 and 2015. Although the ubiquity of smartphones makes it difficult now to establish a control group, there’s gathering evidence that social media use is a major causative factor. Those trends are also found in the UK: in 2019 a paper using data from the prestigious Millennium Cohort Study found that among fourteen-year-old girls, mental well-being deteriorated in proportion to hours spent on a smartphone, with almost 40 percent of girls who used their phones five to six hours a day suffering mental health problems, compared with about 15 percent of those who spent half an hour or less a day on their phones.
Matt and Fernandez also explore evidence of a contemporary decline in the ability to focus attention. They quote Les Linet, a child psychiatrist who studies attention deficit/hyperactivity disorders (ADD/ADHD) and who thinks of the conditions as “search for stimulation” disorders. In this view, sufferers of attention deficit easily become bored and can’t extract stimulation from ordinary environments. Apparently there is now a “boredom proneness scale,” and I wouldn’t be surprised if “pervasive boredom disorder” makes an appearance in lists of psychiatric diagnoses soon. (You read it here first.) In 2012, 11 percent of American youth had been diagnosed with some kind of attention deficit disorder, yet more than a quarter of college students at that time used drugs for the condition without prescriptions, as study enhancers.
The line between what constitutes clinically significant attention deficit and the heightened distractibility of the smartphone age is blurred, complicated by the increasing social expectation of access to psychiatric drugs based on desire and preference rather than clinical need—an expectation that makes the condition routinely overdiagnosed. My practice sits beside the University of Edinburgh, and every exam season I meet students who’ve noticed just how many of their American peers turbocharge study sessions with Ritalin and Adderall, and have approached me for a formal diagnosis so as to be eligible for a legal prescription and not miss out. “In the attention economy,” write Matt and Fernandez, “the chief means for making profits is through distraction. Because these distractions have become integral to the business process, they are at once vastly more ubiquitous and much more entrenched.”
Maël Renouard, a French novelist, translator, and former political speechwriter, is more hopeful and celebratory. For him the Internet is a playground rich in human possibility, but at the same time he is overwhelmed by it, worshipful in his admiration the way someone in the Middle Ages might have been overawed by a cathedral. His book Fragments of an Infinite Memory offers a series of thought experiments on the possibilities of online connectivity, winging the reader on flights of fancy that circle around the Internet’s impact on academia, our social lives, and its near-limitless capacity to fuel both nostalgia and the search for what’s new. It’s allusive and full of unexpected digressions, structurally experimental and ironic.
Googling yourself is a notoriously perilous activity, and Renouard tells a story attributed to Pliny the Elder of how new Roman emperors would have a carnifex gloriae (a butcher of glory) whispering “Remember that you will die” to them as they were carried through cheering crowds. “We, too, have our carnifices gloriae interretiales,” Renouard writes. “Every individual who attains the least bit of notoriety is sure to see, on the internet, a few not very obliging comments attached to his name.” And we no longer have the touching faith of the early years of the Internet, when it was believed that anonymity would facilitate sincerity—it’s clear now that it accelerates the reverse.
Whenever I hit the wrong button and ask how much time it would take to walk from Paris to Marseille or La Rochelle, and the machine calculates the number of days required for such a trip, extravagant in our times, I feel as though I’ve suddenly been transported to the Middle Ages, or into Marguerite Yourcenar’s The Abyss—and if my finger accidentally selects bicycle, silhouettes from 1940 appear in my mind’s eye, as if our technical capacities had carried us to the top of a kind of watchtower, from which we could look back on the most distant past and see it rise at the end of the landscape behind the last line of hills, like a troupe of minstrels, guildsmen, or penitents pursuing their long migrations through the forests and fields of Europe.
For Renouard, we are seeing the end of a civilization built and sustained by paper. Though reports on the death of the book have been repeatedly exaggerated, Renouard is adamant that we are all witness to that civilization’s passing, just as our great-grandparents were witness to the death of a world dependent on the strength, speed, and servitude of horses. “Books will never completely vanish,” Renouard writes, “just as horses are still around, in equestrian clubs, where they are objects of aesthetic worship and enjoy a level of care whose technical precision verges upon a science.” The books we love will become luxury items to be cherished, but corralled into artistic, leisurely circles.
The other day I tried giving a friend of mine a DVD of a movie I thought she’d enjoy. She looked at me blankly. “A DVD?” she said. “Do you guys still have a player?” It put me in mind of the millions of tons of DVDs now sedimenting slowly into the Anthropocene strata of the world’s landfill dumps, pushing down on the CDs, themselves piled over VHS and Betamax cassettes, LP and gramophone records. Owning a book for its beauty is one thing, but owning a book for the information it holds? Why would you? Not when Google has promised to “organize the world’s information” for the price of your advertising preferences.
And photography? Renouard quotes his friend B., who’d just returned from a trip touring Tunisia and hadn’t taken a single picture. “I told myself, more or less consciously,” B. said, “that I could find on Google Images as many representations as I liked of the places where I went, with an ease directly proportionate to the beauty of the place.” “More or less consciously” is an insight to savor, illuminating how deeply digital possibilities are able to reorganize our thinking. This change didn’t feel like an impoverishment but an enrichment: “I was glad to be fully present to the landscapes,” B. explains.
This, then, is the great liberation of Renouard’s Internet—a limitless provider of auxiliary memory, a robotic prosthesis we carry everywhere with us. Our phones might be silently counting our steps in our pockets, but it’s a door in our minds they’ve kicked open—a door that leads directly into the World Wide Web. Fragments of an Infinite Memory began, Renouard writes, with the “daydream that one day we would find on the internet the traces of even the most insignificant events of our existence,” as if “the internet was simply the medium that gave access to Being’s spontaneous memory of itself.”
But for all his tech-savvy optimism, the ease with which he slips into this new medium, Renouard ultimately shares the fears of Matt and Fernandez: human capacities are limited in comparison with the digital edifice we have built. We are “fragile beings in a kind of ontological backwater,” ill-adapted to the speed and possibility of what until twenty years ago was still called an “information superhighway.” We don’t call it that anymore because it reifies something that is no longer extraordinary. Indeed, that “superhighway” dominates and overlooks our lives and our minds, so tattered and vulnerable, “like an old neighborhood in a vastly expanding city.”
Renouard thinks that digital natives who have always known an Internet wouldn’t even notice the way older, shabbier media are being crowded out and, when those media are gone, are unlikely to feel they’ve lost anything. For the new generation, their “real” memories will be precisely those that have left a “virtual” trace: the
fragility of personal memory is limited to those amphibious beings who have lived a significant portion of their lives before the advent of the internet…. Perhaps those who grow up with the internet will leave enough traces of themselves to find their way through their own memories without fail.
For Renouard, the steadily shrinking neighborhood of his old “amphibious” order feels as if it’s under siege.
If Matt and Fernandez are the Priams of this siege, offering wise and cautious counsel, Renouard might be the Paris, the bon vivant happy to sleep with the enemy. Howard Axelrod, then, is its Cassandra—uttering calamitous prophecies of imminent techno-apocalypse, unheeded. In his book The Stars in Our Pockets, Axelrod notes that he declines even to make use of a cell phone, and reports a conversation he had on a date with an exasperated woman who tells him “even homeless people have smartphones”:
I didn’t know how to explain that I wasn’t afraid of iPhones but of phone eyes, of having to adjust to yet another way of orienting in the world. I made an overdetermined joke about no one needing a writer to administer a semicolon in an emergency, received no response, then said something about loving airports as a kid, people always arriving from afar, each gate a portal to a different city, the feeling of the whole world brought closer together, but that as much as I still enjoyed passing through airports, I wouldn’t want to feel as though I was living in one.
It’s a beautiful moment, elegantly expressed, Axelrod managing to communicate both the strength of his conviction and the difficulty of persuading others to share it.
The Stars in Our Pockets explores his anxiety over what phones are doing to our brains, and builds on the success of his 2015 memoir The Point of Vanishing, an account of his two-year retreat into the Vermont woods after an accident in which he lost sight in one eye. His partial blinding occasioned a reckoning, a recalibration of his relationship with the world.
Near the beginning of The Stars in Our Pockets, Axelrod lists some of the more worrying studies he has read of the effects of the digital revolution on human experience and interaction. Online connectivity has diminished our ability to empathize (“down 40 percent in college students over the past twenty years,” according to a meta-analysis in Personality and Social Psychology Review) and has made us intolerant of solitude. (Given a choice of doing nothing for fifteen minutes or giving themselves a mild electroshock, nearly half of the subjects in one experiment chose the distraction, according to an article by Timothy D. Wilson in Science.) For Renouard, to look into the transport options on Google Maps was to have his historical consciousness expanded, but for Axelrod, those same maps are a value system that deceives the user into thinking himself or herself at the center of the world. He quotes studies showing that GPS users have poorer memories and spatial orientation, and recounts the now familiar repercussions of overreliance on satellite navigation: those stories of people who’ve driven off a cliff or into a field, or who set out for a town in Belgium and followed their GPS all the way to Croatia.
On the subject of the deleterious impact of digital connectivity on mental health, he cautions that, although correlation is not causation, “suicide among American girls has doubled in the last fifteen years.” Jean Twenge of San Diego State University published a sobering review of the psychiatric literature last year, finding correlations between smartphone usage among teen girls and young women and declines in measures of their happiness, life satisfaction, and “flourishing.” Twenge also found plenty of evidence to suggest that smartphone usage has something to do with increases in
loneliness, anxiety, depressive symptoms, major depressive episodes in the past year, hospital admissions for self-harm behaviors (nonsuicidal self-injury), suicidal ideation, self-harm and suicide attempts via poisoning, suicide attempts, self-reported suicidal ideation, and in the suicide rate.
The connection was also present, but weaker, for teen boys and young men. It’s worth pointing out that other studies, including a major report in 2019 by the UK’s Royal College of Pediatrics and Child Health, have not found the same correlation, and that in those studies that do show a harmful effect it has been difficult to establish the extent to which social media use is the problem or whether other uses, such as gaming, also contribute.
Twenge proposes a variety of mechanisms for the smartphone effects she cites, most notably the displacement and disruption of face-to-face social interactions, phones’ capacity to interfere with sleep, their facilitation of cyberbullying, and the opportunities they offer to learn about modes of self-harm.
The value systems built into our phones are anything but neutral. Early in the digital revolution a relatively small group of people decided that algorithms would be tailored to drive maximal engagement with social media, regardless of content, that search engines and social media themselves would be almost entirely funded by advertising, and that this advertising would be targeted with the aid of relentless online surveillance. Axelrod’s book reminded me of Radical Attention, a recent book-length essay by Julia Bell, who laments the way that the Internet as we know it is largely the product of one place and one demographic group,
predominantly white men of a certain class and education who lived (and live) in the glare of the Pacific light of Silicon Valley, influenced by the mixture of hippie idealism, Rayndian [sic] libertarianism, and gothic poverty that defines California.
The Internet could have been otherwise, and if Silicon Valley or its competitors can be shaped by a greater diversity of voices, it might yet be.
The Stars in Our Pockets isn’t really a memoir or a polemic but a sequence of meditations on what we risk losing as we offer phones ever more control over our lives. It’s less playful than Renouard’s book, less substantial than Matt and Fernandez’s, but more intimately personal than either. Axelrod describes turning off an app like Twitter and feeling “a queasy sense of leftover energy, of having been tricked into throwing punches in the dark.”
Virtual realities are a poor substitute for human engagement, and almost by way of contrast, Axelrod describes a vivid, transcendent encounter he had with an old flame in the woods around—wait for it—Walden Pond. The couple walked around the water, reflecting on life, their advancing age, the contracting time that each of us has left, and the imperative of choosing wisely what we do with it. Elsewhere Axelrod quotes William James: “Each of us literally chooses, by his ways of attending to things, what sort of a universe he shall appear to himself to inhabit.”
We can’t say we haven’t been warned. When patients come to my clinic for help with alcohol, I explore what their life might be like without that drug, and whether they’re prepared for whatever demons might emerge when they get sober. Since these three books were published, the forced distance of the pandemic has obliged all of us to connect digitally, and now it’s common to see people with a different kind of dependency: strung out with Internet anxiety, sleepless with digital overload, having difficulty distinguishing between what’s real and what’s virtual, and not knowing what their life would be without the Internet. My advice? It has never been more important to make time to find out.
Gavin Francis is a primary care physician in Edinburgh. His latest book, Recovery: The Lost Art of Convalescence, will be published in the UK in January. (September 2021)