Labor of Love

It took a Decider article I read late last night to remind me that ten years ago yesterday, the television series Mad Men broadcast what is arguably the best single episode of its run, “The Suitcase.”

In it, Don Draper and Peggy Olson are working late in the office on a presentation for a client. The late hour is not unusual, but Olson is sacrificing a dinner with family and a new beau, and she’s torn about it: she wants to do this task, with Don and for Don, but she also wants and needs to be at that dinner. She has put off the persistent calls of her boyfriend, who wants her to stop what she’s doing and come to dinner, all the while dealing with Don’s grumpiness at a project that’s not going well. The boyfriend’s annoyance doesn’t sit well with Olson, but she’s caught in this situation, as so many women are. If she doesn’t please the boyfriend, she’ll lose him; if she does please him, she’ll lose the work, and Draper’s respect.

That “respect” is something else that has burrowed its way under Olson’s skin. She has long since recognized Draper’s acumen as Creative Director at Sterling-Cooper, and has complimented him many times, but he has never returned the favor. As the evening progresses, they argue, and Olson lets him know point-blank how little appreciated she feels for the work she does–the long hours, the menial tasks she’s often given. The more Draper defends himself, reminding her that she gets paid for her work, the angrier Olson gets, and she does something I wish to heaven she would not do: she begins to cry. “But you never say ‘thank you’!” she says.

Draper responds with one of the most memorable lines in the history of the show: “That’s what the money is for!”

This episode encapsulates the entire relationship of bosses and workers in twentieth-century America. Too often, bosses think that the checks they sign every week pay for everything. Workers, in their turn, forget that bosses are workers, too. A boss at Ford or Apple or IBM might well forget that a kind word or an appreciative compliment often means as much as a paycheck, but there’s less excuse for a boss at a smaller company like Sterling-Cooper neglecting to praise someone who does consistently good work. By the same token, bosses are bound by the same consequences for success and failure as their fellow employees. If Sterling-Cooper fails to impress a client, if a company fails to secure a needed contract, the head that will roll is Don Draper’s or some other boss’s, not those of the people who work for him. Getting the business in the first place is Draper’s responsibility, a boss’s responsibility, and it wears on him. In a much later episode of the series, Draper has cracked; his alcoholism has nearly cost him his position at the firm, and it takes the sharp rebuke of his friend and fellow alcoholic, Freddy Rumsen, to set him straight:

“Do the work, Don,” Rumsen says.

That is the task for all of us. That is the basic human purpose–to do the work, whatever it is. That is what Labor Day honors–the worthiness and value of work. Is there exploitation in work? Is there suffering? Certainly. The greatness of Mad Men lies in the fact that, every week, we see it all: the process of work and how it’s done; the pettiness of interoffice relationships; the cost of success–and failure–when it comes.

But we see something else, as well. When Sigmund Freud was asked what the two most important things in life were, he replied, “Loving and working.” Note that he puts “love” first, and “work” second. We work because we love, not the other way around. We may also show love through our work, as Glen Campbell’s song, “Wichita Lineman,” popular on the radio during Mad Men’s era, reveals, but love is the ground of all that we do. If we cannot show it, if others cannot see it, the work we do has little meaning. “The Suitcase” shows us this truth beautifully.

I mentioned that I wish Peggy Olson hadn’t cried in her complaints to Draper. Her tears are understandable, but not appropriate for the world she’s trying to enter. Yet they do the trick. Don understands her frustration and acknowledges it, as they begin to drink and relax together, as colleagues. Olson abandons the idea of showing up at that dinner and, as she does so, she emerges from Draper’s shadow for the first time as her own person. She has always been a cipher for Draper. Her career–the steps she takes, the work she does–was Draper’s career, too, the early years we never got to see on the show. All of the supporting characters are reflections of Draper in some way, but Olson is closest to Draper himself. Yet now she is herself. Her tears and her vulnerability allow him to respond in the same way.

Late that night and early the next morning, he gets a phone call from the West Coast from the niece of Anna Draper. Anna was the wife of the real Don Draper, the officer in the Korean War whose identity Dick Whitman stole in order to survive, desert, and return to the States to live. Anna understood what Dick Whitman did, but loved him anyway. Now, as her niece informs him that Anna has died of cancer, Draper begins to cry, and the sound awakens Olson, who’s been sleeping on the couch. Draper laments the loss of the only person in the world who’s ever truly understood him, but Olson puts her hand on his shoulder and reminds him that that isn’t true. They share that moment and then, in true Mad Men fashion, hours later, go back to work, as if nothing had happened, but knowing that something has, something that will enrich all their work to come.


In The Aftermath

Today marks the fifteenth anniversary of Hurricane Katrina striking Louisiana and Mississippi on Aug. 29, 2005–an event I lived through from my apartment in New Orleans East.  In remembrance of that terrible storm, and in tribute to the Louisianians who are now rebuilding the shattered places of Cameron Parish and Lake Charles, Louisiana, after Hurricane Laura, I repost below an entry I made five years ago on this date, “The City On A Hill.”

Maggie Galehouse of the Houston Chronicle offers up a brief list of books that explore the impact of Hurricane Katrina along the Gulf Coast.  I would add two others to her list:  Sheri Fink’s Five Days At Memorial, and Rebecca Solnit and Rebecca Snedeker’s Unfathomable City: A New Orleans Atlas, the latter being quite the best general book published on the city in many years.

As the waters receded a decade ago and the authorities began to take stock of exactly what happened in New Orleans, the damage and the fatalities were almost too much to take in.  As my rescuers from Miami-Dade EMS hauled me into the boat and we drifted west down I-10 toward safety, I could see bodies floating in the water around us.  Even then, however, I had no idea of the scale of the tragedy.  I knew it was bad:  the constant helicopter flights to Slidell, LA, north of Lake Pontchartrain over the previous week told me that, and I greatly feared for the survival of that community; but the idea of 1,833 people dead was, in those early moments of rescue, beyond my comprehension.

There was talk in the immediate aftermath of the hurricane of abandoning New Orleans, of simply letting it go; there was more talk in the weeks after the storm of not rebuilding the parts of the metro area (including New Orleans East) that had been most severely damaged.  My own hope was that the area around Downman Road and Morrison Road and Chef Menteur Highway farther to the east would not be rebuilt but would instead be used as a flood plain.  Decades of human-caused coastal erosion have wiped away the natural protection that New Orleans used to receive against every hurricane that approached near the mouth of the Mississippi, and I thought that the city needed every bit of natural help it could get.  My wish was not granted.  New Orleans East has rebounded.  I’m told that the large apartment complex I used to live in has been rebuilt, although I have not been back to see it and will not go back to see it.

Arguments to the contrary notwithstanding, there were good reasons to rescue New Orleans and rebuild the city.  First among those reasons was the quality of its citizens.  There are many, many good people there–talented, resilient, resourceful, and brave people.  We may be justly faulted for staying through such a destructive storm but, as I said yesterday, the storm itself was no worse than many others we had survived in the past.  Although we knew the risks of a levee failure, no one who stayed behind had absolute knowledge that the levees would fail on this occasion.  The cost of abandoning our homes, our apartments, and our businesses, had we abandoned them, would have been even higher than it was.  It’s very hard to salvage anything if one is not around to salvage it.  I have been grateful every day since the storm that I chose to ride it out in my apartment rather than in the chaos of the Superdome.  I made a tough choice, maybe a poor one, but there were worse choices I could have made.  I did not panic; I stayed calm.  I was certain that, eventually, I would be rescued.  All I had to do was wait.

I cannot justify the behavior of the criminals who ransacked businesses along Canal Street, or that of the police who abandoned their posts, committed crimes themselves, and then tried to cover up their acts.  I do, however, have some sympathy for Police Chief Eddie Compass, a good man who was overwhelmed by appalling circumstances.  I have no sympathy for Mayor Ray Nagin, a man in over his head as mayor from day one, a man who had every opportunity to rise to the occasion as leader of the city but failed miserably to do so.  I voted for him, but seeing his constant shifting of blame, his grandstanding, and his sickening playing of the race card in his “Chocolate City” speech, I’m ashamed of having done it.  He almost single-handedly undid all the good that citizens and business owners–black, white, Asian–had done over the previous decade to get a handle on our crime problem, and to employ young workers who previously had been difficult to employ, either because of their criminal pasts or their lack of education and training.  It was those people, the business owners and the persons they employed, people that I knew, that made New Orleans worth saving.

The second reason for saving New Orleans is simply its historical significance.  It’s perfectly all right that St. Augustine, Florida, has primacy of place in our country’s history.  The fact remains that there is, and always will be, only one New Orleans; only one city with its precise blend of political history, architecture, food, music, and ethnicity stretching back to 1718.  If America does someday lose New Orleans, we’ll never get it back.  What the city has in the mix of the elements I’ve just named is unique.  That mix can’t be transplanted.  You can try (the Houston Brennan’s is a very good restaurant), but you’ll fail (the gumbo at Houston Brennan’s cannot match what you’ll get at Mr. B’s or K-Paul’s in the French Quarter).

Yet there’s much more at stake.  If you do not preserve New Orleans, if you do not cherish it, what you’ll lose is the living embodiment, the flesh and blood example, of what Jesus meant when he talked about being “a city on a hill” in the gospel of Matthew.  Every city along the Gulf Coast, from Brownsville to Homestead, Florida, must suffer through hurricanes, but only New Orleans stands as a place and a people that will show you how to endure a storm and protect those things most worth keeping.  That spirit of survival–the spirit Jesus was really talking about–is learned behavior, year after year, and it is essential to all of us, whether we live along a fault line or in Tornado Alley or right next to the Canadian border in December.  If you want to know how to survive, if you want to know how it’s done, look to the south.  The city on a hill can’t be hid, and New Orleans can’t hide from anyone, despite the constant erosion that threatens its very existence.  Everything that makes the city great and every flaw in the city’s character is on display twenty-four hours a day.  The people who live there know full well when a storm is coming, but that is no matter to them.  A storm is always coming.  What matters to them is to live, even as the storm does what it will.


For National Book Lovers’ Day

In observance of National Book Lovers’ Day, I offer a repost of “Books I Have Loved the Most,” originally written back on June 7, 2014:

What follows is a short list of the books that have stayed with me in heart and mind over my life.  It is in no sense a list of the “best” books I’ve ever read, because the term “best” can carry–and should carry–a multitude of meanings.  That is, a book can be a “best” book because it’s a fine example of a genre; it can be best because its prose is well-written; it can be best because we treasure its beginning or its ending, or because we are impressed by a character or two.

But some books leave us with an impression of their whole, even if we forget some of the details over time.  We may–or may not–return to them now and again for sustenance that is both intellectual and emotional.  Even if we haven’t touched them in years, though, the books I’m writing about are part of our mental world, and we are happy to think about them.

5. Paradise Lost–We think of John Milton as an epic poet, a master of the large scale, and he is; but my love for him grew when I discovered him to be a genius of the small scale, as well.  Paradise Lost is filled with tiny, subtle moments that reveal Adam and Eve as fully human people.  The verse of the poem is packed with linguistic echoes of all the verses that have come before.  Those echoes are reminders that we have been here before in the action, but they also remind us at times that the fall of humanity is as much a psychological calamity as it is a physical one.  Take, for instance, Eve’s decision to separate herself from Adam in Book IX, despite the couple’s full knowledge that Satan is out and about, and could attack either one of them.  Eve says,

“The willinger I go, nor much expect / A Foe so proud will first the weaker seek; / So bent, the more shall shame him his repulse. / Thus saying, from her Husband’s hand her hand / Soft she withdrew” (ll. 382-386).  Everyone observes Eve’s verbal departure from Adam, her statement of doubt that Satan would attack her because he would be too embarrassed to be repulsed by the “weaker” of the couple in the Garden; but notice, also, the poet’s syntax:  Milton provides no end punctuation in “from her Husband’s hand her hand.”  Until this moment, Adam’s hand has been her hand, too; and the line break occurs when Eve withdraws from Adam to go her own way.  They are together; then they are not.  Linguistically, when the Fall happens, they become a couple we can all recognize, engaging in the greatest morning-after argument in the history of the world.  They still appear to be in the Garden, but in reality, they are somewhere else.  Listen to Adam later in Book IX:

“Is this the Love, is this the recompense / Of mine to thee, ingrateful Eve” (ll. 1162-1163).  If we’ve been reading the entire poem, we might remember what Satan says as he looks upon the prospect of Hell in Book I, ll. 242-245:  “Is this the Region, this the Soil, the Clime, / . . . this the seat /  That we must change for Heav’n?”  The rhythms of the two speeches echo each other.  Adam in his bitterness even briefly embodies and echoes God his creator, who, in Book III, had called Man an “ingrate,” sufficient to withstand temptation but unwilling to.  Yet this last echo sounds hollow, because Adam is no longer where God had placed him.  He’s not in the Garden; he is in Hell.  It was discovering moments such as these–moments and echoes that knit together all that Milton ever learned about the world and the humanity in it–that turned me into a lover of his finest poem.  Many of us today might reject his work as sexist, but I remind you that it is Eve who steps forward to accept the redemption that God offers.  It is she who helps them become a couple worth saving.

4. Sister Carrie–Theodore Dreiser’s turn-of-the-twentieth-century novel about the rise of Caroline Meeber, who becomes an actress, and the fall of her lover George Hurstwood still amazes me.  Dreiser really can’t write worth a damn.  His sentences pile up in a rush; his dialogue is choppy and wooden; his descriptions of place and character are flat.  And yet, by the end of the book, we know these characters and their fates in the most intimate way.  It’s an astonishing performance.  I can’t say that I have discovered the secret of Dreiser’s success, but my guess is that it lies in his selection of the details that he piles upon us.  Whatever the secret may be, his flaws as a writer did not deter me from admiring his depiction of two lost souls making their way, and losing their way, through an uncaring world.

3. Jane Eyre–One of my professors in college tried to talk me into liking Wuthering Heights instead, but I would have none of it then, and I’ll have none of it now.  If you prefer the tormented love of Cathy and Heathcliff, go right ahead, but they always struck me as the typical bickering couple trying to “win the argument” even if it means destroying each other.  The fight really worth having is the one Jane has with Rochester.  She battles him lovingly for the possession of her heart and mind, not so that she can “win the argument,” but so that she can live with him the only way she can: as his equal.  Both Wuthering Heights and Jane Eyre are regarded as feminist works.  Jane Eyre actually is one.

2. The Godfather–Francis Ford Coppola’s superb adaptations of Mario Puzo’s novel make us forget just how much of Puzo’s work the two of them left on the cutting room floor.  Gone are the inset stories of Lucy Mancini, the bridesmaid whom Sonny Corleone took as his lover at Connie’s wedding; and of Luca Brasi, who is not a slow-witted assassin but a man passionately in love with an Irish woman; and of Al Neri, who does not, in the book, merely appear as Michael Corleone’s bodyguard, but is recruited by and welcomed into the Family after being drummed out of the police for using excessive force on a suspect.  Tightening the story that way keeps our focus on Vito and Michael Corleone, but we lose some of the depth that makes the book so interesting.  The Godfather is the most vivid novel I’ve ever read.  I’ve read it only once, in 1975, as I recall, and I may never have to read it again.  It has done its work.  I see Vito Corleone in his study, a deeper and more subtle thinker than Brando’s character; I see Michael and Apollonia together, and wish that their love could have survived; I see Michael standing in his father’s study, his transformation complete, as the heads of the caporegimes gather around him, and the door closes, leaving Kay with only the choice to pray in church for his soul.  The Godfather succeeds in part because Puzo closes off the outside world the same way he closes that door.  We never see the honest cops doing battle with the Mafia.  We never see the hundreds and thousands of lives ruined by the gambling, prostitution, and drugs the Mafia offers us.  What we are given instead is a world unto itself, “a little world, made cunningly,” in Donne’s phrase.  It is a repellent world, but one that has its own code, its own morality.  We can’t help looking at it because it’s a world in which every detail is clear.  It’s a world we can see more clearly than this one.

1. The Lord of the Rings–When I was in high school, my friends came into history class every day talking about “Frodo and Sam.”  I had no idea who Frodo and Sam were.  Even when I pulled a copy of The Lord of the Rings from the shelf of my church’s library, I still despaired of finding out; the opening pages were just too slow.  Finally, a couple of months later, on a summer day, I made a third attempt.  This time, the chapter “A Short Cut to Mushrooms” drew me in, and I began to read.  As with The Godfather, Peter Jackson’s movie adaptations are superb.  But, also as with The Godfather, the movies can lead us away from the deeper, better source material of the original book.  Middle-earth is our Earth.  Tolkien began to reimagine it during and after World War I by drawing upon the fairy stories not only of England but of Europe as well.  His genius was not only linguistic.  It was also emotional and structural.  He had seen and felt the destruction of the world firsthand, and the violence of those Nordic and Germanic tales, but he was paralyzed by neither.  He also realized that the world of the tales he was writing in 1914 and the world of the hobbits were the same world, a world of long ago, a world that was fading.  The result of Tolkien’s recognition of what he was doing is sheer brilliance.  Even so sharp a mind as Edmund Wilson’s couldn’t grasp it in his 1956 review, “Oo, Those Awful Orcs.”  But the critical tide had turned by 1970, and we are the better for it.

Not until after I had read the books the first time did I realize how akin Tolkien’s novel is to Paradise Lost.  Tolkien’s work is, like Milton’s, “the story of all things.”  Tolkien’s tale, like Milton’s, is ultimately about the dominance of humanity in the world, and the elves, dwarves, and hobbits leaving their earthly existence behind.  Unless you read The Lord of the Rings, you won’t necessarily know this.  Yes, Legolas is a warrior in the movies; but Tolkien also makes him a poet of a fading, glorious past.  Galadriel is more than just a beautiful queen: she has slowed down time in order to perpetuate Lothlorien’s existence.  She knows her efforts won’t be enough.  In that brilliant scene wherein she resists the temptation to take the Ring from Frodo even as he asks her to take it, she does what the men around her cannot do.  But she knows that even if she and Elvenkind survive the onslaught of Sauron’s armies, the kingdom they have built will not last; it will be changed beyond recognition.  That is what war does.  It changes us and it changes the places where we live.  In fighting, we give up our future, as Frodo and the Elves do, so that others who come after us might have a future of their own.  The world in which those others will live is not quite the beautiful world we’ve fought for and lost, but those for whom we saved a future, like Sam Gamgee, Aragorn, Merry, and Pippin, will live in it lovingly with their wives and children, sadder but wiser than they were.


The Present in the Past

This, from Carl Sagan’s The Demon-Haunted World (1996 ed.), Chapter Two, “Science and Hope,” p. 50 in the Kindle text:

“Science is more than a body of knowledge; it is a way of thinking. I have a foreboding of an America in my children’s or grandchildren’s time–when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness.”

In fewer than one hundred words, Sagan looks ahead twenty-five years and sees, dimly yet unmistakably, as all prophets do, the present day:  the service and information economy that is now the lifeblood of the country; the outsourcing of much of our work; the concentration of our technology in the hands of Gates, Cook, Dorsey, and Zuckerberg; the civic illiteracy of both the public and our elected officials; the steep, horrifying decline in the quality of our educational system and the substitution of a public ethos that sways or yields at every moment to whatever group happens to feel aggrieved; and our daily reliance on televised opinion and gossip for news because the press is no longer reliable.

This is the country we are living in.

The problems Sagan saw cross every political divide and every socioeconomic group.  Everyone here has a stake in solving them.  We cannot solve them overnight, but we can begin to solve them by exercising our right to vote in November.  Part of the “civic illiteracy” to which I have referred is the dismaying unwillingness of much of our eligible electorate to get out and vote when the opportunity arises.  That reluctance has nothing to do with cynicism or fear of mail-in ballots; it is simple laziness.

The laziness has to end and, pandemic or not, it has to end this year.  The stakes are too high, the issues too important, for anything less than a 70% turnout of eligible voters.  If we reach that level, we will see a decisive victory for one party or the other, not only in the executive branch but in both houses of Congress.  We need such a decisive victory, because the United States cannot endure forever with the rough 50-50 split under which we presently live.  There will still be division and opposition in Congress no matter who wins the White House, but Congress is designed to deal with that division.

What America is not designed to deal with is the nearly even split we have between those who favor capitalism and conservative economics and those who wish to push the country leftward toward socialism.  The left wing of the Democratic Party has been pushing in that direction, of course, since the 1960s and before, but it has taken them this long merely to eliminate the centrist Democrats who had been holding them in check.  The Democrats, both in leadership and in the rank and file, are almost wholly socialist now, which makes the choice between them and the Republicans in a few months at least stark, if not easy.  At the moment, however, the divisions between us have allowed minority groups to have a much greater hand in setting social policy for the entire country.  I am not at all certain that this development is a good thing, given the social unrest we have seen over the spring and summer, but we have an opportunity to achieve genuine clarity in the fall.

I hope we take advantage of it.


Doing As The Romans Did

In my last post, I mentioned the book SPQR, Mary Beard’s survey of the early centuries of Rome’s development.  In chapter ten of that book, “Fourteen Emperors,” she discusses the assassination of Gaius Caligula in 41 CE and the hasty alteration of in-progress sculptures to favor the likeness of the new emperor, Claudius.  On pages 397-98 of the Kindle edition, she writes the following passage, describing, in its second paragraph, social violence we can recognize:

“Claudius may have had a better and far more bookish posthumous reputation than Gaius; for it was not so obviously in the interests of his adopted son and successor, Nero, to damn his memory.  But scratch the surface, and he too [i.e., Claudius] has a grim record of cruelty and criminality (35 senators, out of a total of about 600, and 300 equestrians put to death during his rule, according to one ancient tally), and he filled the same slot in the Roman power structure.

“That is one message of the recarving of the portraits of the old emperor. Economic good sense must in part have driven the clever alterations. Any sculptor who had nearly finished a head of Gaius’s in January 41 CE would not have wanted to see his time and money wasted with a useless portrait of a deposed ruler; far better to recast it quickly into the likeness of the new man on the throne. Some of the changes may also have been a form of symbolic elimination. Romans often tried to strike from the record those who had fallen from favor, demolishing their houses, pulling down their statues and erasing their names from public inscriptions (often with crude chisel marks, which serve mainly to draw attention to the names they wanted forgotten). But another underlying point, much like the message of Augustus and the ravens, is that emperors were more similar to one another than they were different, and it took only some superficial adjustments to turn one into the next. Assassinations were minor interruptions to the grander narrative of imperial rule.”


Also this week, in light of the ongoing protests, which have now challenged the legitimacy of even the Washington Monument and the Lincoln Memorial, I would like to draw your attention to an essay on Lincoln I wrote four years ago.  It is a long read, but I believe you will find it worth your time.



The Great Turning Away

David Kaiser explains why contemporary students have turned away from studying history, and why universities have largely stopped teaching survey courses in the subject.  All of the trends he discusses were outcomes of the sociopolitical movements throughout the Western world in the 1960s, and they were strikingly present during the years of my doctoral study at the University of Illinois at Urbana-Champaign from 1984 to 1991.

I suspect most readers will latch on to the statistics that Kaiser cites in his first paragraph about the startling drop in the number of history majors during the years of his career.  They are telling but, to me, a fact no less significant is this one:  as a result of the American Historical Association’s emphasis on gender issues in various forms and the political activities of the Left and the Right in the United States, we have, according to Kaiser, “practically no serious studies of US political and diplomatic history since 1980 or so today.”

Those forty years, from 1980 to 2020, unaccounted for and unstudied, amount to the development of the world we know:  the fall of the Soviet Union; the attempted unification of Europe; the development of machine culture for personal and political uses; the decline in the use of large military forces to achieve solutions to problems; the rise of terrorism as a means of compelling change.

The British historian J.M. Roberts had already noticed the movement away from broad studies of history toward smaller, more specialized works in the 1980 introduction to his one-volume Penguin History of the World.  He was jovial about the trend, saying that historians ought to be allowed to write on subjects that interested them.

There’s merit in Roberts’ view, of course, but the consequence of not teaching the survey courses and not writing the broader books–Jill Lepore’s These Truths on American history, say, or Mary Beard’s SPQR on Roman history–is that we have raised a generation of fully adult human beings who know nothing of the past out of which they’ve come.

They do not know, for instance, that the Soviet Union constructed the Berlin Wall not to defend against attacks from the West that never came, but to keep East Berliners from escaping the city.  In American race relations, many do not know about the Tulsa massacre of 1921, or the Watts riots of 1965, or the civil unrest in Philadelphia in the 1980s.  They may have heard about the riots that followed the Rodney King verdict in Los Angeles in 1992, and it is certainly possible that they have seen footage connected with Michael Brown’s death in Ferguson, Missouri, Freddie Gray’s death in Baltimore, and George Floyd’s death in Minneapolis.  Yet if students cannot fit those events into a larger historical context (and are never taught what that larger context is), the narrow, specialized polemics of contemporary historians in the classroom amount to little more than indoctrination into a belief system whose roots the students do not know, and the potential that the present moment of civil unrest has for moving society forward will be completely lost.

What has happened on the streets of Minneapolis and New York City and Los Angeles and Seattle–the deaths, the protests, the riots–is deeply tragic, all the more so because the loss of lives and property could have been prevented.  But the events are not new.  Those who know our history have seen them before, many times.  And, sad to say, the responses to George Floyd’s death aren’t exactly new, either.  We had dialogues among ourselves after the Watts riots, too, and calls to disband the police.  Federal funds poured into LA for months afterward, as they did in Missouri after Michael Brown’s death and in Baltimore after Freddie Gray’s death.  The occupiers of the Autonomous Zone in Seattle–though the area has since been renamed in an effort to find a unifying purpose for being there–have taken their playbook straight from the New Left’s occupation of college administrative offices all over the country during the Vietnam War era of the 1960s and ’70s.  Those sit-ins sometimes lasted for weeks.  This one may last for months.  The Seattle mayor, herself a product of the New Left, when asked how long she expects the occupiers to stay, responded that we may all see “another summer of love” (1967).  She was being ironic, but she wasn’t kidding.

We’ve had utopian communities by the dozen set up here and there in the United States since the nineteenth century.  All of them faded because they couldn’t support themselves, either materially or philosophically.  Enthusiasm for the ideals waned; few joined the cause after the first wave of excitement.  The same fate will probably befall the squatters on Capitol Hill.  But if it does not, if they are able to establish and sustain themselves as a separate entity, that development will not be new, either:  the Republic of Texas existed within the territory of today’s contiguous United States between 1836 and 1845; and the land of the Louisiana Purchase existed as the possession of Spain and France for 150 years before that.

The most interesting question to me is, where (and to whom) will the federal money go this time?  History is the record of the events and the artifacts which shape our lives.  American history shows that millions upon millions of dollars have been spent not only to arm police departments but also to rebuild neighborhoods and businesses after they’ve been torn down in civil unrest.  Yet, after all those millions spent, after a century of repeatedly repairing such damage whenever it occurs, poverty largely remains in many of those areas.  So does the distrust we have for each other.  Why is this so?  The answer likely does not involve only money.  If it did, we would have solved many of our problems years ago.  The answer, however complicated it may be, is more likely to be found in a broader, deeper understanding of our country’s history, an understanding that we have willfully shunned today, at the very moment we need it most.



Darkness Four Years On

Twitter is celebrating the fourth anniversary of Batman v Superman: Dawn of Justice today.  I will celebrate it, as well, by pointing you back to the review of the movie and its context which I wrote upon the movie’s release in 2016.

The only additions I would make to my original remarks are that the Extended Edition of Batman v Superman is a far better, more coherent film than the theatrical release, particularly in that it shows the depth of Lex Luthor’s plotting and evil; and that Gal Gadot’s Wonder Woman, introduced here and given her own film the following year, was and still is a complete delight to watch.


Charles Portis

Charles Portis, the author of the truly wonderful novel True Grit, has died at the age of eighty-six.  Twitter has a gallery of tributes, all of which are worth reading, and some of which will point you to his other fiction.

True Grit begat rare children: two superior movies, one in 1969 with John Wayne and Kim Darby, the other in 2010 with Jeff Bridges and Hailee Steinfeld.  The original source material, however, is unmatched, and will always stand on its own as an example of how prose can be tough, direct, and lyrical at the same time.


The Old Man and the Sky

No one is quite certain what Martin Scorsese meant when he said a couple of weeks back, referring to the experience of seeing a Marvel film, “It’s not cinema.”  That is because no one is quite certain what Scorsese’s definition of “cinema” is.  At first glance, it may have been easy to take his remark as simply a gratuitous swipe at the world’s most popular movies in advance of the release of his own latest film, The Irishman; and that’s the way I took it.  Robert Downey, Jr. handled Scorsese’s comment with grace on The Howard Stern Show, mostly by sidestepping Scorsese’s words entirely, and reminding all of us that the 76-year-old Queens native is, arguably, America’s greatest living film director.

On the other hand, Scorsese’s follow-up comments didn’t clarify his definition of cinema much at all, and the status most of us grant him as America’s finest living director may be challenged by some.  (Coppola–a friend of Scorsese’s–and Spielberg come to mind.)  In any case, only the most ardent Scorsese fan would be willing to let his remarks go by unquestioned, and thereby lose an opportunity to express an idea of what cinema may be.

To begin with, let me disagree with the notion that the Marvel films from 2008’s Iron Man through 2019’s Avengers: Endgame are not cinema.  They most assuredly are.  The films tell a story, a complicated one, over twenty-two separate installments, with hints of characters and events to come spread out all along the way.  Taken together, the Marvel films represent quite an accomplishment in writing, acting, and special effects, a far greater accomplishment than, say, the old Saturday-morning serials were, or even the Marvel films’ immediate ancestors, the Star Wars films.

If one objects by saying, “But the Marvel characters aren’t real,” I would agree; they’re not–at least not when they’re flying around or leaping around as superheroes.  But they’re only superheroes half the time.  The rest of the time, they’re fictional people, wrestling with the same problems we all do–illness, infirmity, anxiety, lost love, guilt, and the heaviest possible sense of responsibility toward the extraordinary powers they’ve been given or have created for themselves.  They deal with these problems within a moral framework clearly set forth for us in Captain America: Civil War.  Do men and women with such remarkable abilities have the inherent freedom to act unilaterally, or ought they to subject their talents to the service of the state?  This particular Marvel film is, in my view, one of the weaker ones of the series because it does not answer the question just posed, or even hint at a direction from which an answer might come; yet, it was not lost upon me as I watched it in the theater that Civil War frames for us the very real and very fierce struggle this country is now having between those who favor individual liberties and those who favor socialism, and the rule of the state.  If that is not a relevant topic for the cinema, I do not know what else would be.

Scorsese objects to the Marvel movies in part because they turn the theater into a kind of amusement park.  Perhaps the theaters in Queens are like that, but every Marvel film I’ve ever attended has been watched by both children and adults, all of whom have been well-behaved.  But if some theaters are amusement parks for the run of a Marvel film, what of it?  Scorsese knows that spectacle has been at the very heart of cinema since 1902’s A Trip to the Moon by Georges Méliès and the later silent films of Cecil B. DeMille, including that director’s first crack at The Ten Commandments (1923).  Indeed, I would claim that to experience spectacle is why any of us go to movies in the first place.  A film like Metropolis (1927) or The Seventh Seal (1957) may offer us something more–exploration of an idea, or a glimpse at how people in the past may have behaved–but the desire for spectacle is the reason we go.  Theaters have always been, in one way or another, amusement parks.

The notion of spectacle at the heart of cinema may be distressing to a man of Scorsese’s talent and aims, but it need not be.  By spectacle, I do not mean the ancient bread-and-circuses, keep-the-people-amused exhibitions of lions and slaves in the Colosseum which developed (and doubtless scarred) the Roman psyche for hundreds of years.  I mean, rather, spectacle as part of the larger human purpose of play.  Human beings, both children and adults, must play.  As the historians Johan Huizinga and Philippe Ariès have traced out the behavior for us, play is essential for creativity, and the tools of the filmmaker are the tools of play.  For a long time, humans in Western culture did not play.  Children were treated as little adults, and societies were the worse for it.  But we have allowed children to play for the last five hundred years or so, and the general result has been an explosion for the better in the expression of the human imagination, and the solving of problems.

I might, by the by, suggest that these last remarks constitute a response to the judgments of filmmaker Terry Gilliam, who objects to the modern tendency of movies to offer viewers solutions to our problems and comfort to our souls.  He would prefer that movies simply ask the best questions possible, and leave solutions and comfort out of it.  Based on my own experience, Gilliam’s objections are an academic’s response.  Academics prize above all the asking of good questions, because if one asks a good question, she’s partway to finding a good answer.  For various reasons, however, some having to do with the desire to maintain one’s employment, others having to do with the solutions to problems being difficult, the asking of questions has become an end in itself, to the great detriment of academia and Western society.  A question, properly formed, is a means to an end.  It is not higher or better than the end which is sought.  We may ask a question in wonder, of course; but we ask a question mostly to push ourselves toward a solution to our problems.  Had humans contented ourselves with just asking questions without applying solutions, we’d still be living the lives we had five centuries ago, and our children would still be little adults.

In a way, I’m deeply glad for the Marvel movies.  They have demonstrated, once and for all, that a genre film–or a set of them–can be hugely successful.  We forget, these days, just how difficult it was for the genre of science fiction to get a foothold within the popular imagination.  The 1950s are dotted with classics:  The Day the Earth Stood Still (1951); Forbidden Planet (1956); and The Incredible Shrinking Man (1957); but, even so, one still can’t help hearing Patricia Neal giggle all the way through production of The Day; and if a viewer winces at some of the dialogue in Forbidden Planet, who could blame him?  And who has not found it just a little hard to suspend his disbelief at the idea of a shrinking man?  Scorsese has built an entire career out of making genre films (with occasional brilliant forays outside it, such as The Last Temptation of Christ in 1988), so perhaps some of his discontent with the Marvel films has to do with a narrowing of what a genre film can be.  I hesitate to call any of Scorsese’s movies–Mean Streets, Taxi Driver, Goodfellas, The Departed–film noir.  They aren’t filmed that way, and an attempt to compare Scorsese’s films to the classics of film noir (like Out of the Past or Detour or Double Indemnity) simply wouldn’t work.  But the gangster film is a genre of its own, and Scorsese is a master of it.

I have to ask, though, are any of Scorsese’s movies “cinema” in the sense that he means it?  His movies deal with crime and punishment, retribution, and guilt, but I gotta tell you, I’m not thinking about any of those themes at the end of his movies the way I am thinking about the guilt Michael Corleone is feeling as he sits alone in the boathouse at the end of The Godfather Part II (1974).  That isolation, that despair, is not something any of us should envy or wish upon another, and it is a fitting punishment for Michael’s destruction of the five families and the murder of his own brother.  When I think of The Godfather Part II, I think of it as the last great film of the 1950s, the last of the film noir, and the last of the great studio films in the old Hollywood style.  By comparison, Scorsese’s films are slick, often gritty and involving, but ultimately trapped within their genre.  They are gangster films, but little more.  Watching them, I imbibe the sense of a bookish little boy from Queens getting his multi-million dollar revenge on all the tough guys who pushed him around in school.  He wanted to be like them, but couldn’t.  The best he could do was watch them, and mimic them.  To his credit, Scorsese is one of the best mimics around, but, sadly, to find a full and complete work of his art, I often have to go outside it, to a film that doesn’t explore the New York world he knew so well, to a film that doesn’t use the cinematic shorthand his genre audiences carry with them into the theater.  I have to go to The Last Temptation of Christ or The Age of Innocence (1993).

If by cinema Scorsese means, in part, a shared film experience, even he knows we’re far past that day, now.  For most us, watching a movie in a theater is a solitary experience, even if we are with someone, or with a group of friends.  The last cinematic experience I had in Scorsese’s sense was watching American Hustle (2013) with a large group of strangers at a cineplex.  None of us knew what to expect; the film had barely been advertised in the paper or on TV.  But as we watched, every single one of us was delighted by the blend of comedy, drama, farce, satire, and philosophy we were watching.  The Abscam scandal happened during my high school years in the 1970s, so I was familiar with the actual events when they occurred, and the irony of using con artists to catch con artists was not lost on me, even back then.  But to have it brought back so forcefully, with such sympathy for the principals, despite their folly and pain, was quite an experience.  At the end of the film, an amazing thing happened:  we stayed in our seats and talked to each other about what we had seen.  Some of us were deeply impressed by the script; others by the period accuracy of the costuming and scenery; still others mentioned the ethical dilemmas the whole affair raised.  We left, having resolved nothing, but we knew we’d been supremely entertained and even enlightened about how resilient human beings can be even under the most trying of circumstances.

That kind of experience is rare, and it will grow rarer still as video streaming continues to increase in both quality and its number of subscribers.  Some of the best movies I’ve ever seen–Ran (1985), The Seventh Seal (1957), Rust and Bone (2012), Dark City (1998), and Sunrise (1927)–I saw alone, and at home, and I am the better for it.  There is no substitute for the training of one’s own mind upon a film without the interference of others or the distractions of a strange environment.  We find the communal experience helpful at times, but if it is dying away for all but the films of mass popularity, that does not mean we are witnessing the death of film itself.

If Scorsese means by cinema some kind of shared communal experience, we may be losing that, and we may be losing the authority of the director as part of that experience as well.  The age of the director began in the 1960s in Europe, and it has lasted until the present day.  If this age is fading, it wouldn’t be the first time power has shifted in cinema or in Hollywood.  By 1950, DeMille, who had ruled Hollywood spectacle for forty years, had become an old, cantankerous man.  One of the young lions of the Directors Guild finally stood up to him during a meeting that year:  “Mr. DeMille,” he said, “you’re a great director and you’ve made some great films, but we don’t like your politics.”  Such sentiments signaled a long, leftward shift in the politics of movie making, but nothing lasts forever.

If Scorsese is lamenting, as I think he is, the loss of films that explore how we live and how we should live, then Hollywood has only itself to blame.  The shared crucible of World War II gave a lot of writers and directors and actors the moral strength to write a lot of socially conscious and morally persuasive films like The Best Years of Our Lives (1946), Gentleman’s Agreement (1947), Crossfire (1947), and The Blackboard Jungle (1955).  But that same Hollywood was, at the same time, covering up crime, drug addiction, and even the sexual orientation of some of its major stars, and had been doing it for years.  Today, we’ve witnessed the spectacle of Hollywood turning on itself, as it did in the McCarthy Era, its members accusing each other of crimes–sometimes justly, sometimes not.  While absolutely no one can claim personal perfection, such activity has to be eroding the moral base upon which every piece of art has to stand.  I wonder if, in flocking to see the Marvel films in such numbers and letting those costumed, fully-fictional heroes elaborate the arguments of our time for us, we have not validated the message that the Age of the Moral Film is now over, as well.  Is it not possible that, without a shared moral base, Hollywood has lost the moral authority it takes to tell the serious cinematic tale Scorsese would like to see?  The last such film I saw was Three Billboards Outside Ebbing, Missouri (2017), a good show, with complex characters, but even so fine an actress as Frances McDormand couldn’t quite make me believe that firebombing a police station was fully justified by the rape and murder of her daughter, or that going after a man who did not do that crime would somehow ease her pain.  As problematic as the film is (and it intends to raise problems for its audience), it, too, is a rare film these days, and it might become even more so.

Perhaps Scorsese is just a cranky old man for making all of us think of these things.  Or perhaps he truly does think valuable elements of the movie-going experience are being lost and he wanted to warn us.  Either way, an old man’s gotta be an old man; the sea’s gotta be the sea; the sky is bright enough some days, ya gotta shake your fist at it.  We do lose things of value now and again, without ever realizing we lost them.  We do lose our moral compass from time to time, even the best of us.  To cry out against the vulgarity–that is, the commonness–of the age in which one lives is something we’ve been doing for a very long time.  The cry is not always evidence of a sour spirit, but is instead a shout in the direction where something better resides, or used to.  That’s a valuable service for anyone of any age to perform for us.  The reminder of the glory of what was beckons us to see it again, and it is often the first step in creating what is new and fresh and vital to us now.


In A Different Voice

In my last post, I mentioned the versatility of Tom Hanks.  It’s worth pointing out that that versatility is often subtly expressed, and might be missed even by those who are looking for it.  A case in point is Hanks’ performance as Capt. John Miller in Saving Private Ryan.  Among the millions who have seen that film, there’s a large subset of thousands of viewers who cannot get past the accurate but appalling depiction of the D-Day landing.

Within those opening scenes, Capt. Miller’s commands to his men before the landing and after are expressed in crisp, direct language.  Even amid the whizzing of bullets, the spray of blood, the boom of heavy guns, and the screams of the dying, his orders are impossible to misunderstand.

It was not always so.  In an early draft of the screenplay, writer Robert Rodat made Capt. Miller more chummy and familiar with his men than in the final version of the story we see onscreen.  Later drafts pare away the briefly-lighthearted conversations Miller has with them on the Higgins boat and on the beach, and leave us with the focused, clear-headed captain who never forgets the objective of the D-Day landing.

Miller is so focused, in fact, that his men think he’s a machine, assembled out of various body parts at Officer Candidate School.  Some of us might be inclined to agree with them, if we remember the exchange Miller has with the Colonel (Dennis Farina) on Omaha Beach three days after the landing.  Again, in the early draft, the language is not what we expect.  Miller is simply asked to “Report.”  He replies that sector four is now secure, but with casualties, courtesy of the German Wehrmacht.  “They just didn’t want to give up those one-fifty-fives, sir.”  Miller’s words here are changed in the final draft to “eighty-eights,” giving us a punchier two-syllable summation of why the Germans died.

But Rodat’s final draft of Miller’s report goes further.  “We took out gun emplacements here and here and here,” Miller says, pointing to the Colonel’s map, “but the whole area turned into a mixed, high-density field–mines all over the place, including small ones our detectors can’t pick up–”  and suddenly, Miller appears to us in a different form, speaking in a different voice.  He is, with the Colonel, not just a captain, but a battle analyst and a tactician.  He’s the same man–the one whose hand shakes with fear–but we see and hear that Miller’s men have an absolutely correct intuition about him:  there’s more to him–far more–than appears on the surface.  In a different element, a different set of circumstances, any of us might behave far differently, speak in a different tongue, live differently, from the way we do now.

What that scene teaches us as writers is that it is necessary to allow the characters we create to speak in a different voice and act in a different way when the situation calls for such change.  We all want to create consistent characters–people who sound like themselves from one speech to the next, and we wince when they don’t.  That’s what Mark Twain was complaining about in “Fenimore Cooper’s Literary Offenses,” and his complaints against Cooper (a noteworthy novelist) have some merit, but we all speak in different voices each day, and the characters in our writing should, too.

Something else is revealed in the early draft of Saving Private Ryan that’s worth mentioning.  Capt. Miller is, at the time of his conversation with the colonel, already a Medal of Honor recipient.  He uses that status to question the orders he’s just been given to find James Ryan and bring him home:

“Respectfully, sir, sending men all the way up to Ramelle to save one private doesn’t make a fucking, goddamned bit of sense.”

We don’t know what Miller had done before in combat to merit a Medal of Honor, but Rodat crucially and wisely drops this exchange and all mention of Miller’s prior exploits, so that the mission he’s just been given will take center stage.  In the final draft, Miller, as we know, does object to the mission, but he couches his objections in ironic language as he walks with his men through the fields of France in the rain:  “I’d say, ‘Why, yes, sir, that’s a fine use of resources, and I’m sure saving Private James Ryan is an objective of great military importance.’”  They all smile grimly at Miller’s meaning.  By cutting the early objection and bringing it up subtly here, Rodat has cleared the way for the major theme that dominates the last half of the picture:  the theme that may be expressed by the words, “Earn this.”  By deciding to stay on the bridge with Ryan and his fellow soldiers, Miller and his men hope to earn the right to go back home.  Those words–“Earn this”–build up in scene after scene, and by the time the dying Miller whispers them in Ryan’s ear, they carry a lifetime responsibility, a terrible weight, that none of us could bear, whether we were actually there on that bridge in 1944, or simply watching the scene being played out in a movie theater in 1998.

Yet, how could any of us, watching Miller’s final moments, do anything other than try to carry out Miller’s last command?  I have no idea what Miller did before D-Day to earn the highest award our nation can bestow upon a soldier.  But because of Rodat’s brilliant decision, I know beyond all argument that what he did at Ramelle with his men merits that award, and that all of us have to, in some way, earn the lives we’ve been given because of their sacrifice and the sacrifice of the thousands of real soldiers just like them.