Doing As The Romans Did

In my last post, I mentioned the book SPQR, Mary Beard’s survey of the early centuries of Rome’s development.  In chapter ten of that book, “Fourteen Emperors,” she discusses the assassination of Gaius Caligula in 41 CE and the hasty alteration of sculptures then in progress, quickly recut to favor the likeness of the new emperor, Claudius.  On pages 397-98 of the Kindle edition, she writes the following passage, describing, in her second paragraph, social violence we can recognize:

“Claudius may have had a better and far more bookish posthumous reputation than Gaius; for it was not so obviously in the interests of his adopted son and successor, Nero, to damn his memory.  But scratch the surface, and he too [i.e., Claudius] has a grim record of cruelty and criminality (35 senators, out of a total of about 600, and 300 equestrians put to death during his rule, according to one ancient tally), and he filled the same slot in the Roman power structure.

“That is one message of the recarving of the portraits of the old emperor. Economic good sense must in part have driven the clever alterations. Any sculptor who had nearly finished a head of Gaius’s in January 41 CE would not have wanted to see his time and money wasted with a useless portrait of a deposed ruler; far better to recast it quickly into the likeness of the new man on the throne. Some of the changes may also have been a form of symbolic elimination. Romans often tried to strike from the record those who had fallen from favor, demolishing their houses, pulling down their statues and erasing their names from public inscriptions (often with crude chisel marks, which serve mainly to draw attention to the names they wanted forgotten). But another underlying point, much like the message of Augustus and the ravens, is that emperors were more similar to one another than they were different, and it took only some superficial adjustments to turn one into the next. Assassinations were minor interruptions to the grander narrative of imperial rule.”


Also this week, in light of the ongoing protests, which have now challenged the legitimacy of even the Washington Monument and the Lincoln Memorial, I would like to draw your attention to an essay on Lincoln I wrote four years ago.  It is a long read, but I believe you will find it worth your time.



The Great Turning Away

David Kaiser explains why contemporary students have turned away from studying history, and why universities have largely stopped teaching survey courses in the subject.  All of the trends he discusses were outcomes of the sociopolitical movements throughout the Western world in the 1960s, and they were strikingly present during the years of my doctoral study at the University of Illinois at Urbana-Champaign from 1984 to 1991.

I suspect most readers will latch on to the statistics that Kaiser cites in his first paragraph about the startling drop in the number of history majors during the years of his career.  They are telling, but to me a no less significant fact is this:  as a result of the American Historical Association’s emphasis on gender issues in various forms and the political activities of the Left and the Right in the United States, we have, according to Kaiser, “practically no serious studies of US political and diplomatic history since 1980 or so today.”

Those forty years, from 1980 to 2020, unaccounted for and unstudied, amount to the development of the world we know:  the fall of the Soviet Union; the attempted unification of Europe; the development of machine culture for personal and political uses; the decline in the use of large military forces to achieve solutions to problems; the rise of terrorism as a means of compelling change.

The British historian J.M. Roberts had already noticed the movement away from broad studies of history toward smaller, more specialized works in the 1980 introduction to his one-volume Penguin History of the World.  He was jovial about the trend, saying that historians ought to be allowed to write on subjects that interested them.

There’s merit in Roberts’ view, of course, but the consequence of not teaching the survey courses, and not writing the broader books, such as Jill Lepore’s These Truths, about American history, or Mary Beard’s SPQR, about Roman history, is that we have raised a generation of fully adult human beings who know nothing of the past out of which they’ve come.

They do not know, for instance, that the Soviet Union constructed the Berlin Wall not to defend against attacks from the West that never came, but to keep East Berliners from escaping the city.  In American race relations, many do not know about the Tulsa massacre of 1921, or the Watts riots of 1965, or the civil unrest in Philadelphia in the 1980s.  They may have heard about the riots that followed the 1992 acquittal of the Los Angeles police officers who beat Rodney King, and it is certainly possible that they have seen footage of Michael Brown’s death in Ferguson, Missouri, Freddie Gray’s death in Baltimore, and George Floyd’s death in Minneapolis.  Yet without being able to fit those events into a larger historical context (and being taught what that larger historical context is), the narrow, specialized polemics of contemporary historians in the classroom amount to little more than the indoctrination of students into a belief system whose roots they do not know, and the potential that the present moment of civil unrest has for moving society forward will be completely lost.

What has happened on the streets of Minneapolis and New York City and Los Angeles and Seattle–the deaths, the protests, the riots–is deeply tragic, all the more so because the loss of lives and property could have been prevented.  But the events are not new.  Those who know our history have seen them before, many times.  And, sad to say, the responses to George Floyd’s death aren’t exactly new, either.  We had dialogues among ourselves after the Watts riots, too, and calls to disband the police.  Federal funds poured into LA for months afterward back then, as they did in Missouri after Michael Brown’s death and in Baltimore after Freddie Gray’s death.  The occupiers of the Autonomous Zone in Seattle, although that area has since been renamed in an effort to find a unifying purpose for being there, have nonetheless taken their playbook straight from the New Left’s occupation of college administrative offices all over the country during the Vietnam War era of the 1960s and ’70s.  Those sit-ins sometimes lasted for weeks.  This one may last for months.  When asked how long she expects the occupiers to stay, the Seattle mayor, herself a product of the New Left, responded that we may all see “another summer of love” (1967).  She was being ironic, but she wasn’t kidding.

We’ve had utopian communities by the dozen set up here and there in the United States since the nineteenth century.  All of them faded because they couldn’t support themselves, either materially or philosophically.  Enthusiasm for the ideals waned; few joined the cause after the first wave of excitement.  The same fate will probably befall the squatters on Capitol Hill.  But if it does not, if they are able to establish and sustain themselves as a separate entity, that development will not be new, either:  the Republic of Texas existed within what is now the contiguous United States between 1836 and 1845; and the territory of the Louisiana Purchase existed as the possession of Spain and France for more than a century before that.

The most interesting question to me is, where (and to whom) will the federal money go this time?  History is the record of the events and the artifacts which shape our lives.  American history shows that millions upon millions of dollars have been spent not only to arm police departments but also to rebuild neighborhoods and businesses after they’ve been torn down in civil unrest.  Yet, after all those millions spent, after a century of repeatedly repairing such damage whenever it occurs, poverty largely remains in many of those areas.  So does the distrust we have for each other.  Why is this so?  The answer likely does not involve only money.  If it did, we would have solved many of our problems years ago.  The answer, however complicated it may be, is more likely to be found in a broader, deeper understanding of our country’s history, an understanding that we have willfully shunned today, at the very moment we need it most.



Darkness Four Years On

Twitter is celebrating the fourth anniversary of Batman v Superman: Dawn of Justice today.  I will celebrate it, as well, by pointing you back to the review of the movie and its context which I wrote upon the movie’s release in 2016.

The only additions I would make to my original remarks are that the Ultimate Edition of Batman v Superman is a far better, more coherent film than the theatrical release, particularly in that it shows the depth of Lex Luthor’s plotting and evil; and that Gal Gadot’s Wonder Woman, introduced here and given her own film the following year, was and still is a complete delight to watch.


Charles Portis

Charles Portis, the author of the truly wonderful novel True Grit, has died at the age of eighty-six.  Twitter has a gallery of tributes, all of which are worth reading, and some of which will point you to his other fiction.

True Grit begat rare children: two superior movies, one in 1969 with John Wayne and Kim Darby, the other in 2010 with Jeff Bridges and Hailee Steinfeld.  The original source material, however, is unmatched, and will always stand on its own as an example of how prose can be tough, direct, and lyrical at the same time.


The Old Man and the Sky

No one is quite certain what Martin Scorsese meant when he said a couple of weeks back, referring to the experience of seeing a Marvel film, “It’s not cinema.”  That is because no one is quite certain what Scorsese’s definition of “cinema” is.  At first glance, it may have been easy to take his remark as simply a gratuitous swipe at the world’s most popular movies in advance of the release of his own latest film, The Irishman; and that’s the way I took it.  Robert Downey, Jr. handled Scorsese’s comment with grace on The Howard Stern Show, mostly by sidestepping Scorsese’s words entirely, and reminding all of us that the 76-year-old Queens native is, arguably, America’s greatest living film director.

On the other hand, Scorsese’s follow-up comments didn’t clarify his definition of cinema much at all, and the status most of us grant him as America’s finest living director may be challenged by some.  (Coppola–a friend of Scorsese’s–and Spielberg come to mind.)  In any case, only the most ardent Scorsese fan would be willing to let his remarks go by unquestioned, and thereby lose an opportunity to express an idea of what cinema may be.

To begin with, let me disagree with the notion that the Marvel films from 2008’s Iron Man through 2019’s Avengers: Endgame are not cinema.  They most assuredly are.  The films tell a story, a complicated one, over twenty-two separate installments, with hints of characters and events to come spread out all along the way.  Taken together, the Marvel films represent quite an accomplishment in writing, acting, and special effects, a far greater accomplishment than, say, the old Saturday-morning serials were, or even the Marvel films’ immediate ancestors, the Star Wars films.

If one objects by saying, “But the Marvel characters aren’t real,” I would agree; they’re not–at least not when they’re flying around or leaping around as superheroes.  But they’re only superheroes half the time.  The rest of the time, they’re fictional people, wrestling with the same problems we all do–illness, infirmity, anxiety,  lost love, guilt, and the heaviest possible sense of responsibility toward the extraordinary powers they’ve been given or have created for themselves.  They deal with these problems within a moral framework clearly set forth for us in Captain America: Civil War.  Do men and women with such remarkable abilities have the inherent freedom to act unilaterally, or ought they subject their talents to the service of the state?  This particular Marvel film is, in my view, one of the weaker ones of the series because it does not answer the question just posed, or even hint at a direction from which an answer might come; yet, it was not lost upon me as I watched it in the theater that Civil War frames for us the very real and very fierce struggle this country is now having between those who favor individual liberties and those who favor socialism, and the rule of the state.  If that is not a relevant topic for the cinema, I do not know what else would be.

Scorsese objects to the Marvel movies in part because they turn the theater into a kind of amusement park.  Perhaps the theaters in Queens are like that, but every Marvel film I’ve ever attended has been watched by both children and adults, all of whom have been well-behaved.  But if some theaters are amusement parks for the run of a Marvel film, what of it?  Scorsese knows that spectacle has been at the very heart of cinema since 1902’s A Trip to the Moon by Georges Méliès and the later silent films of Cecil B. DeMille, including that director’s first crack at The Ten Commandments (1923).  Indeed, I would claim that to experience spectacle is why any of us go to movies in the first place.  A film like Metropolis (1927) or The Seventh Seal (1957) may offer us something more–exploration of an idea, or a glimpse at how people in the past may have behaved–but the desire for spectacle is the reason we go.  Theaters have always been, in one way or another, amusement parks.

The notion of spectacle at the heart of cinema may be distressing to a man of Scorsese’s talent and aims, but it need not be.  By spectacle, I do not mean the ancient bread-and-circuses, keep-the-people-amused exhibitions of lions and slaves in the Colosseum which developed (and doubtless scarred) the Roman psyche for hundreds of years.  I mean, rather, spectacle as part of the larger human purpose of play.  Human beings, both children and adults, must play.  As the historians Johan Huizinga and Philippe Ariès have traced out the behavior for us, play is essential for creativity, and the tools of the filmmaker are the tools of play.  For a long time, humans in Western culture did not play.  Children were treated as little adults, and societies were the worse for it.  But we have allowed children to play for the last five hundred years or so, and the general result has been an explosion for the better in the expression of the human imagination, and the solving of problems.

I might, by the by, suggest that these last remarks constitute a response to the judgments of filmmaker Terry Gilliam, who objects to the modern tendency of movies to offer viewers solutions to our problems and comfort to our souls.  He would prefer that movies simply ask the best questions possible, and leave solutions and comfort out of it.  Based on my own experience, Gilliam’s objections are an academic’s response.  Academics prize above all the asking of good questions, because if one asks a good question, she’s partway to finding a good answer.  For various reasons, however, some having to do with the desire to maintain one’s employment, others having to do with the solutions to problems being difficult, the asking of questions has become an end in itself, to the great detriment of academia and Western society.  A question, properly formed, is a means to an end.  It is not higher or better than the end which is sought.  We may ask a question in wonder, of course; but we ask a question mostly to push ourselves toward a solution to our problems.  Had we contented ourselves with just asking questions without applying solutions, we’d still be living the lives we had five centuries ago, and our children would still be little adults.

In a way, I’m deeply glad for the Marvel movies.  They have demonstrated, once and for all, that a genre film–or a set of them–can be hugely successful.  We forget, these days, just how difficult it was for the genre of science fiction to get a foothold within the popular imagination.  The 1950s are dotted with classics:  The Day the Earth Stood Still (1951); Forbidden Planet (1956); and The Incredible Shrinking Man (1957); but, even so, one still can’t help hearing Patricia Neal giggle all the way through production of The Day; and if a viewer winces at some of the dialogue in Forbidden Planet, who could blame him?  And who has not found it just a little hard to suspend his disbelief at the idea of a shrinking man?  Scorsese has built an entire career out of making genre films (with occasional brilliant forays outside the form, such as The Last Temptation of Christ in 1988), so perhaps some of his discontent with the Marvel films has to do with a narrowing of what a genre film can be.  I hesitate to call any of Scorsese’s movies–Mean Streets, Taxi Driver, Goodfellas, The Departed–film noir.  They aren’t filmed that way, and an attempt to compare Scorsese’s films to the classics of film noir (like Out of the Past or Detour or Double Indemnity) simply wouldn’t work.  But the gangster film is a genre of its own, and Scorsese is a master of it.

I have to ask, though, are any of Scorsese’s movies “cinema” in the sense that he means it?  His movies deal with crime and punishment, retribution, and guilt, but I gotta tell you, I’m not thinking about any of those themes at the end of his movies the way I am thinking about the guilt Michael Corleone is feeling as he sits alone in the boathouse at the end of The Godfather Part II (1974).  That isolation, that despair, is not something any of us should envy or wish upon another, and it is a fitting punishment for Michael’s destruction of the five families and the murder of his own brother.  When I think of The Godfather Part II, I think of it as the last great film of the 1950s, the last of the film noir, and the last of the great studio films in the old Hollywood style.  By comparison, Scorsese’s films are slick, often gritty and involving, but ultimately trapped within their genre.  They are gangster films, but little more.  Watching them, I imbibe the sense of a bookish little boy from Queens getting his multi-million dollar revenge on all the tough guys who pushed him around in school.  He wanted to be like them, but couldn’t.  The best he could do was watch them, and mimic them.  To his credit, Scorsese is one of the best mimics around, but, sadly, to find a full and complete work of his art, I often have to go outside it, to a film that doesn’t explore the New York world he knew so well, to a film that doesn’t use the cinematic shorthand his genre audiences carry with them into the theater.  I have to go to The Last Temptation of Christ or The Age of Innocence (1993).

If by cinema Scorsese means, in part, a shared film experience, even he knows we’re far past that day, now.  For most us, watching a movie in a theater is a solitary experience, even if we are with someone, or with a group of friends.  The last cinematic experience I had in Scorsese’s sense was watching American Hustle (2013) with a large group of strangers at a cineplex.  None of us knew what to expect; the film had barely been advertised in the paper or on TV.  But as we watched, every single one of us was delighted by the blend of comedy, drama, farce, satire, and philosophy we were watching.  The Abscam scandal happened during my high school years in the 1970s, so I was familiar with the actual events when they occurred, and the irony of using con artists to catch con artists was not lost on me, even back then.  But to have it brought back so forcefully, with such sympathy for the principals, despite their folly and pain, was quite an experience.  At the end of the film, an amazing thing happened:  we stayed in our seats and talked to each other about what we had seen.  Some of us were deeply impressed by the script; others by the period accuracy of the costuming and scenery; still others mentioned the ethical dilemmas the whole affair raised.  We left, having resolved nothing, but we knew we’d been supremely entertained and even enlightened about how resilient human beings can be even under the most trying of circumstances.

That kind of experience is rare, and it will grow rarer still as video streaming continues to increase in both quality and its number of subscribers.  Some of the best movies I’ve ever seen–Ran (1985), The Seventh Seal (1957), Rust and Bone (2012), Dark City (1998), and Sunrise (1927)–I saw alone, and at home, and I am the better for it.  There is no substitute for the training of one’s own mind upon a film without the interference of others or the distractions of a strange environment.  We find the communal experience helpful at times, but if it is being displaced by films with mass popularity, that does not mean we are witnessing the death of film itself.

If Scorsese means by cinema some kind of shared communal experience, we may be losing that, and we may be losing as well the authority of the director as part of that experience, too.  The age of the director began in the 1960s in Europe, and it has lasted until the present day.  If this age is fading, it wouldn’t be the first time power has shifted in cinema or in Hollywood.  By the time the late 1950s rolled around, DeMille, who had ruled Hollywood spectacle for forty years, had become an old, cantankerous man.  One of the young lions of the Directors Guild finally stood up to him during a meeting one day:  “Mr. DeMille,” he said, “you’re a great director and you’ve made some great films, but we don’t like your politics.”  Such sentiments signaled a long, leftward shift in the politics of movie making, but nothing lasts forever.

If Scorsese is lamenting, as I think he is, the loss of films that explore how we live and how we should live, then Hollywood has only itself to blame.  The shared crucible of World War II gave a lot of writers and directors and actors the moral strength to write a lot of socially conscious and morally persuasive films like The Best Years of Our Lives (1946), Gentleman’s Agreement (1947), Crossfire (1947), and Blackboard Jungle (1955).  But that same Hollywood was, at the same time, covering up crime, drug addiction, and even the sexual orientation of some of its major stars, and had been doing it for years.  Today, we’ve witnessed the spectacle of Hollywood turning on itself, as it did in the McCarthy Era, its members accusing one another of crimes–sometimes justly, sometimes not.  While absolutely no one can claim personal perfection, such activity has to be eroding the moral base upon which every piece of art has to stand.  I wonder if, in flocking to see the Marvel films in such numbers and letting those costumed, fully-fictional heroes elaborate the arguments of our time for us, we have not validated the message that the Age of the Moral Film is now over, as well?  Is it not possible that, without a shared moral base, Hollywood has lost the moral authority it takes to tell the serious cinematic tale Scorsese would like to see?  The last such film I saw was Three Billboards Outside Ebbing, Missouri (2017), a good show, with complex characters, but even so fine an actress as Frances McDormand couldn’t quite make me believe that firebombing a police station was fully justified by the rape and murder of her daughter, or that going after a man who did not do that crime would somehow ease her pain.  As problematic as the film is (and it intends to raise problems for its audience), it, too, is a rare film these days, and it might become even more so.

Perhaps Scorsese is just a cranky old man for making all of us think of these things.  Or perhaps he truly does think valuable elements of the movie-going experience are being lost and he wanted to warn us.  Either way, an old man’s gotta be an old man; the sea’s gotta be the sea; the sky is bright enough some days, ya gotta shake your fist at it.  We do lose things of value now and again, without ever realizing we lost them.  We do lose our moral compass from time to time, even the best of us.  To cry out against the vulgarity–that is, the commonness–of the age in which one lives is something we’ve been doing for a very long time.  The cry is not always evidence of a sour spirit, but is instead a shout in the direction where something better resides, or used to.  That’s a valuable service for anyone of any age to perform for us.  The reminder of the glory of what was beckons us to see it again, and it is often the first step in creating what is new and fresh and vital to us now.


In A Different Voice

In my last post, I mentioned the versatility of Tom Hanks.  It’s worth pointing out that that versatility is often subtly expressed, and might be missed even by those who are looking for it.  A case in point is Hanks’ performance as Capt. John Miller in Saving Private Ryan.  Among the millions who have seen that film, there’s a large subset of thousands of viewers who cannot get past the accurate but appalling depiction of the D-Day landing.

Within those opening scenes, Capt. Miller’s commands to his men before the landing and after are expressed in crisp, direct language.  Even amid the whizzing of bullets, the spray of blood, the boom of heavy guns, and the screams of the dying, his orders are impossible to misunderstand.

It was not always so.  In an early draft of the screenplay, writer Robert Rodat made Capt. Miller more chummy and familiar with his men than in the final version of the story we see onscreen.  Later drafts pare away the briefly-lighthearted conversations Miller has with them on the Higgins boat and on the beach, and leave us with the focused, clear-headed captain who never forgets the objective of the D-Day landing.

Miller is so focused, in fact, that his men think he’s a machine, assembled out of various body parts at Officer Candidate School.  Some of us might be inclined to agree with them, if we remember the exchange Miller has with the Colonel (Dennis Farina) on Omaha Beach three days after the landing.  Again, in the early draft, the language is not what we expect.  Miller is simply asked to “Report.”  He replies that sector four is now secure, but with casualties, courtesy of the German Wehrmacht.  “They just didn’t want to give up those one-fifty-fives, sir.”  Miller’s words here are changed in the final draft to “eighty-eights,” giving us a punchier summation of why the Germans died.

But Rodat’s final draft of Miller’s report goes further.  “We took out gun emplacements here and here and here,” Miller says, pointing to the Colonel’s map, “but the whole area turned into a mixed, high-density field–mines all over the place, including small ones our detectors can’t pick up–”  and suddenly, Miller appears to us in a different form, speaking in a different voice.  He is, with the Colonel, not just a captain, but a battle analyst and a tactician.  He’s the same man–the one whose hand shakes with fear–but we see and hear that Miller’s men have an absolutely correct intuition about him:  there’s more to him–far more–than appears on the surface.  In a different element, a different set of circumstances, any of us might behave far differently, speak in a different tongue, live differently, from the way we do now.

What that scene teaches us as writers is that it is necessary to allow the characters we create to speak in a different voice and act in a different way when the situation calls for such change.  We all want to create consistent characters–people who sound like themselves from one speech to the next, and we wince when they don’t.  That’s what Mark Twain was complaining about in “Fenimore Cooper’s Literary Offenses,” and his complaints against Cooper (a noteworthy novelist) have some merit, but we all speak in different voices each day, and the characters in our writing should, too.

Something else is revealed in the early draft of Saving Private Ryan that’s worth mentioning.  Capt. Miller is, at the time of his conversation with the colonel, already a Medal of Honor recipient.  He uses that status to question the orders he’s just been given to find James Ryan and bring him home:

“Respectfully, sir, sending men all the way up to Ramelle to save one private doesn’t make a fucking, goddamned bit of sense.”

We don’t know what Miller had done before in combat to merit a Medal of Honor, but Rodat crucially and wisely drops this exchange and all mention of Miller’s prior exploits, so that the mission he’s just been given will take center stage.  In the final draft, Miller, as we know, does object to the mission, but he couches his objections in ironic language as he walks with his men through the fields of France in the rain:  “I’d say, ‘Why, yes, sir, that’s a fine use of resources, and I’m sure saving Private James Ryan is an objective of great military importance.’”  They all smile grimly at Miller’s meaning.  By cutting the early objection and bringing it up subtly here, Rodat has cleared the way for the major theme that dominates the last half of the picture:  the theme that may be expressed by the words, “Earn this.”  By deciding to stay on the bridge with Ryan and his fellow soldiers, Miller and his men hope to earn the right to go back home.  Those words–“Earn this”–build up in scene after scene, and by the time the dying Miller whispers them in Ryan’s ear, they carry a lifetime responsibility, a terrible weight, that none of us could bear, whether we were actually there on that bridge in 1944, or simply watching the scene being played out in a movie theater in 1998.

Yet, how could any of us, watching Miller’s final moments, do anything other than try to carry out Miller’s last command?  I have no idea what Miller did before D-Day to earn the highest award our nation can bestow upon a soldier.  But because of Rodat’s brilliant decision, I know beyond all argument that what he did at Ramelle with his men merits that award, and that all of us have to, in some way, earn the lives we’ve been given because of their sacrifice and the sacrifice of the thousands of real soldiers just like them.


A Christmas Present From Tom Hanks

The Internet Movie Database reports that Tom Hanks’s adaptation of Paulette Jiles’ novel News of the World will be released on December 25, 2020.  My review of Jiles’ novel is here.  It was a fine book, stirring echoes within me of Charles Portis’s masterpiece True Grit and the movies that came from it: True Grit (the original and the remake) and Rooster Cogburn.

Hanks is a fine actor, capable of portraying many character types; yet, Jefferson Kyle Kidd will prove a challenge for even his formidable skills.  Kidd approaches Cogburn’s roughness, yet he also possesses a quiet bookishness that will startle those who are unprepared for it.  Given what I only recently learned about Kris Kristofferson’s career (Oxford, Rhodes Scholarship, love of Shakespeare, William Blake, and the English Romantic poets, Army service), though, I am quite prepared to be startled again.  Many, many people live remarkable lives, the depth and breadth of which we have no clue.  That was true in the 1840s, and it is true now.

Time slips away from us all.  Hanks is sixty-three now.  Kidd is seventy-one in the book–seventy-one in the nineteenth century, old indeed.  It will be interesting to see how old Hanks allows himself to be onscreen, how old he is willing to let Kidd be.

Read my review for an idea of how big Jiles’ slender book really is, and read the novel itself for its well-crafted prose.  Then, we shall see on Christmas Day next year if Hanks’s latest project can help us see or feel things we had not sensed before.


Toni Morrison

I note with sadness the startling news that Toni Morrison, author of Sula, Song of Solomon, and Beloved, died last night at the age of eighty-eight.

I’m so stunned by her passing that I can hardly write.  She was, in my opinion, the finest living American writer, surpassing anyone else you could name.  The three novels I’ve mentioned constitute, in my view, her best work.  Sula is a short novel, but it is a masterpiece, and it deserves to be studied and absorbed by every writer who wishes to see an example of a story in which every word does useful work.  It is brilliant and, for me, life-changing.  If you have it, pull it off your shelf and re-read it tonight.


That Leap, and the Leap to Come

The check I put in the mail this morning to pay my monthly credit card bill bears the tiniest trace of a mistake I made.  On the date line of the check, I almost did not write, “July 20, 2019”; I almost wrote, “July 20, 1969.”

The same impulse, to write or speak that earlier date, has come over me several times on this day in the fifty years since that remarkable summer night, a measure of how deeply July 20, 1969, was embedded in the consciousness of everyone who watched the moon landing, wherever they were.  There are some dates–December 7, 1941, November 22, 1963, April 4, 1968, September 11, 2001–that time will not allow us to forget.

To be honest, most of my images of that night (it was a Sunday) are fading.  Even watching a YouTube video of the television coverage doesn’t bring as many of them back as I would like.  I remember hoping Neil Armstrong didn’t slip as he descended the ladder of the Apollo 11 lander; and I remember thinking most strongly, as my sisters and my parents and I watched Armstrong and Aldrin bounce on the surface of that other world, “Where do we go from here?”

That was the question I kept asking myself that night in various forms.  What will this night lead to? What kind of future will there be?  The television series Star Trek had been off the air, cancelled, for about four months when Armstrong and Aldrin touched down, with Michael Collins circling above, but Gene Roddenberry’s vision of the future seemed a whole lot closer that night than it had been in about three years.  Roddenberry’s fictional future was predicated on the development of some kind of renewable, universal power source on Earth, one that compelled humanity to stop delving into the Earth itself for the limited supplies of oil and petrochemicals, and freed us to provide basic necessities for everyone on the planet, while also allowing us to explore the region of space around us.

We have not yet found or developed that limitless, universal source of energy, unless you count the sun itself, as many do.  Even there, however, difficulties remain.  Solar power works well only in those regions of the world where sunlight is most abundant.  Otherwise, it’s about as reliable as one’s satellite dish–good on most days, annoyingly bad on others.  The search continues, mostly involving some form of atomic power, but no form of atomic power yet developed is free from the dangers of radiation.  We may find, centuries from now, that our greatest accomplishment as a species on this planet was the building of a Dyson sphere and the harnessing of the energy of a nearby star.  That task would be monumental; it would take many, many lifetimes to achieve; but it would preserve us as a species, and allow us to go where we will, in this solar system, in this galaxy, and perhaps beyond.

The giant leap and the baby steps Armstrong took on the moon that night seemed to promise the beginnings of such a future for all of us.  Yes, he planted an American flag in the lunar soil, marking the end of the race to the moon, a contest the Americans won; but even in 1969, even in Mission Control in Houston, where I lived, we knew that that night was more significant than any competition.  E.B. White, commenting on the moon landing in The New Yorker, knew it, too.  “Like every great river and every great sea,” he wrote, “the moon belongs to none and belongs to all.”  The landing there represented the finest efforts of men and women from the earliest days of the twentieth century: mapping the territory of the moon; calculating the escape velocity, flight path, and orbit needed to survive; risking their lives to see if humanity could live beyond Earth’s atmosphere; and developing the technology that allowed astronauts to re-enter that atmosphere safely and come home.

Measured by the accomplishments of Apollo 11, the years since have seemed just a touch disappointing.  Humans haven’t approached the moon since Apollo 17, in 1972, and there are some good reasons why.  The Americans won the race to the moon, and there was no reason to push the contest further.  The disastrous explosion aboard Apollo 13 scared everyone involved with the space program and reminded them just how vulnerable human beings are when they go beyond the kindly influence of this planet.  Yet, the generations alive fifty years ago all knew that giant leaps are always followed by baby steps, as one recovers one’s balance.  Apollo 13, the Challenger explosion, the Columbia tragedy–all of these events impressed indelibly upon us the enormous risks of human space flight, and a great many of us are more at ease with taking our time in developing deep-space capabilities than we used to be.  I hoped to live long enough to see humans land on Mars.  It’s very doubtful now that I will, despite the cheerleading we are hearing from private rocket companies.  I am, as the saying goes, okay with not seeing it.

There are many problems to tackle and to solve:  developing onboard systems and backups that are foolproof; creating shielding to protect the crew from radiation that would otherwise kill them; providing food and other necessities to last for the six-month journey there and the six months back; developing a habitat that will foster good working relationships and good health among those aboard for the long journey.  Each of these elements is critical.  A failure in any one of them would mean death for the crew; no rescue will be possible, despite what we have read in novels and seen in movies.  In my opinion, the radiation between here and there is the biggest risk.  To survive it, a crew might even have to be genetically altered before the mission begins, which means we’ve got to learn a whole lot more about DNA than we presently know, and overcome the inevitable moral, ethical, and political objections to altering that crew.  The solutions, and the battles over them, are years in the distance.

Still, because of what happened fifty years ago, I am optimistic about those of us on planet Earth, and hopeful of a good future.  Given the risks, why should we press forward with the human exploration of space?  We should, because we can, and because there are benefits to doing so.  Comets and asteroids orbit close enough for us to reach them and mine them for water and minerals that are not abundant on Earth.  You want a limitless supply of energy?  The rocks around us could supply all we will ever need.  What we have to do is get there.

There is an ongoing debate about whether the moon would be practical as a base from which to go to Mars.  It probably would be, but the voyage to Mars is going to be expensive no matter what choices we make.  What we learn in getting there will be worth the cost, even if that cost involves the loss of human life.  I will not forget Gus Grissom, Ed White, and Roger Chaffee on this day we celebrate the moon landing, and those who may give up their lives in getting us to Mars will not be forgotten, either.  The main reason for going there is the survival of human life itself.  It may be possible to live there a long time, over millennia, as our Sun burns itself out and life on Earth dies.

Beyond Mars, who can say where we will go?  The distances to the stars and the planets around them are immense–trillions and trillions of miles.  We’ll be lucky to explore most of this solar system; beyond that, we can only look as far as our telescopes can see.  As far as venturing out is concerned, we are destined, I think, to be mere dreamers on the shore, dipping a single toe into a great ocean whose other side we cannot see.  Many of us would like to leap into that ocean and swim out as far as we can, but we haven’t learned how to walk yet, much less swim.  Armstrong, Aldrin, and Collins showed us the first steps in the process.  The next leap, though, will take some time.  It will involve progress in physics, in genetics, and in artificial intelligence.  It will involve the realization that, although human minds need to go into space, human bodies may not have to.  Whether we send out machines tethered to human brains or some combination of human and artificial life yet to be created, reducing the fundamental perils of living in deep space will be just as important as developing the propulsion systems to get us where we want to go.



Does the End Justify the Effort in Fiction?

Nikolaj Coster-Waldau defended the work of David Benioff and D.B. Weiss on the writing of the final episode of Game of Thrones the other day in a HuffPost interview noticed by Vanity Fair.  “Everybody worked their asses off,” he said, to produce the best finale they could.

I am sure they did work hard; I have no doubt of it.  I do have doubts, however, about whether the ending was as hard-hitting and as epic as the rest of the series was.

If the HBO documentary The Last Watch is to be believed, Kit Harington was reading cold at the first cast read-through of season eight; unlike his castmates, he hadn’t read any of the scripts he had been sent for the season.  When he came to the scene in the last episode wherein Jon Snow kills Dany–a necessary, evil act–he was shocked and pained.  Had the finale ended on that scene, many disappointed fans would have gotten the shattering, Westeros-cracking series ending they were hoping for, an ending that would have measured up to the impact of the death scene of Eddard Stark in season one and the death of his son, Robb Stark, at Walder Frey’s Red Wedding.

But we didn’t get that ending.  Instead, we got Dany’s death, which was beautifully done, followed by fifteen minutes of some of the silliest plot resolution ever seen on a great show:  Brandon Stark, who had made it perfectly clear that he had moved beyond the material desire of a terrestrial kingship, is suddenly appointed king; Tyrion, who had just finished betraying his Queen, retains his position as Hand; the assassin, Arya Stark, decides she wants to be Christopher Columbus for the rest of her life; and Jon Snow, who, we are repeatedly told over the final seven episodes of the series, is actually Aegon Targaryen, heir to the Iron Throne, goes back into the Northern forest with the Wildlings, without ever once asserting his claim.

The only resolution of character that made full sense was that of Sansa Stark, who gets to be what she always wanted to be, a Queen, albeit Queen in the North and not of Westeros.  Her crowning was a satisfying moment, and yet I wish we had been shown it earlier, so that Daenerys Targaryen’s death would be the last image the series offered us.

There is a distinction, of course, between a television series as viewers see it each week and the shooting script of each episode.  Scenes are not shot in the order viewers see them; they are shot in whatever order is easiest to produce.  But what I am suggesting–actually ending the series with Daenerys’s death–would have made for a truly remarkable finale.

That ending would have required a different setup for the resolution of the other characters’ plot lines, but, difficult as the challenge would have been, it would have saved viewers the task of digesting all those resolutions in one big lump at the end of the show.  I would even suggest that, in doing it the way it was done–with Arya sailing off and Jon riding away into the trees–Benioff and Weiss did what George R.R. Martin did not want done, and that is to provide a happy ending for the entire tale.  To be honest, nobody needed a glimpse into the future of these characters.  They would have lived in our imaginations anyway, as The Lone Ranger (whose death is nowhere recorded) lives in mine.

What was missed in the Game of Thrones finale was an opportunity to create an ending that fully justified the viewers’ (and the actors’) efforts to reach it.  Contrary to popular opinion, two of the episodes of season eight, “The Long Night,” about the attack of the White Walkers on Winterfell, and “The Bells,” about Daenerys’s siege of King’s Landing, were among the very best the series ever did.  But a series, just like a novel or a movie, has to drive to a payoff that is worth the effort of watching.  Raiders of the Lost Ark had such a payoff; so did the finale of the television series Angel and, most recently, Deadwood: The Movie, which brought that vision of the Old West and those characters to an end that was simply perfect.

So, I put it to you:  does the ending of your tale justify our efforts to get there?  I’m not asking if all your characters are going to live happily ever after.  I’m asking, have you written a piece of fiction that is worth our time from its beginning to its end?