Resident Theologian
The hatred of theology
In the latest issue of The Point, Jon Baskin writes on behalf of the magazine's editors about what he calls "the hatred of literature." By this term he means the attitude—apparently dominant in English departments a couple decades ago and imbibed by graduate students across the land—that the study of literature exists not to appreciate its multifarious goodnesses and beauties, rooted in love for the object of study, but instead to uncover, unmask, and indict the social, moral, and political problems belonging to its conditions of production. The novel or poem is therefore not an object at all, that is to say, an end, but a means to a larger, political end; criticism thus becomes an instrument of political advocacy. The work of literary art plays no role in calling me or my convictions into question. Rather, the critic measures the work by the correctness of its views or its capacity to activate social change (for the better, that is, more or less in line with my priors), and judges its quality accordingly.
Baskin labels this approach the "hatred" of literature for two reasons. On the one hand, it does not treat literature as an end (however proximate) in itself, but only as a sort of weapon to advance or stymie the cause—whatever that may be. On the other hand, and more important, it quite literally does not arise from what usually stands as the origin story for so many students and teachers of literature: love. Love for the thing itself, for its own sake, just because. A love that does not demand agreement or relevance or revolutionary potential or the "right" politics, but only that ephemeral experience that is the root of all art: an encounter with that which outstrips the mundane, calling to the self from beyond the self. That old word "beauty" is one of the ways we try to capture such encounters.
Reading Baskin as an academically trained theologian, I found myself wondering: Is there a similar phenomenon in academic theology? Does one find—or, in recent decades, could one find—in the academy "the hatred of theology"?
I think the answer is yes, in at least six ways.
First, there is a style of doing theology formally parallel to the "New Historicism." Namely, theology reduced to its sociopolitical function. What does theologoumenon X or Y accomplish with respect to certain desired political ends? There's plenty of that around, past and present.
Second, there is what some, at least in the U.K. a few decades ago, used to call "doctrinal criticism." This comprised the study of traditional doctrines from church history and the subjection of them to "critique" under the conditions and presuppositions of modernity. In other words: What is "modern man" permitted to believe, and what of the Christian dogmatic heritage must be revised, and in what ways, in order to fall in line with the Enlightenment and its heirs?
Third, in the 1960s, '70s, and '80s, there was a kind of obsessive-compulsive anxiety about methodology that, as the old saw goes, never got around to actually talking about God, but only talked about talking about God. This, too, served as an avoidance strategy for academic theology.
Fourth, there is a mode of theology similar to the first example above that is nonetheless subtly different. It isn't so much that theology is merely a means to a foreordained end; rather, its utility as a source of or exercise in knowledge is indexed to its practical relevance. So that, e.g., the doctrine of the Trinity must have direct and obvious consequences for human social life—or else, why are we talking about it in the first place?
Fifth, a similarly practice-oriented theology is less interested in the potentially transformative implications of otherwise esoteric doctrines like the Trinity for human life. Instead, it works the other way: such doctrines are ruled out of court in advance. Only certain doctrines and topics are intrinsically practical; it is those that theology ought to attend to. Often this approach is coordinated to, or a function of, a laser-like focus on the church's life and the conduct of its ordinary members. Of what benefit is this doctrine to the average Christian? is the pressing question that filters the worthy from the unworthy loci.
Sixth and last, much theology simply proceeds with little to no reference to God as such. It is identifiable as a kind of Christian discourse (it speaks, as it were, Christianese), but the subject matter, by any reasonable account, is not the God of Christian confession. Something else is thereby sought to "make" the discourse "theological," whether or not that effort succeeds.
I should say that this is a quick and dirty list, with considerable overlap between the different items and almost certainly other examples left off. And I should clarify the quirkiness of theology compared to literature, since the analogy is imperfect at key points.
First, the subject matter of literature is literary artifacts written by human beings. The subject matter of Christian theology, by contrast, is God: alive, on the one hand, yet inaccessible to empirical investigation, on the other. Knowledge of God is mediated by that which is not God. Furthermore, the "love" of which Baskin writes is disanalogous in the extreme compared to the "love" that grounds and sustains theology. For this latter love is personal love, directed (ideally) in complete and utter devotion to that than which nothing greater can be conceived: the author and perfecter of our very souls. Nothing similar can be said of literature (or when it is, it is sad to see).
But this only highlights the oddity, even the tragedy, of loveless theology. To speak and write about God as if he is not the all-consuming fire of one's life—as if, indeed, his existence and attributes are a matter of polite speculation—is to repudiate theology itself. Why bother? One can at least understand the literary critic who "hates" literature in Baskin's sense. But the theologian who "hates" theology, and by implication theology's Sache, is wholly unintelligible.
Second, theology has a natural home, and it is not the university or even the seminary. It is the church. So there is a community that both houses and is the beneficiary of theology's labor. In that sense theologians and believers are right to expect theology to service the church, which does at some level mean a practical effect. (That is why there is a tradition in the church that understands theology to be a practical science and not a theoretical one; compare, for example, the Franciscans to the Thomists.)
Third, theology concerns not just any God but the God revealed in Jesus Christ, who calls all people, including theologians, to follow him. This entails, in summary form, loving God with one's whole self and loving one's neighbor as oneself. The upshot: theology touches on all of life, for it considers all things in relation to God; therefore theology would be incomplete without speaking to moral, social, and political matters. Even by implication, to speak of God is inevitably to speak of issues of great human import, since that same God, who created humanity, became human in Christ and lived an exemplary life to which all are called to conform. To do theology abstracting from these facts would be a failure of serious magnitude.
The trick, then, is to balance the theoretical and practical tasks of theology without denying one in favor of the other or rendering either synonymous with the other. Above all, though, theology must never be embarrassed to be itself. And to be itself, theology must speak of God, boldly and with unbreakable faith. So to speak of God, however, means one must love God, which is the beginning and end of theology. The theologian, it turns out, is one who loves God and thus, in a manner of speaking, loves theology too.
"Unique": absolute or relative?
Apropos of nothing, it's always bugged me that, grammatically speaking, the modifier "unique" is not supposed to be modified adverbially (as in "relatively" or "somewhat" or "nearly" unique). Instead, either "unique" is absolute or, by definition, it is simply not unique. I recall reading something by David Foster Wallace about this years ago.
Isn't it the case, though, that nothing is absolutely unique? Rather, anything is unique relative to some qualifier, property, activity, or question. Otherwise, it would follow that everything is unique—because nothing is itself but itself—or nothing is, with the exception of God, who alone (existing a se and in se and thus non est in genere) is actually unique in an absolute sense.
I understand the desire to mitigate popular usage of "unique" as a less powerful adjective than it ought to be; used colloquially, and always modified by synonyms of "partially," it comes to mean "pretty different," or "a stand-out among other, similar things."
But the notion of its being absolute, semantically or conceptually, makes no sense to me. I'm no linguist, however, and in any case DFW is not to be gainsaid. Happy to be corrected on this point by the more grammatically enlightened.
An amendment to the amendment
If you couldn't tell, I've spent a good part of 2019 trying to figure out what to do with Twitter. I limited my time on it, I nixed tweeting, I cut out all but Saturdays, I basically exited for two months. Then a few weeks ago, after seeing friends at AAR in San Diego whom I had "met" via Twitter, I decided to amend my tech-wise policy and dip my toe back into the service. And once the semester ended, I allowed myself to get back on a bit more while home for the Christmas break.
Following all that experimentation, I think I'm back to where I was last May. That is, at the macro level, the world would unquestionably be better off without Twitter in it, because Twitter as a system or structure is broken and unfixable. But at the micro level, the truth is that my experience on that otherwise diabolical website is almost uniformly positive. Aside from the "itch" that results from any social media participation—an itch that is not conducive to the life of the mind or of the soul—my time on Twitter is basically beneficial. I meet new friends, interact with old ones, and generally have fun talking theology, pop culture, and other such things. I avoid toxic profiles and bankrupt topics, and am not prone to tweet things that could get me into trouble.
So I think I'm going to return in full, with the usual prior disciplines intact (no app on the phone, for example) and one remaining ascetic caveat. I'm not going to sign on to Twitter, either to tweet or to read others, during work hours on weekdays. The best thing about my self-imposed exile was the way in which it freed up my mental energy and attention while reading or writing in my office, as opposed to dwelling on some ongoing thread or idea for a tweet.
So that's the amendment to the amendment. I'll check back in a month or two and share how things are going.
Oh, and happy new year!
On Episode IX
Well, it happened. Abrams didn't even rise to his own best level. He capped off a 42-year cinematic saga with a stinker so bad that it sullies not only the new trilogy he helped to launch but his own reputation as a filmmaker.
I thought I'd avoid writing about the film, but instead of spending time on Twitter or Slack, let me just share my thoughts here.
What makes The Rise of Skywalker so bad? Well, there are multiple levels of badness involved.
[SPOILERS HEREON Y'ALL.]
First is the filmmaking itself. This was the most shocking thing about IX. I knew Abrams would go for nostalgia and servicing fandom. I figured he'd undermine VIII. I didn't know he would make such a straightforwardly bad movie, one alternately boring (the guy next to me on opening night fell asleep) and poorly told (my wife can't be the only one who found it difficult to follow).
The opening 30 minutes in particular move so fast, across so many worlds and plot points and characters old and new, with such flat-footed awkwardness, that it feels as if Abrams stepped in as director to replace another director who had already filmed all this. But that's not what happened. It's all him. Working for Disney, that trillion-dollar behemoth with all the money and time in the world to let Abrams make the best movie he possibly could. And the result is shoddy beyond belief.
There are moments of elegance or grandeur. Rey's duel with Ren in his TIE fighter. The unexpected, graceful healing of the serpent beneath the sands. (Nice nod to The Mandalorian, that.) The ocean duel. The Sith coliseum of dead souls, the undead Emperor upheld by a black claw. Finn and Poe shooting up a Star Destroyer while running towards a low-angled tracking camera. The lightsaber "swap" from Rey to Ren (the one good and successful extension of a Rian Johnson idea). The shocking accidental death of Chewbacca at Rey's hands—
Whoops! I forgot, nobody dies in Star Wars.
And there's the rub. The problem is the script. It feels like it was written by committee in a succession of a dozen drafts. The result, at least until the final act, comes across like a series of videogame player quests. The doohickeys sought don't matter in themselves. They're just the next required token to level up and receive the next assignment.
The other problem with the script is its bottomless well of bad ideas. Hux is a spy! (Dead Hux. Dad Hux!) Leia retconned into a Jedi! Leia as Rey's true Jedi Master! A fleet of thousands of Star Destroyers, each as powerful as a Death Star—yet unable to move without being directed by a single frail antenna! (Dracula rules, as ever.)
Worst of all, Palpatine as Rey's grandfather and inexplicably alive after being thrown to his death by Vader in VI and the secret Blofeld-like master-puppeteer behind Snoke and the First Order. (Abrams: The first order? THE FINAL ORDER!) The opening words of the crawl read, "The dead speak!" A nice B-movie callback to the saga's origins. But also a foolish, self-parodying decision by a writer-director who quite literally has done nothing but make sequels and remakes for his entire career. The man does not know anything except what he watched as an adolescent. Sometimes he remixes it well. Here, he does not.
What Abrams needs is whatever happened to Lindelof while making The Leftovers. A kind of creative baptism, liberating him from his felt need to please fans by giving them what they think they want and instead steering into, rather than away from, the original and often deeply weird creative ideas that result.
What Abrams needs is an overseer—what is often known as an editor or producer—who watches his back and tells him when he's gone astray. It turns out Kathleen Kennedy is not that person. After half a decade care-taking the new, post-Lucas Disney period of Star Wars, she has given us a track record by which to judge her. It's not pretty.
For example: Instead of the now-canonical opening lines of the crawl, the obvious opening words should have been, "General Leia Organa has died." Next: "The galaxy gathers to mourn its departed royal leader. Following the Battle of Crait and the mysterious passing of the beloved princess, people flock to join the Resistance in a final push to defeat the evil First Order."
Is that so hard? Abrams thought to honor Carrie Fisher by using preexisting footage of her as Leia paired with digital work to map her face onto other actors in key moments. It doesn't work. It's clunky and forced and, wittingly or not, ends up putting more rather than less weight on her presence in the film.
What else? Poe continues to be little more than Captain Earnest. Finn almost reveals his heart to Rey—the film teases us with it—and then it's left dangling. (Was a scene of closure left on the cutting room floor? Is it meant to pop back up in the next trilogy, when Old Finn and Old Rey lose their only child to the Supremely Final Order's Supremely Supreme Leader, a clone of a clone of Palpatine's great-grandfather's uncle?) At least in part it's left dangling because Abrams decides to make Rey and Ren's final moment a kiss. I'm sure some other version of this film could have sold that, but Abrams certainly does not.
Now, Adam Driver and Daisy Ridley have long been the best things about this third trilogy. And they do excellent work again in their roles. Were it not for them, the movie would be borderline unwatchable. But after Rian Johnson did so much in The Last Jedi to move their characters along, Abrams freezes them in place until, with a snap of the fingers, their character arcs lurch into their foreordained change/resolution—and off to the climax we go.
Note well: Abrams isn't only copying Lucas (again and again), he's copying himself. The big moment for Ren to turn from evil to good is prompted by nothing so much as an imaginary memory of the last conversation he shared with his father, Han Solo. So Abrams in effect recreates his own scene from VII on the bridge when Ren kills Han—only this time, for no apparent reason, he throws away his lightsaber, having argued himself into it.
Whereas Rey, gone angry and Sith-y because bloodline, turns back to good after a handy chat with Luke in Force Ghost form. And then we're off to the races.
Even writing it out is painful. It's just so, so bad. Not to mention Lando, featuring Billy Dee Williams playing Lando as Billy Dee Williams playing Lando. Or blink-and-you-miss-them new characters like Zorii or Jannah—full of potential, but too little too late. Oh, and Rose, who for no reason at all is relegated to Leia's babysitter. She could and should have been the Lando of this trilogy: introduced mid-chapter, then a fellow front-liner in the finale. Why not include her with the trio of Rey, Finn, and Poe on their (three dozen) mission quests? Just one more way to stick it to Rian Johnson, apparently.
And that's going to be the legacy of this film. Abrams' fundamental failure to understand what The Last Jedi accomplished, and what it made possible. Either Kathleen Kennedy agrees with Abrams in his estimation of that film, or she capitulated to fandom's purported desires (and thus lacks the insight or leadership to steer the ship). Either way, it's quite a thing to behold a billion-dollar franchise lack any semblance of vision across three films made in only a few years' time.
In sum, Episode IX had the opportunity to be something special. Even granting the hand Johnson was dealt (not least the foolish decision to destroy the New Republic, thus reducing the trilogy to a rehash of the original, at least at the formal level), he set up the finale to tell a new story: no Supreme Leader, no Emperor, no Death Star, no consolidated Rebels ready for a Final Mission Once And For All. In Johnson's hands—or Brad Bird's, or Christopher Nolan's, or Tony Gilroy's, or Sam Mendes's, or James Mangold's, or Kathryn Bigelow's, or whomever—it could have been great. It could have been different, new, bold, and unexpected. It could have taken us by surprise, not by resurrecting the ancient dead of the Original Trilogy, but by telling a new story in a new way about these new characters we'd come to care for.
But in Abrams' hands, no one's ever really gone, and there's only ever one story to tell, and re-tell, over and over until the end of time. A pity.
Random Further Comments (Dec 20)
–Is IX worse than the prequels? Not II, which is the worst movie ever made. But it might be worse than I and III. At the very least, it's a discussion. Moments here surpass moments there; but the filmmaking is less shoddy there than it is here. It's all a matter of taste, really. What sort of bad do you prefer your bad movie to be?
–I have long said that J.J. Abrams is the best caster in film. That's his destiny: a producer of blockbuster films who has veto power on all casting decisions. He may also be allowed to touch up dialogue. Otherwise he is not allowed anywhere near a script. He is not allowed to produce or direct films based on preexisting intellectual property. And he is only allowed to direct movies written by someone other than himself.
–After a promising start, Finn's character never got his due across the trilogy. A missed opportunity, in two respects: to explore the psychology of a turncoat stormtrooper, and to consider the moral ambiguity of fighting and killing an opposing military force largely made up of child soldiers (i.e., children kidnapped and brainwashed into service).
–Rey's self-made yellow lightsaber at the end was a nice touch. But as a friend pointed out, it should have been modeled on her long staff: either lengthier in form or double-bladed like Darth Maul's. UPDATE: Turns out I didn't look closely enough. The new lightsaber is made from her long staff, only reduced to a normal size blade handle, rather than fitted into a double-bladed one. Should have spotted it, and that's a solid choice from Abrams et al.
Further Comments Post-Second Viewing (Dec 21)
–I took my oldest to see the movie today, and I have to say, upon second viewing, it was a less frustrating experience. Note well: Not one of the movie's flaws turned out to be something other than a flaw. Everything above stands. But during my first viewing, I thought I would hate to return to the film, even that I might find it unwatchable. It's plenty watchable. And knowing all the terrible script decisions in advance means not audibly groaning at the revelation of each one. That's not nothing, I suppose.
–The second viewing also allowed me to see more of the artistry in the direction, which during the first viewing I found hard to distinguish from the problems in the script (not least since the latter affects the former in numerous ways, especially length of scenes and speed of cutting from one to another). Abrams really is a skilled director of action, emotion, and dialogue. That's what makes the failure of the film painful rather than ho-hum.
–The film was doomed before a single shot was filmed. And it's not micro-elements, it's the macro-frame, the narrative context created from the outset. The three principal mistakes are: Palpatine's return; Rey's lineage; and Leia's central role. All the other flaws (with the possible exception of demoting Rose to a glorified cameo) pale in comparison to those, and could have been incorporated into a quality film. No movie's perfect, after all. This one didn't have to be. It's the deep infrastructure of the story, not particular scenes, that ensures its downfall.
–I was more impressed by John Boyega's performance this time around (and Oscar Isaac's to an extent). Watch him when he's not talking, or just before and after he has dialogue. The reaction shots are priceless. He's never not in character. It only makes the stalling-out of his character arc that much more galling.
–The number of loose threads combined with the amount of yada-yada-ing of plot was glaring on the second viewing. Abrams literally has a character say, "Dark arts. Cloning. Sith jabberwocky..." or some such thing as a comprehensive response to Palpatine being alive. Did they not finish the script? Is there a scene lying on the cutting room floor that resolves Finn's unspoken declaration of love for Rey? Will we ever know?
–Abrams really has issues with killing characters off (though none, apparently, with blowing up entire planets: seven across three films, with two more near misses); you can tell by his affinity for fake-killing them and letting them live. M:I:3 opens with the apparent murder of Ethan Hunt's wife—nope. Star Trek Into Darkness ends with Kirk's fake death, brought back to life a few minutes later via Khan 2.0's blood (and also tribbles? It's all so hard to recall the finer points of his scripts). Poe fake-dies in VII, with no real explanation. And in this one both Chewie and Rey "die," only to be not-dead (having never died, in the one case, or been brought back to life, in the other) mere seconds later.
–Must Rey and Ren have kissed at the end? Really? Still not buying it, y'all. (Though that scene at the climax with his face filling the frame, staring "at" Rey, communicating silently, ready to receive the lightsaber: that's a killer. Again, there are solid moments, scenes, and ideas; but in the end it's not just parts that don't add up: the film, finally, is less than the sum of its parts.)
–Having said all that, in the same way that I can enjoy Episodes I or III with my kids, who love Darth Maul and pod-racing and clones and Mace Windu and CGI Yoda and the rest, I was able fully to enjoy IX with my son (his first Star Wars movie in the theater!), who gasped at Rey's family name and laughed at the droid humor and was delighted Ren turned good and found the Emperor truly frightening. (He even leaned over the moment after Rey and Ren kiss, and said, "That's the first time he's ever smiled!") So I'm glad there's that. I wrote above that IX might be worse than I or III. I realize now that's mostly not true. The Phantom Menace is more of an original story, and has genuinely interesting ideas—however poorly executed—and in that respect Lucas has the better of Abrams: the former creates, the latter remixes. But in most other ways IX is superior, also to III. It helps to have sterling actors in gripping roles directed skillfully in gorgeous locales amid haunting atmosphere. If only there were a story to fill it out.
Further Reflections One Week Out (Dec 26)
–Most of my reading, conversations, and encounters in the past week have been with folks who were similarly disappointed with IX, though not always for similar reasons. Even those who have enjoyed it have admitted the shortcomings, vices, or script problems. Fewer than half a dozen have been unqualified lovers of the film (though I've only found such persons online, not in person—and at least a few appear to have been contrarians who went in knowing the buzz was bad).
But three defenses of the film keep popping up, and they're worth addressing. First: that Palpatine's return was fitting. Second: that superior alternative plots are not forthcoming. Third: that IX is a proper conclusion to the Skywalker–Palpatine trilogy of trilogies as a whole.
Let me take these up together, since they're related to one another, before moving on to some other reflections.
To the second point first: I do think there are superior alternative plots, though the burden of proof need not fall on lowly critics like myself to supply them. Stories are contingent; a thousand things can happen. Fittingness is an art: one says of the fitting conclusion, "Yes, I see now, it could have ended no other way." Abrams wants us to believe that. He's wrong.
But, in any case, sometime in the next week my final(!) update to this ever-expanding post will be an alternative opening crawl and a basic plot line for the film, Palpatine- and lineage-free. (Spoiler: It opens with a royal funeral, and it involves sabotage efforts from without and from within the First Order, including spies.)
But back to the first point: What the defenders miss about the inconveniens of Palpatine's return is not his return full-stop. It is his return out of nowhere, with not a hint of foreshadowing in VII or VIII. It is unfitting because it is a villain deus ex machina—a diabolus ex machina?—wherein the Big Bad, for lack of a better option, is parachuted in to give the story false gravitas it has not otherwise earned and was not naturally heading toward. The truth is that Rian Johnson killed J. J. Abrams' New Big Bad, himself little more than Palpatine Redux, so Abrams did the next best thing: bring back the original. Again, we know this extra-textually, because Abrams made this particular decision upon returning to Star Wars to take over from Colin Trevorrow, and neither Trevorrow nor Johnson had any inkling of Palpatine's impending resurrection.
This brings us to the third point, that IX works as a sequel capping off the three cycles of Star Wars films. I think some clarity can be shed here by reframing the question one asks. To wit: The following are distinct questions that admit of different answers:
1. Is IX a fitting sequel to VIII?
2. Is IX a fitting conclusion to VII?
3. Is IX a fitting conclusion to IV–VI?
4. Is IX a fitting conclusion to I–III?
Defenders of IX are, so far as I can see, interpreting IX in the light of either The Force Awakens or the prequel trilogy (and thus question #2 or #4). Understood in that way, I can see why they might answer in the affirmative. If the nine-film saga is finally about both the Skywalker and the Palpatine bloodlines, or "houses," then in a way IX works, especially the final act. Moreover, IX works quite well as a kind of direct sequel to VII: the style, the humor, the storytelling, the recycling of tropes, characters, even lines of dialogue: if VII is your jam, you're bound to love IX.
Where IX does not work—at all—is as a sequel to The Last Jedi or as a conclusion to the original trilogy. Regarding the latter, it disentangles and disintegrates the beautiful commingling of the personal and the political, encapsulated perfectly in the final act of Return of the Jedi when the throne room scene functions as both an intra-family drama and a microcosmic battle upon which the fate of the entire galaxy hangs. And this is itself the culmination of a three-film discovery of this very entanglement: Luke is an orphan whose father was murdered long ago, only to learn the would-be murderer is his father, only to realize the princess he sought to help is his own sister. And to save his father he must be saved by him, thus destroying the Emperor, thus destroying the Empire: this is not merely to restore balance to the Force and to bring peace to the galaxy but to restore order and bring peace to his own family.
But IX undoes this: the New Republic is annihilated at the drop of a hat by a Death Star 3.0; the Empire is resurrected as the First Order; the Emperor is resurrected as ... himself. (All this has happened before, and all this will happen again.) But IX gives us no reason to think, with Rey's supposed victory, that days or weeks or months later it's not all going to play out again, exactly as before. When Luke throws away his lightsaber and Vader throws the Emperor down the shaft, the elation and catharsis we originally felt and are meant still to feel at the simultaneous personal and political victory just accomplished has been reduced to the personal alone.
And as for VIII: The Rise of Skywalker is not just a failure of a sequel to The Last Jedi—at every level—it is a sort of anti-sequel. It actively, intentionally, and painfully seeks at every turn to undo the plot, themes, and character arcs of the previous film. And even if your judgment of VIII is that it is something less than superlative (a misguided judgment, but one I'll permit for now), surely you must admit that a direct sequel that so clearly hates its predecessor, laboring with all its might to unburden itself of its inheritance, is a recipe for narrative confusion, incoherence, and sloppiness. Fittingness is about beauty, harmony, and order. The only honest feeling one can have going from TLJ to TROS is whiplash. Whoever is to blame for that (Kathleen Kennedy above all, also Abrams, but not at all Johnson, who was given free rein and no heads-up about what was to come), the result is a fracture within this final trilogy that weighs heavily on the ability of IX to perform its duty as triple finale: sequel/conclusion at once to Rey, to Luke and Leia, and to Anakin. It buckles and breaks under the weight—one chosen by the writer-director, not forced upon him.
–Another word about Abrams: the man loves his narrative cheats and work-arounds. His worst vice is impatience. Kirk must be promoted immediately. Rey must already be a brilliant pilot and swordswoman. Finn can be a general just like that. Need a master code-breaker? Poe knows a gal. Need to get to Exogol to stop the super-duper-master-fleet-each-as-powerful-as-a-death-star within 16 hours, plus round up the entire galaxy to come help? Sure, why not.
Worse than this abbreviated storytelling tic, Abrams as often as not refuses to put in the time or the effort to earn our affection or trust for his characters or plot beats. Rather, he works with borrowed emotional capital. He knows how we already feel about Han or Leia or Chewie or Luke, and he uses that to his advantage. He brings in a stray "memory" to turn the lead villain into a hero. He has the good guys and bad guys keep flying the same ships, in the same planet locales, wearing the same suits, with the same droids along for the ride: and this time, no one can die, not for good. Does he need to create a new character to rally the troops for a final battle? Nope. Billy Dee Williams will do the job nicely. Should Benedict Cumberbatch play a heretofore unknown adversary for Kirk, Spock, & co.? Nah: Khan 2.0 is what the people want. And you've got to give the people what they want.
–I continue to lament the unintelligible and finally uninteresting character arc for poor FN-2187. If VII, following his desertion, was about Finn's struggle with a kind of cowardice—running away from evil and danger—and VIII about wrestling with recklessness—now running straight into danger and certain death—then IX should have concerned the mean of those two extremes: what it means for this child-soldier turncoat to be courageous, to embody the virtue of bravery.
Instead, we get a replay of both VII and VIII, with Finn funny but Rey-obsessed while also attempting, again, a suicide mission (only this time with no Rose to stop him from following through). And though Abrams has said in interviews that the film was meant to reveal that Finn, too, is Force-sensitive, and that this is what he wanted to tell Rey, this is far from evident in the film, and the half-hearted attempt to address his affection for Rey is clumsy at best (not to mention, for the thousandth time, acting as if Rose, i.e. the stand-in for VIII and Johnson, doesn't exist). What a missed opportunity.
–If Finn stands for the virtue of courage, it seems to me that Poe stands for prudence: a virtue he learned in the previous film, but which he must re-learn for Abrams here, since what happens in TLJ stays in TLJ. But what if Poe were less earnest in this film, less impulsive and soul-searching and navel-gazing, and more of a straightforward, prudent, wise leader?
As for Rey: I want to say her virtue is justice, paired with religio. Rendering what is due to those to whom it is due in proper proportion, while honoring, with an appropriate piety, those to whom one is indebted—including, in her case, Rey's literal parents, but also and especially her adoptive family: the Resistance, on the one hand, and the Jedi, on the other. (Though I don't love "Rey Skywalker" at the end, I will allow it on this reading.)
–About Kylo Ren's turn. It was not necessary, much less inevitable, narratively, that he break good like his grandfather. Johnson posed this question in VIII: what if Ben Solo were offered the opportunity, considered it, but turned away (whether in weakness or in malice)? That is a story that could have been told well.
A story that involves his repentance from evil also could have been told well. That didn't happen here, not least because the turn is a cheat: saved from death, his own memory converts him. What's more, the film has him simply become Good at that point. But the brilliance of Vader's turn in VI isn't that he goes from Evil to Good at the drop of a hat. Rather, he lets his paternal love for his son overthrow his willingness to cooperate with evil—and only thus does he turn on Palpatine.
Adam Driver's performance overcomes Abrams' deficiencies as a writer, but the intriguing possibility here was for a transformation that is only partial: so that Ren's conflicted badness—"Millennial Darth" playing dress-up but unable fully to embrace evil's true depths—becomes a conflicted goodness: love and devotion to Rey, perhaps, but not necessarily to her cause or her friends or the ends and virtues she stands and fights for. That is more dramatically interesting, truer to the character, and would have made for a fascinating open-ended "ending": Ben Solo, reconciled to Rey but not to himself or to what she loves. What does the future hold for such a pair? An unstable settlement, for sure, and one far less happy-clappy than the kumbaya with which IX wants to repeat VI.
–Speaking of Ren and Rey, I'm increasingly dissatisfied by that kiss. I like the continued theme of attachment overcoming detachment, even as attachment presents the greater temptation for disordered loves and thus for a fall into evil. Thus is Anakin lured to the Dark Side by Palpatine via his dysfunctional love for Padme; but thus also is Anakin redeemed by Luke's well-ordered detachment (willingness to die, lightsaber thrown away) rooted in proper attachment (love for his father and sister, unwillingness to kill in anger). This very balance of detachment within attachment is undone in Luke's fear (in the flashback of The Last Jedi) of what the young Ben Solo is capable of: and this unbalance tips Ben over to the Dark Side. What brings him back to the Light is Rey, who she is and what she does, and his, Ben's, overpowering love for her.
But is that love eros? It certainly isn't familial. It seems to me that, even at the level of the text of the films (VII and IX on their own, but also VIII), it isn't eros, either, but rather philia. The love of Rey for Ren and of Ren for Rey is one of friendship. That is itself one of the under-discussed themes of Star Wars, whereby the personal is wedded to the political not through family alone but through the power of friends to band together in the face of unimaginable power and terrible odds. (Perhaps the great failure of the prequel trilogy is its inability to depict friendship well, chiefly between Obi-Wan and Anakin. Not for lack of trying...)
But perhaps I'm not giving Abrams his due. On his terms, the eros of the new trilogy echoes and recapitulates the tragic eros of the prequel trilogy, with filial love anchoring the original trilogy and friendship uniting all three. I prefer my alternative, however, since it isn't so much a failure of eros as one of philia that prompts Anakin's fall in Revenge of the Sith: his inability to be a friend to Obi-Wan, in truth and in justice. In which case, the redemption of Anakin's grandson by (ugh) Palpatine's granddaughter comes about by that very love whose lack doomed Anakin in the first place. Going forward, in fact, I think I will choose to read IX in this way, regardless of Abrams' intentions, since Ben and Rey's kiss can be interpreted as a kind of exuberant exclamation point in the Hamlet-esque final moments of their ostensible shared deathbed.
A silver lining, that, in an otherwise diverting but finally disappointing denouement to the Skywalker saga. If nothing else, Abrams always delivers by forcing his audience to ask a single, lingering question of all his films: what might have been.
I thought I'd avoid writing about the film, but instead of spending time on Twitter or Slack, let me just share my thoughts here.
What makes The Rise of Skywalker so bad? Well, there are multiple levels of badness involved.
[SPOILERS HEREON Y'ALL.]
First is the filmmaking itself. This was the most shocking thing about IX. I knew Abrams would go for nostalgia and servicing fandom. I figured he'd undermine VIII. I didn't know he would make such a straightforwardly bad movie, one alternately boring (the guy next to me on opening night fell asleep) and poorly told (my wife can't be the only one who found it difficult to follow).
The opening 30 minutes in particular move so fast, across so many worlds and plot points and characters old and new, with such flat-footed awkwardness, that it feels as if Abrams stepped in as director to replace another director who had already filmed all this. But that's not what happened. It's all him. Working for Disney, that billion-dollar behemoth with all the money and time in the world to let Abrams make the best movie he possibly could. And the result is shoddy beyond belief.
There are moments of elegance or grandeur. Rey's duel with Ren in his TIE fighter. The unexpected, graceful healing of the serpent beneath the sands. (Nice nod to The Mandalorian, that.) The ocean duel. The Sith coliseum of dead souls, the undead Emperor upheld by a black claw. Finn and Poe shooting up a Star Destroyer while running towards a low-angled tracking camera. The lightsaber "swap" from Rey to Ren (the one good and successful extension of a Rian Johnson idea). The shocking accidental death of Chewbacca at Rey's hands—
Whoops! I forgot, nobody dies in Star Wars.
And there's the rub. The problem is the script. It feels like it was written by committee in a succession of a dozen drafts. The result, at least until the final act, comes across like a series of videogame fetch quests. The doohickeys sought don't matter in themselves. They're just the next required token to level up and receive the next assignment.
The other problem with the script is its bottomless well of bad ideas. Hux is a spy! (Dead Hux. Dad Hux!) Leia retconned into a Jedi! Leia as Rey's true Jedi Master! A fleet of thousands of Star Destroyers, each as powerful as a Death Star—yet unable to move without being directed by a single frail antenna! (Dracula rules, as ever.)
Worst of all: Palpatine as Rey's grandfather, inexplicably alive after being thrown to his death by Vader in VI, and the secret, Blofeld-like master puppeteer behind Snoke and the First Order. (Abrams: The first order? THE FINAL ORDER!) The opening words of the crawl read, "The dead speak!" A nice B-movie callback to the saga's origins. But also a foolish, self-parodying decision by a writer-director who quite literally has done nothing but make sequels and remakes for his entire career. The man does not know anything except what he watched as an adolescent. Sometimes he remixes it well. Here, he does not.
What Abrams needs is whatever happened to Lindelof while making The Leftovers. A kind of creative baptism, liberating him from his felt need to please fans by giving them what they think they want and instead steering into, rather than away from, the original and often deeply weird creative ideas that result.
What Abrams needs is an overseer—what is often known as an editor or producer—who watches his back and tells him when he's gone astray. It turns out Kathleen Kennedy is not that person. After half a decade of caretaking the new, post-Lucas Disney period of Star Wars, she has given us a track record by which to judge her. It's not pretty.
For example: Instead of the now-canonical opening lines of the crawl, the obvious opening words should have been, "General Leia Organa has died." Next: "The galaxy gathers to mourn its departed royal leader. Following the Battle of Crait and the mysterious passing of the beloved princess, people flock to join the Resistance in a final push to defeat the evil First Order."
Is that so hard? Abrams thought to honor Carrie Fisher by using preexisting footage of her as Leia paired with digital work to map her face onto other actors in key moments. It doesn't work. It's clunky and forced and, wittingly or not, ends up putting more rather than less weight on her presence in the film.
What else? Poe continues to be little more than Captain Earnest. Finn almost reveals his heart to Rey—the film teases us with it—and then it's left dangling. (Was a scene of closure left on the cutting room floor? Is it meant to pop back up in the next trilogy, when Old Finn and Old Rey lose their only child to the Supremely Final Order's Supremely Supreme Leader, a clone of a clone of Palpatine's great-grandfather's uncle?) At least in part it's left dangling because Abrams decides to make Rey and Ren's final moment a kiss. I'm sure some other version of this film could have sold that, but Abrams certainly does not.
Now, Adam Driver and Daisy Ridley have long been the best things about this third trilogy. And they do excellent work again in their roles. Were it not for them, the movie would be borderline unwatchable. But after Rian Johnson did so much in The Last Jedi to move their characters along, Abrams freezes them in place until, with a snap of the fingers, their character arcs lurch into their foreordained change/resolution—and off to the climax we go.
Note well: Abrams isn't only copying Lucas (again and again), he's copying himself. The big moment for Ren to turn from evil to good is prompted by nothing so much as an imaginary memory of the last conversation he shared with his father, Han Solo. So Abrams in effect recreates his own scene from VII on the bridge when Ren kills Han—only this time, for no apparent reason, he throws away his lightsaber, having argued himself into it.
Whereas Rey, gone angry and Sith-y because bloodline, turns back to good after a handy chat with Luke in Force Ghost form. And then we're off to the races.
Even writing it out is painful. It's just so, so bad. Not to mention Lando, featuring Billy Dee Williams playing Lando as Billy Dee Williams playing Lando. Or blink-and-you-miss-them new characters like Zorii or Jannah—full of potential, but too little too late. Oh, and Rose, who for no reason at all is relegated to Leia's babysitter. She could and should have been the Lando of this trilogy: introduced mid-chapter, then a fellow front-liner in the finale. Why not include her with the trio of Rey, Finn, and Poe on their (three dozen) mission quests? Just one more way to stick it to Rian Johnson, apparently.
And that's going to be the legacy of this film. Abrams' fundamental failure to understand what The Last Jedi accomplished, and what it made possible. Either Kathleen Kennedy agrees with Abrams in his estimation of that film, or she capitulated to fandom's purported desires (and thus lacks the insight or leadership to steer the ship). Either way, it's quite a thing to behold a billion-dollar franchise lack any semblance of vision across three films made in only a few years' time.
In sum, Episode IX had the opportunity to be something special. Even granting the hand Johnson was dealt (not least the foolish decision to destroy the New Republic, thus reducing the trilogy to a rehash of the original, at least at the formal level), he set up the finale to tell a new story: no Supreme Leader, no Emperor, no Death Star, no consolidated Rebels ready for a Final Mission Once And For All. In Johnson's hands—or Brad Bird's, or Christopher Nolan's, or Tony Gilroy's, or Sam Mendes's, or James Mangold's, or Kathryn Bigelow's, or whomever—it could have been great. It could have been different, new, bold, and unexpected. It could have taken us by surprise, not by resurrecting the ancient dead of the Original Trilogy, but by telling a new story in a new way about these new characters we'd come to care for.
But in Abrams' hands, no one's ever really gone, and there's only ever one story to tell, and re-tell, over and over until the end of time. A pity.
Random Further Comments (Dec 20)
–Is IX worse than the prequels? Not II, which is the worst movie ever made. But it might be worse than I and III. At the very least, it's a discussion. Moments here surpass moments there; but the filmmaking is less shoddy there than it is here. It's all a matter of taste, really. What sort of bad do you prefer your bad movie to be?
–I have long said that J.J. Abrams is the best caster in film. That's his destiny: a producer of blockbuster films who has veto power on all casting decisions. He may also be allowed to touch up dialogue. Otherwise he is not allowed anywhere near a script. He is not allowed to produce or direct films based on preexisting intellectual property. And he is only allowed to direct movies written by someone other than himself.
–After a promising start, Finn's character never got his due across the trilogy. A missed opportunity, in two respects: to explore the psychology of a turncoat stormtrooper, and to consider the moral ambiguity of fighting and killing an opposing military force largely made up of child soldiers (i.e., children kidnapped and brainwashed into service).
–Rey's self-made yellow lightsaber at the end was a nice touch. But as a friend pointed out, it should have been modeled on her long staff: either lengthier in form or double-bladed like Darth Maul's. UPDATE: Turns out I didn't look closely enough. The new lightsaber is made from her long staff, only reduced to a normal-sized hilt rather than fitted into a double-bladed one. Should have spotted it, and that's a solid choice from Abrams et al.
Further Comments Post-Second Viewing (Dec 21)
–I took my oldest to see the movie today, and I have to say, upon second viewing, it was a less frustrating experience. Note well: Not one of the movie's flaws turned out to be something other than a flaw. Everything above stands. But during my first viewing, I thought I would hate to return to the film, even that I might find it unwatchable. It's plenty watchable. And knowing all the terrible script decisions in advance means not audibly groaning at the revelation of each one. That's not nothing, I suppose.
–The second viewing also allowed me to see more of the artistry in the direction, which during the first viewing I found hard to distinguish from the problems in the script (not least since the latter affects the former in numerous ways, especially length of scenes and speed of cutting from one to another). Abrams really is a skilled director of action, emotion, and dialogue. That's what makes the failure of the film painful rather than ho-hum.
–The film was doomed before a single shot was filmed. And it's not micro-elements, it's the macro-frame, the narrative context created from the outset. The three principal mistakes are: Palpatine's return; Rey's lineage; and Leia's central role. All the other flaws (with the possible exception of demoting Rose to a glorified cameo) pale in comparison to those, and could have been incorporated into a quality film. No movie's perfect, after all. This one didn't have to be. It's the deep infrastructure of the story, not particular scenes, that ensure its downfall.
–I was more impressed by John Boyega's performance this time around (and Oscar Isaac's to an extent). Watch him when he's not talking, or just before and after he has dialogue. The reaction shots are priceless. He's never not in character. It only makes the stalling-out of his character arc that much more galling.
–The number of loose threads combined with the amount of yada-yada-ing of plot was glaring on the second viewing. Abrams literally has a character say, "Dark arts. Cloning. Sith jabberwocky..." or some such thing as a comprehensive response to Palpatine being alive. Did they not finish the script? Is there a scene lying on the cutting room floor that resolves Finn's unspoken declaration of love for Rey? Will we ever know?
–Abrams really has issues with killing characters off (also for blowing up entire planets: seven across three films, with two more nearly goners); you can tell by his affinity for fake-killing them and letting them live. M:I:3 opens with the apparent murder of Ethan Hunt's wife—nope. Star Trek Into Darkness ends with Kirk's fake death, brought back to life a few minutes later via Khan 2.0's blood (and also tribbles? It's all so hard to recall the finer points of his scripts). Poe fake-dies in VII, with no real explanation. And in this one both Chewie and Rey "die," only to be not-dead (having never died, in the one case, or been brought back to life, in the other) mere seconds later.
–Must Rey and Ren have kissed at the end? Really? Still not buying it, y'all. (Though that scene at the climax with his face filling the frame, staring "at" Rey, communicating silently, ready to receive the lightsaber: that's a killer. Again, there are solid moments, scenes, and ideas; but in the end it's not just parts that don't add up: the film, finally, is less than the sum of its parts.)
–Having said all that, in the same way that I can enjoy Episodes I or III with my kids, who love Darth Maul and pod-racing and clones and Mace Windu and CGI Yoda and the rest, I was able fully to enjoy IX with my son (his first Star Wars movie in the theater!), who gasped at Rey's family name and laughed at the droid humor and was delighted Ren turned good and found the Emperor truly frightening. (He even leaned over the moment after Rey and Ren kiss, and said, "That's the first time he's ever smiled!") So I'm glad there's that. I wrote above that IX might be worse than I or III. I realize now that's mostly not true. The Phantom Menace is more of an original story, and has genuinely interesting ideas—however poorly executed—and in that respect Lucas has the better of Abrams: the former creates, the latter remixes. But in most other ways IX is superior, also to III. It helps to have sterling actors in gripping roles directed skillfully in gorgeous locales amid haunting atmosphere. If only there were a story to fill it out.
Further Reflections One Week Out (Dec 26)
–Most of my reading, conversations, and encounters in the past week have been with folks who were similarly disappointed with IX, though not always for similar reasons. Even those who have enjoyed it have admitted the shortcomings, vices, or script problems. Less than half a dozen have been unqualified lovers of the film (though I've only found such persons online, not in person—and at least a few appear to have been contrarians who went in knowing the buzz was bad).
But three defenses of the film keep popping up, and they're worth addressing. First: that Palpatine's return was fitting. Second: that superior alternative plots are not forthcoming. Third: that IX is a proper conclusion to the Skywalker–Palpatine trilogy of trilogies as a whole.
Let me take these up together, since they're related to one another, before moving on to some other reflections.
To the second point first: I do think there are superior alternative plots, though the burden of proof need not fall on lowly critics like myself to supply them. Stories are contingent; a thousand things can happen. Fittingness is an art: one says of the fitting conclusion, "Yes, I see now, it could have ended no other way." Abrams wants us to believe that. He's wrong.
But, in any case, sometime in the next week my final(!) update to this ever-expanding post will be an alternative opening crawl and a basic plot line for the film, Palpatine- and lineage-free. (Spoiler: It opens with a royal funeral, and it involves sabotage efforts from without and from within the First Order, including spies.)
But back to the first point: What the defenders miss about the inconveniens of Palpatine's return is not his return full-stop. It is his return out of nowhere, with not a hint or foreshadowing in VII or VIII. It is unfitting because it is a villain deux ex machina—a diabolus ex machina?—wherein the Big Bad, for lack of a better option, is parachuted in to give the story false gravitas it has not otherwise earned and was not naturally heading toward. The truth is that Rian Johnson killed J. J. Abrams' New Big Bad, himself little more than Palpatine Redux, so Abrams did the next best thing: bring back the original. Again, we know this extra-textually, because Abrams made this particular decision upon returning to Star Wars to take over from Colin Treverrow, and neither Treverrow nor Johnson had any inkling of Palpatine's impending resurrection.
This brings us to the third point, that IX works as a sequel capping off the three cycles of Star Wars films. I think some clarity can be gained here by reframing the question one asks. To wit: the following are distinct questions that admit of different answers:
1. Is IX a fitting sequel to VIII?
2. Is IX a fitting conclusion to VII?
3. Is IX a fitting conclusion to IV–VI?
4. Is IX a fitting conclusion to I–III?
Defenders of IX are, so far as I can see, interpreting IX in the light of either The Force Awakens or the prequel trilogy (and thus question #2 or #4). Understood in that way, I can see why they might answer in the affirmative. If the nine-film saga is finally about both the Skywalker and the Palpatine bloodlines, or "houses," then in a way IX works, especially the final act. Moreover, IX works quite well as a kind of direct sequel to VII: the style, the humor, the storytelling, the recycling of tropes, characters, even lines of dialogue: if VII is your jam, you're bound to love IX.
Where IX does not work—at all—is as a sequel to The Last Jedi or as a conclusion to the original trilogy. Regarding the latter, it disentangles and disintegrates the beautiful commingling of the personal and the political, encapsulated perfectly in the final act of Return of the Jedi when the throne room scene functions as both an intra-family drama and a microcosmic battle upon which the fate of the entire galaxy hangs. And this is itself the culmination of a three-film discovery of this very entanglement: Luke is an orphan whose father was murdered long ago, only to learn the would-be murderer is his father, only to realize the princess he sought to help is his own sister. And to save his father he must be saved by him, thus destroying the Emperor, thus destroying the Empire: this is not merely to restore balance to the Force and to bring peace to the galaxy but to restore order and bring peace to his own family.
But IX undoes this: the New Republic is annihilated at the drop of a hat by a Death Star 3.0; the Empire is resurrected as the First Order; the Emperor is resurrected as ... himself. (All this has happened before, and all this will happen again.) But IX gives us no reason to think, with Rey's supposed victory, that days or weeks or months later it's not all going to play out again, exactly as before. When Luke throws away his lightsaber and Vader throws the Emperor down the shaft, the elation and catharsis we originally felt and are meant still to feel at the simultaneous personal and political victory just accomplished have been reduced to the personal alone.
And as for VIII: The Rise of Skywalker is not just a failure of a sequel to The Last Jedi—at every level—it is a sort of anti-sequel. It actively, intentionally, and painfully seeks at every turn to undo the plot, themes, and character arcs of the previous film. And even if your judgment of VIII is that it is something less than superlative (a misguided judgment, but one I'll permit for now), surely you must admit that a direct sequel that so clearly hates its predecessor, laboring with all its might to unburden itself of its inheritance, is a recipe for narrative confusion, incoherence, and sloppiness. Fittingness is about beauty, harmony, and order. The only honest feeling one can have going from TLJ to TROS is whiplash. Whoever is to blame for that (Kathleen Kennedy above all, also Abrams, but not at all Johnson, who was given free rein and no heads-up about what was to come), the result is a fracture within this final trilogy that weighs heavily on the ability of IX to perform its duty as triple finale: sequel/conclusion at once to Rey, to Luke and Leia, and to Anakin. It buckles and breaks under the weight—one chosen by the writer-director, not forced upon him.
–Another word about Abrams: the man loves his narrative cheats and work-arounds. His worst vice is impatience. Kirk must be promoted immediately. Rey must already be a brilliant pilot and swordswoman. Finn can be a general just like that. Need a master code-breaker? Poe knows a gal. Need to get to Exegol to stop the super-duper-master-fleet-each-as-powerful-as-a-Death-Star within 16 hours, plus round up the entire galaxy to come help? Sure, why not.
Worse than this abbreviated storytelling tic, Abrams as often as not refuses to put in the time or the effort to earn our affection or trust for his characters or plot beats. Rather, he works with borrowed emotional capital. He knows how we already feel about Han or Leia or Chewie or Luke, and he uses that to his advantage. He brings in a stray "memory" to turn the lead villain into a hero. He has the good guys and bad guys keep flying the same ships, in the same planet locales, wearing the same suits, with the same droids along for the ride: and this time, no one can die, not for good. Does he need to create a new character to rally the troops for a final battle? Nope. Billy Dee Williams will do the job nicely. Should Benedict Cumberbatch play a heretofore unknown adversary for Kirk, Spock, & co.? Nah: Khan 2.0 is what the people want. And you've got to give the people what they want.
–I continue to lament the unintelligible and finally uninteresting character arc for poor FN-2187. If VII, following his desertion, was about Finn's struggle with a kind of cowardice—running away from evil and danger—and VIII about wrestling with recklessness—now running straight into danger and certain death—then IX should have concerned the mean of those two extremes: what it means for this child-soldier turncoat to be courageous, to embody the virtue of bravery.
Instead, we get a replay of both VII and VIII, with Finn funny but Rey-obsessed while attempting, once again, a suicide mission (only this time with no Rose to keep him from following through). And though Abrams has said in interviews that the film was meant to reveal that Finn, too, is Force-sensitive, and that this is what he wanted to tell Rey, this is far from evident in the film, and the half-hearted attempt to address his affection for Rey is clumsy at best (not to mention, for the thousandth time, acting as if Rose, i.e. Abrams' stand-in for VIII/Johnson, doesn't exist). What a missed opportunity.
–If Finn stands for the virtue of courage, it seems to me that Poe stands for prudence: a virtue he learned in the previous film, but which he must re-learn for Abrams here, since what happens in TLJ stays in TLJ. But what if Poe were less earnest in this film, less impulsive and soul-searching and navel-gazing, and more of a straightforward, prudent, wise leader?
As for Rey: I want to say her virtue is justice, paired with religio. Rendering what is due to those to whom it is due in proper proportion, while honoring, with an appropriate piety, those to whom one is indebted—including, in her case, Rey's literal parents, but also and especially her adoptive family: the Resistance, on the one hand, and the Jedi, on the other. (Though I don't love "Rey Skywalker" at the end, I will allow it on this reading.)
–About Kylo Ren's turn. It was not necessary, much less inevitable, narratively, that he break good like his grandfather. Johnson posed this question in VIII: what if Ben Solo were offered the opportunity, considered it, but turned away (whether in weakness or in malice)? That is a story that could have been told well.
A story that involves his repentance from evil also could have been told well. That didn't happen here, not least because the turn is a cheat: saved from death, his own memory converts him. What's more, the film has him simply become Good at that point. But the brilliance of Vader's turn in VI isn't that he goes from Evil to Good at the drop of a hat. Rather, he lets his paternal love for his son overthrow his willingness to cooperate with evil—and only thus does he turn on Palpatine.
Adam Driver's performance overcomes Abrams' deficiencies as a writer here, but the intriguing possibility was for a transformation that is only partial: so that Ren's conflicted badness—"Millennial Darth" playing dress-up but unable fully to embrace evil's true depths—becomes a conflicted goodness: love and devotion to Rey, perhaps, but not necessarily to her cause or her friends or the ends and virtues she stands and fights for. That is more dramatically interesting, truer to the character, and would have made for a fascinating open-ended "ending": Ben Solo, reconciled to Rey but not to himself or to what she loves. What does the future hold for such a pair? An unstable settlement, for sure, and far less of the happy-clappy kumbaya with which IX wants to repeat VI.
–Speaking of Ren and Rey, I'm increasingly dissatisfied by that kiss. I like the continued theme of attachment overcoming detachment, even as attachment presents the greater temptation for disordered loves and thus for a fall into evil. Thus is Anakin lured to the Dark Side by Palpatine via his dysfunctional love for Padme; but thus also is Anakin redeemed by Luke's well-ordered detachment (willingness to die, lightsaber thrown away) rooted in proper attachment (love for his father and sister, unwillingness to kill in anger). This very balance of detachment within attachment is undone in Luke's fear (in the flashback of The Last Jedi) of what the young Ben Solo is capable of: and this unbalance tips Ben over to the Dark Side. What brings him back to the Light is Rey, who she is and what she does, and his, Ben's, overpowering love for her.
But is that love eros? It certainly isn't familial. It seems to me that, even at the level of the text of the films (VII and IX on their own, but also VIII), it isn't eros, either, but rather philia. The love of Rey for Ren and of Ren for Rey is one of friendship. That is itself one of the under-discussed themes of Star Wars, whereby the personal is wedded to the political not through family alone but through the power of friends to band together in the face of unimaginable power and terrible odds. (Perhaps the great failure of the prequel trilogy is its inability to depict friendship well, chiefly between Obi-Wan and Anakin. Not for lack of trying...)
But perhaps I'm not giving Abrams his due. On his terms, the eros of the new trilogy echoes and recapitulates the tragic eros of the prequel trilogy, with filial love anchoring the original trilogy and friendship uniting all three. I prefer my alternative, however, since it isn't so much a failure of eros as one of philia that prompts Anakin's fall in Revenge of the Sith: his inability to be a friend to Obi-Wan, in truth and in justice. In which case, the redemption of Anakin's grandson by (ugh) Palpatine's granddaughter comes about by that very love whose lack doomed Anakin in the first place. Going forward, in fact, I think I will choose to read IX in this way, regardless of Abrams' intentions, since Ben and Rey's kiss can be interpreted as a kind of exuberant exclamation point in the Hamlet-esque final moments of their ostensible shared deathbed.
A silver lining, that, in an otherwise diverting but finally disappointing denouement to the Skywalker saga. If nothing else, Abrams always delivers by forcing his audience to ask a single, lingering question of all his films: what might have been.
The 11 Best Hour-Long TV Dramas of the Decade (2010–2019)
A few months back I posted this list to Twitter, but I thought I'd re-post it here, with a bit more commentary, as well as a reshuffling due to Mr. Robot's outstanding fourth season.
First, to the rules. This is a list of hour-long dramas: so no half-hour genre-exploders (Atlanta, Louie) or comedies (Parks and Rec, Brooklyn 99). I'm also only thinking of TV series, with discrete seasons that tell something of a unified narrative: thus excluding miniseries (e.g. The Honourable Woman) and specialty shows (a la Sherlock or Black Mirror). Further, in order to qualify the series must have at least three seasons to its name (so The Knick falls short and both Succession and Yellowstone ran out of time before decade's end). Seasons prior to 2010, however—such as Mad Men's first three or Breaking Bad's first two—don't count for the purposes of this list. I am solely considering television seasons comprising hour-long dramatic episodes shown or streamed between 2010 and 2019.
Now to the list:
1. Rectify (SundanceTV, 2013–2016)
2. The Americans (FX, 2013–2018)
3. Breaking Bad (AMC, 2008–2013)
4. The Leftovers (HBO, 2014–2017)
5. Better Call Saul (AMC, 2015–)
6. Mad Men (AMC, 2007–2015)
7. Game of Thrones (HBO, 2011–2019)
8. Mr. Robot (USA, 2015–2019)
9. Justified (FX, 2010–2015)
10. Fargo (FX, 2014–)
11. The Expanse (SyFy/Amazon, 2015–)
Comments:
–My, that's a M-A-N-L-Y list. No apologies—one is who one is, one likes what one likes—but I'm not blind to it.
–Some shows missed the cut due to waning quality in later years: I'm looking at you, The Good Wife, and you too, Orange Is the New Black.
–Others were marked by high highs matched only by equally low lows: e.g. Homeland, True Detective.
–Consulting my annual lists, I was reminded of Boardwalk Empire, which is sorely underrated. Its fourth season is up there among the single-season masterpieces. But I'll never be able to shake Matt Zoller Seitz's comment, when he reviewed the short-lived series Boss, that the character Nucky Thompson should have been played by Kelsey Grammer. The show becomes an immediate classic in that alternate universe.
–Hannibal! A real show that really played on NBC—NBC!—for three—three!—seasons! That second season, y'all.
–You know, I never got around to watching the final season of Halt & Catch Fire. An unjustly overlooked show, beloved by none but critics. But the fact that I just never quite found myself needing to finish the story might say something. About the show, or about me, at least.
–It would be easy enough to keep the list to a clean ten and leave off The Expanse. But it got too good in those second and third seasons; I couldn't do it.
–Were it not for Mr. Robot's second season, I might have been willing to move it up to the top five. Alas.
–Game of Thrones is so strange. Those last couple seasons were so dreadful overall (fun at times, but almost always stupidly silly), and the series was far from flawless in the first six. But the sheer narrative scope, the quality of the source material, the heft of the story and acting, the excellence (at times) of the writers' ability to juggle so much so deftly, and, man, those big moments: it still deserves much of the awe it garnered.
–For me, at least, separating rankings by time limit and/or genre makes things so much easier than it would otherwise be. How are you supposed to compare Mad Men to Parks & Rec, or Veep to Mr. Robot? But once you sort for genre and running time, the top 10-20 dramas more or less sort themselves.
–Watch Rectify. It may well be the only TV show—given my predilection for telling people to turn their screens off, not on—that I suggest people watch, and without reservation. It's that good.
With Mr. Robot till the end
The TV show Mr. Robot ends its four-season run in six days.* It began four and a half years ago, in the summer of 2015. It didn't exactly begin with a bang, but the whimper of its premiere (if I may mix metaphors) snowballed into one by its first season finale. Its seemingly omnipresent, omniscient mastermind of a creator, Sam Esmail, appeared to be the Next Big Thing in TV: a child of '90s cinema, he was and is Fincher and Spielberg and Soderbergh and PTA and Tarantino all—at least aspirationally—rolled into one. Eventually writer-director of every single episode—40 in total by series' end—Esmail made the show the complete vision of a self-styled auteur if ever there was one.
The second season lost much of the good will and momentum generated by the first. Sprawling, dense, literally and figuratively dark, trapped for much of its time in Elliot's mind: critics and viewers alike dropped the show then in notable numbers, or so it appears from online commentary and anecdotal conversations. And although season 2 contained high points, especially in the back half of the season, much of the criticism was justified.
I'm here to tell you, though, that not only are seasons 3 and 4 worth your time. Not only do they contain some of the most creative, entertaining, formally innovative storytelling in the medium. They might make for the two best seasons of TV in the last 3-5 years, at least for hour-long dramas.
The only comparable series in terms of back-to-back seasons during the same time frame would be Atlanta (though not an hour-long drama), Succession, The Leftovers, Better Call Saul, The Expanse, and The Americans (though its strongest run was seasons 3-4 in 2015–2016). Pound for pound, Mr. Robot is a peer to those heavyweights, and it might actually be the champ.
I realized this in the last six weeks when, on Sunday or Monday evenings, I had the choice to catch up on any number of quality shows: Watchmen, Silicon Valley, The Good Place, The Mandalorian, Jack Ryan (okay that one turned out not to be so good). And I kept finding myself, against what I felt to be better (or rather, public critical) judgment, opting for Mr. Robot: both to-breathlessly-see-what-happens-next, and for the sheer pleasure, the emotional thrill, of experiencing one more hour in Sam Esmail's world. Not one episode has let me down.
Season 1 was all world- and character-building, along with establishing the style of the series. Season 2 went inward, at times too deeply or wildly, but without ever quite losing sight of the goods or goals of the story and its characters.
But seasons 3 and 4 have been masterful as exercises in pace, plotting, tone, tension, and two different balancing acts: narrative and character arcs, on the one hand, and form and substance, on the other. I find myself, against all odds, caring about Elliot and Darlene and Mr. Robot and Angela and the rest. And the virtuosic experimentation has been exquisitely married, at each juncture, to the nature of the action and the purpose of the narrative and where we find the characters therein: whether that involve a silent episode featuring a heist and a host of isolated characters texting one another, or a multi-act stage drama bottle episode, or an "uncut" single-shot thriller—or whatever. It is beyond thrilling. It's mesmerizing. I find myself drawing closer to the screen, so captivated I'm standing, the music of Mac Quayle blasting as the odd angles of the camera dislocate the persons on screen relative to one another.
However Esmail concludes the series this Sunday,* he'll be ending the way he always envisioned: not with the external action, but with the internal drama of Elliot's soul. That's as it should be. He's set us up for more than one big surprise. But the revelations won't make or break the series. He's already made good on his promise. Mr. Robot is the real thing, and I'm with it—with him—till the end.
*Update: Unbeknownst to me, the fourth season of the show does not contain 10 episodes (like the previous three), but 13. I thought I checked this back in September, but perhaps I just assumed. Episode 9 only confirmed the assumption in its "feel" as a penultimate episode. Well then! Episode 10 certainly proved that assumption wrong. The good news: three more hours of what is still a great show.
A Twitter amendment
Last weekend I was in San Diego for the annual meeting of AAR/SBL, and (as has become my custom) I mostly saw old friends and new acquaintances. Most of the latter I have "met" online; most of those "meetings" were on Twitter.
Tomorrow marks 8 weeks since I began my experiment with decreasing my Twitter usage: zero time on that infernal website Sunday through Friday, and 30 minutes or fewer on Saturday; moreover, no active tweeting (original, RT, replies, etc.) on any day of the week: only occasional links to something I've written.
After San Diego, I'm reconsidering my experiment, or rather, considering an amendment to it. I think I'm going to try a modest "return" to being an active rather than passive user on Twitter, albeit within the same time and use constraints I've already set for myself. That is: limit both reading and tweeting to Saturdays, for 30 minutes or so, but become a sort of power-user for that half-hour of time: sharing thoughts, interacting with others, retweeting, threading ideas, following new accounts, replying and connecting, etc., etc.
It's another experiment, and if the various negative consequences of using Twitter that caused my first self-imposed exile return in any way, I'll drop it ASAP. Twitter brain, groupthink, the itchy need to check replies, the inability to focus while reading, long dark rabbit holes that bruise the soul: none of that, thank you very much.
But seeing old friends and new (and in the flesh, at that!), so many of whom I've met through that otherwise detestable website, persuaded me that there might be additional such benefits on the horizon. Given Twitter's systemic effects, I continue to believe that it ought to be burned to the ground. But perhaps I can squeeze a few more drops of good out of it before it (Lord willing) goes up in flames.
Luddites and climate activists, unite!
I encourage you to read Ben Tarnoff's piece in The Guardian from a couple months back: "To decarbonize we must decomputerize: why we need a Luddite revolution." The very worst approach to technology is fatalism: it's inevitable; it's the future; we just have to accept it. The second worst is denialism: it's not so bad, since (obviously and necessarily) nothing so central to our lives could be as bad as the naysayers suggest. The third worst is a failure to make connections, a failure oddly ubiquitous among the liberal folks I talk to about this issue. If either free-market liberalism or the digitization of our lives is so good, then why are the effects so bad for the environment? And what brakes stand in the way of further ecological harm? Denial underwriting technological fatalism certainly won't do the trick.
Perhaps climate activists are allies in waiting for Luddites, and vice versa. As Tarnoff observes, both perceive the costs of technology in the present tense. In a time obsessed with either moment-to-moment minutiae that don't matter or a utopian future that doesn't exist, the present problems bearing down on us, in the form of both climate change and technological takeover, seem like the right place to begin.
MCU Phases 4 & 5: dream or nightmare?
I have a mixed relationship to the Marvel movies that have so dominated the last decade of Hollywood. On the one hand, I readily enjoy them. I think, for the most part, that they are well-made blockbusters, occasionally quite good, directed competently, written with care, and acted superbly. Their achievement as TV-like serialization across 23 films (and three "phases") is, as Matt Zoller Seitz has written, without precedent and accordingly impressive.
On the other hand, I'm neither a comic books "fan" nor an apologist for the MCU. I've read all of two graphic novels in my life, and have nothing invested in "geek culture." I furthermore share the general sentiment that the Marvel-fication of cinema as such is an unhealthy trend. It isn't good that there's a new superhero movie out every three weeks, and that Hollywood wants any and all blockbuster filmmaking to be (a) built on preexisting IP and (b) part of a larger "cinematic universe."
At the same time, I think it's too easy to use Feige and the MCU as a scapegoat. Marvel's (and Disney's) success did not and does not necessitate a systemic change in Hollywood, or the monotonous assembly line of genre fare we've seen in its wake. Moreover, while critics like Scorsese are certainly right to be exhausted by the last decade, two factors militate against an overreaction. First, great films continue to be made and recognized. Second, "cinema" includes more than the art house: indeed, cinema intrinsically includes the spectacle of sheer, broadly appealing fun. Scorsese and his cohort of directors know that more than anyone.
Having said all that, with Feige and Marvel taking a victory lap right now, it is a fascinating and revealing thing to look into Disney's plans for the MCU going into the next 3-4 years. For denizens of the art house, it is indeed a nightmare of sorts. For geeks, doubtless it is a dream. But like all dreams, it's going to come to an end. Indeed, in looking at the lineup below, it's hard to believe it's real.
So far as I have been able to put together, what follows is the forthcoming schedule of theatrical films and television shows (exclusively on Disney+) on the slate for 2020 through 2023 or so. Beginning with 2022 I'm taking educated guesses on timing. Movies are bolded and shows have a (+) after them. Read it and (alternately) weep or rejoice.
2020
May – Black Widow
Fall – The Falcon and the Winter Soldier (+)
Nov – The Eternals
2021
Feb – Shang-Chi and the Legend of the Ten Rings
Spring – Loki (+)
Spring – WandaVision (+)
May – Doctor Strange in the Multiverse of Madness
Summer – What if...? (+ animated)
July – Spider-Man 3
Fall – Hawkeye (+)
Nov – Thor 4: Love and Thunder
2022
Feb –Deadpool 3
Spring – Moon Knight (+)
May – Black Panther 2 [update: confirmed]
July – Ant-Man 3
Fall – Ms. Marvel (+)
Oct – Blade [February seems a better bet, but since they specified October, that suggests "scary"]
2023
Feb – Captain Marvel 2
Spring – She-Hulk (+)
May – Guardians of the Galaxy 3
July – Fantastic Four reboot
Nov – X-Men reboot [perhaps FF or X-Men are introduced back-door via a summer or fall Avengers 5, a la Black Panther or Spider-Man in Civil War—that seems wisest]
–It's worth noting that not included here are even further sequels: Shang-Chi 2, Doctor Strange 3, Spider-Man 4, Black Panther 3, The Eternals 2, Captain Marvel 3, and so on. It also assumes some sort of big team-up. [Updated question: Will there be another "Avengers" movie, properly speaking? Or will team-ups just happen organically within other characters' movies?] And there are surely even more properties and characters to be introduced that aren't mentioned here.
–It seems clear that, although 2020 will revert to two films for the year—a sort of deep breath after Endgame and before the onset of Phase 4—beginning the following year it's going to be four MCU films per year going forward. And that, as they say, will test the market's limits.
Experiments in Luddite pedagogy: dropping the LMS
This semester I wanted to experiment with teaching my courses without the use of an LMS. For those unfamiliar with the term, LMS stands for "learning management system," i.e., an online program for turning in assignments, communicating with students, updating the syllabus, inputting grades, etc. Some of us used Blackboard back in the day. My campus uses Canvas.
Now, Canvas is without question the best LMS I have ever encountered: intuitive, adaptable, not prone to random glitches and failures, useful for any number of pedagogical and technological ideas and goals. So far as I can see, after 15 years or so, the technology has finally caught up to the vision of using the internet well for teaching purposes, a vision ahead of its time one to two decades ago, and one which probably, as a result, led to a lot of wasted time and self-defeating habits.
But, you might be wondering, if Canvas is a good LMS, why did I want to experiment with not using one? Here's why.
1. I want to be intentional in my use of technology, in my life and in the classroom. My campus is an LMS-supporting, Canvas-deploying environment. I've never heard of anyone even broaching the topic of not using it, or even of using it as minimally as possible. The presumption is not that one ought to decide whether to use an LMS but how. I wanted to test that presumption.
2. I have many colleagues who are not only tech-savvy but pedagogically creative, even brilliant, in the ways they put Canvas to use in their courses and in their classrooms. I am not among them. Partly for philosophical reasons, partly for pragmatic reasons, I simply do not put Canvas to maximal use. Indeed, in most of my courses I put it to minimal use: sometimes exclusively as an online home for the syllabus and for students' grades. Which raises the practical question: If that's all I'm using it for, why use it at all?
3. My diagnosis of students—a diagnosis I share with them, since the diagnosis applies to our culture more broadly, and since some of my classes try to tackle the problem head-on—is that they are overly reliant upon, even addicted to, screens: above all their smartphones. Part of my move toward (so-called) Luddite pedagogy is that I don't want to contribute to that addiction if (a) there are alternatives and/or (b) that contribution would not justify the additional screen time it would require. In other words, if I'm going to ask, encourage, or (Lord help me) mandate that my students be on their devices more than they already are, then I had better have a very good reason for it. Do I? Do we?
4. The three greatest "needs" addressed by an LMS are communication, syllabus, and grades. All other uses, so far as I can see, are optional: each professor is (or should be) free to employ it—or not—to whatever further pedagogical ends she has for her course. But those differ per the nature of the class, the character of the instructor, the style of assignments, and so on. What then of those necessities?
5. Communication is most simply dealt with: I communicate with my students face to face, in class, or via email. Communication via LMS has only ever seemed to me like one more thing to add to all the other modes of digital communication in one's life (text messages, Google Chat, Slack, Facebook messaging, Twitter DMs, Instagram DMs, on and on). I only know if someone has messaged me on Canvas if Canvas alerts me by email. Why not just cut out the middle man?
6. I understand the desire for an online syllabus. I prefer not to have one, but for those who do, I would go one of two ways. Either Google Doc—which is simple, accessible, and revisable—or a one-page blog post, preferably on one's own domain (examples: Alan Jacobs; Jeffrey Bilbro). Moreover, I discovered that my students were (a) ignoring my verbal instructions about assignments and scheduling in favor of what Canvas told them and (b) ignoring the syllabus PDF on Canvas in favor of what Canvas's schedule of upcoming assignments told them. It turns out that form isn't content-neutral and the medium is the message (ever and ever, amen): students have been trained by LMS programs since middle and high school not to read or even to listen and instead to consult their online home page for course guidance. If the home page says jump, they jump. If it says nothing, they know there's nothing to do—even if the professor or the written syllabus says otherwise. So this semester one of my experiments was the lack of a Canvas home page "consultation" device: they had to read the paper syllabus I handed out to them as well as pay attention to what I said in class. A novel concept, no?
7. Immediate and ever-ready access to grades is both the greatest expressed desire on the part of students and that which has caused me the most worry about presumptive usage of LMS in higher education. Students and professors talk as if the absence of such access is a cause for anxiety in an already anxiety-ridden generation. My observation has been the opposite. From what I can tell, immediate and ever-ready access to grades does not alleviate but rather generates and increases student anxiety. Students' default settings on Canvas—which they have not only on their tablets and laptops but, naturally, on their smartphones—sign them up for email alerts and push notifications for any and all changes to the grade sheet, including changes to other students' grades. For though they can't see others' grades, they can see the average grade for an assignment, which changes as others' grades are entered or modified. Now, students, like the rest of us, are already addicted to their phones. Add to that the ever-present possibility that grades might be entered, or start being entered. Add to that receiving push notification after push notification updating the average grade for a course assignment, prior to receiving one's own grade. It's a recipe for stress. And even if, on the instructor side, you do your part to minimize all those alerts, students can still go online and check their grades at any moment, calculating their (incomplete and rarely predictive) average and comparing their individual grades to how their peers did (on average). I am flat unpersuaded that this is a good thing.
So I opted for my little experiment (with support from my chair). What have I done, and how has it gone so far?
1. Each student receives a printed copy of the syllabus the first week of class along with detailed verbal commentary by me. I also email a PDF to everyone in the class. I'm not a "revise as we go" teacher, so any changes are minor (e.g., no class on X day because my kid is sick, etc.).
2. I communicate in class or via email (or one-on-one in office hours)—full stop.
3. All assignments are completed or submitted by hand or in person: quizzes are taken in class without the use of laptops or phones; papers are printed out and turned in during class; there are no online class discussions; etc. Reasonable exceptions are permitted due to ability, availability, emergencies, and so on, but these are the norms.
4. One of my courses uses a bunch of scanned PDFs of chapters and essays. I simply uploaded all of them into a university Google Drive and shared it with the students in the course.
5. As for grades: This proved the biggest experiment of all, though it's merely a throwback to the way professors did things for decades before the advent of LMS. I keep a spreadsheet for each class where I input grades, absences, etc. The students' names are in a random order, and each student has a (privately assigned) number. At the end of each week, I print out the spreadsheet, minus their names, and post it outside my office. (This is the FERPA-approved method.) Students know their grades are updated weekly, and can come by anytime. They received their confidential identifying number by individual email early in the semester, and their grades remain anonymous that way. For my smaller courses (seminar-like in numbers), I bring the spreadsheet to class when I return major assignments like papers.
6. Why this route for grades? First, to undercut the anxiety of alerts and notifications. Second, to remove one more digital temptation for perpetual checking and refreshing: "I wonder if he'll update them online now? I wonder if he already has? I'll go ahead and check." Third, to motivate me to grade in a timely fashion. Fourth, to encourage students to come by my office and, if they have questions about grades, to ask me questions then and there rather than via email the moment grades are posted online. Fifth, to routinize the giving and posting of grading so that it's not a pall hovering over my head at all times, but has a structure and rhythm within the work week.
7. So: How have students responded? Without a single complaint. Not one problem. Now, we're in week 10 of 15. Perhaps there will be some students who organize mass protests at the end of the semester for one reason or another. I solicit anonymous feedback mid-semester, and that is where I got the idea to bring the grade sheet to my smaller classes when papers are handed back. But otherwise it's been smooth sailing on the student side: no missed assignments, no botched communication, no "but Canvas said!" I'm honestly still a bit shocked that there haven't been a few more complaints or requests for online grades: I told them up front that it was an experiment, and that I was open to revision or reversion if it didn't go well. But I've seen no resistance on their side whatsoever.
8. And on my side, it's been one long victory march. I've deleted my Facebook account, I've reduced Twitter to ~30 minutes on Saturdays, I severely limit my time on email, and now I'm not spending hours on Canvas when I could be doing something more productive with my time. Again, the point isn't that any and all LMS usage is evil or time poorly spent; it's that such usage ought to be intentional and purposeful. For me, it had become one more digital box to check, not a positive contributor to my pedagogical goals or my students' well-being. I have colleagues who use it well and I have other classes in which I too use it (hopefully well enough). But as for this semester, the net benefits have been manifold. Less time on the laptop, less time online, more time for other work tasks, and more timely and efficient grading. Win, win, win, win.
9. The question now is next semester. This semester I have all upperclassmen in elective courses. Next semester I'm teaching a one-week intensive in January, another elective for upperclassmen, but also a freshman survey class that is lecture-based. The intensive course relies on Canvas both before and after we meet, so I will probably keep it (though I suppose I could drop it if I solved the problem of how to give them their grades apart from the LMS grade sheet). But I'm disinclined to nix the LMS for the freshmen, for two reasons. First, they're coming from high schools where they relied on an LMS, especially for grades, and at this stage the lack of one might freak them out. Second, I have productively used online discussion posts for an assignment in this particular course, and unless I think of an alternative, I'm loath to drop it. But since I'm new to this experiment, and since it has gone so well (even better than I imagined, if I'm honest), I might keep trying to think creatively about what it would mean to go LMS-free across all my classes.
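For the technically curious, the weekly grade-sheet routine described above (random order, private numbers, names stripped before posting) can be sketched in a few lines of code. This is only an illustrative sketch of the idea, not anything I actually run; the function name, the data shapes, and the seed are all invented for the example—in practice a plain spreadsheet does the job.

```python
import random

def anonymize_gradebook(rows, seed=0):
    """Given a list of (name, grade) pairs, shuffle the students into a
    random order, assign each a private number, and return two things:
    a roster mapping name -> number (kept confidential, emailed to each
    student individually) and a printable, name-free sheet of
    (number, grade) pairs to post outside the office."""
    rng = random.Random(seed)  # fixed seed keeps numbers stable week to week
    shuffled = rows[:]
    rng.shuffle(shuffled)

    roster = {}
    printable = []
    for number, (name, grade) in enumerate(shuffled, start=1):
        roster[name] = number
        printable.append((number, grade))
    return roster, printable
```

The one design point worth noting is the fixed random seed: students keep the same private number all semester, so the printed sheet stays comparable from week to week without ever showing a name.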
We shall see. More reports to come from my haphazard attempts at Luddite pedagogy. Until then.
The question for Silicon Valley
A single question has lingered over Mike Judge's Silicon Valley from the beginning. That question is whether he and his writing team—call them "the show"—believe that Richard and his unlikely crew of can-do programming losers not only can but ought to "win," and that such a win could be genuinely transformative and good for the world, or whether the system and culture of Silicon Valley are so fundamentally corrupted that even to win is to lose.
This dynamic has made the show worth watching till the end, but frustrating at times as well. It's not just whether Richard or his friends might "break bad," which the show entertained for a while. It's whether we, the audience, ought to cheer on Richard when he triumphs over the big bads of Google and Twitter and Facebook (or their stand-in "Hooli"), with his dream of a "free internet," or whether we ought to see through the self-serving rhetoric that attends every such dream.
Judge et al have given enough hints and plot turns to suggest that they know where they're headed and that the destination will be just as dark and pessimistic as the show's heart has proven to be throughout. But going on sixty hours of watching Richard blunder his way, with occasional eloquence, through one obstacle after another also suggests that the show might fall victim to the trap all TV writers fall into: coming to love their characters too much to let them lose. How can you deny viewers resolution, handing them a sad and humiliating conclusion after they've spent so long with "their friends"?
I hope Judge sticks to his guns and (to mix metaphors) sticks the landing. Silicon Valley needs an ending fitting to Silicon Valley IRL. That ending should be pitch black. There is no saving it. To win is to lose. The internet is doomed: no algorithm or digital wizardry can redeem it.
Richard represents all those tech gurus who came before him: the true believer before the windfall comes. Let the curtain descend on this once-idealistic CEO, awash in fame and money, sitting on the throne of an empire that continues to tyrannize us all. Whether he is smiling or weeping, that's the only honest end to this tech-start-up story, because the story is the same for all of them. Begin with hope, run the race, keep the faith—and finish with despair.
On blissful ignorance of Twitter trends, controversies, beefs, and general goings-on
Being off Twitter continues to be good for my soul as well as my mind, and one of the benefits I'm realizing is the ignorance that comes as a byproduct. By which I mean, ignorance not in general or of good things but of that which it is not beneficial to know.
When you're on Twitter, you notice what is "trending." This micro-targeted algorithmic function shapes your experience of the website, the news, the culture, and the world. Even if it were simply a reflection of what people were tweeting about the most, it would still be random, passing, and mass-generated. Who cares what is trending at any one moment?
More important, based on the accounts one follows, there is always some tempest in a teacup brewing somewhere or other. A controversy, an argument, a flame war, a personal beef: whatever its nature, the brouhaha exerts a kind of gravitational pull, sucking us poor online plebs into its orbit. And because Twitter is the id unvarnished, the kerfuffle in question is usually nasty, brutish, and unedifying. Worst of all, this tiny momentary conflict warps one's mind, as if anyone cares except the smallest of online sub-sub-sub-sub-sub-sub-sub-"communities." For writers, journalists, and academics above all, these Twitter battles start to take up residence in the skull, as if they were not only real but vital and important. Articles and essays are written about them; sometimes they are deployed (with earnest soberness) as a synecdoche for cultural skirmishes to which they bear only the most tangential, and certainly no causal, relationship.
As it turns out, when you are ignorant of such things, they cease in any way to weigh down your mind, because they might as well not have happened. (If a tweet is dunked on but no one sees it, did the dunking really occur?) And this is all to the good, because 99.9% of the time, what happens on Twitter (a) stays on Twitter and (b) has no consequences—at least for us ordinary folks—in the real world. Naturally, I'm excluding, for example, tweets by the President or tweets that will get one fired. (Though those examples are just more reasons not to be on Twitter: I suppose if all such reasons were written down, even the whole world would not have room for the books that would be written.) What I mean is: the kind of seemingly intellectually interesting tweet-threads and Twitter arguments are almost never (possibly never-never) worth attending to in the moment.
Why? First, because they're usually stillborn: best not to have read them in the first place; there is always a better use of one's time. Second, because, although they feel like they are setting the terms of this or that debate, they are typically divorced from said debate, or merely symptoms of it, or just reflections of it: in most cases, not where the real action is happening. Third, because if they're interesting enough—possibly even debate-setting enough—their author will publish them in an article or suchlike that will render redundant the original source of the haphazard thoughts, now well organized and digestible in an orderly sequence. Fourth and finally, because if a tweet or thread is significant enough (this is the 0.1% left over from above), someone will publish about it and make known to the rest of us why it is (or, as the case may be, is actually not) important. In this last case, there is a minor justification for journalists not to delete their Twitter accounts; though the reasons for deletion are still strong, they can justify their use of the evil website (or at least spending time on it: one can read tweets without an account). For the rest of us, we can find out what happened on the hellscape that is Twitter in the same way we get the rest of our news: from reputable, established outlets. And not by what's trending at any one moment.
For writers and academics, the resulting rewards are incomparable. The time-honored and irrefutable wisdom not to read one's mentions—corrupting the mind, as it does, and sabotaging good writing—turns out to have broader application. Don't just avoid reading your mentions. Don't have mentions to read in the first place.
A comprehensive list of undefeated teams in the NBA
Denver Nuggets, Minnesota Timberwolves, Philadelphia 76ers, San Antonio Spurs.
Just sayin'.
Must theologians be faithful? A question for Volf and Croasmun
In their new book, For the Life of the World: Theology That Makes a Difference (Brazos Press, 2019), Miroslav Volf and Matthew Croasmun argue that the lives of theologians matter for the writing and evaluation of theological proposals. Reading through chapter 5, though, where they make the argument, I was left unsure about what exactly they were claiming. Let me offer a sample of quotations and then a range of interpretations of the claim or claims they want to make.
(Full disclosure: Miroslav and Matthew are at Yale, and were there when I earned my doctorate; the former was a teacher, the latter a fellow student and friend. Take that for what it's worth. Here on out I'll call them V&C.)
Consider the following quotes (bolded emphases all mine):
- "execution of the central theological task requires a certain kind of affinity between the life the theologian seeks to articulate and the life the theologian seeks to lead." (118)
- "an affinity between theologians' lives and the basic vision of the true life that they seek to articulate is a condition of the adequacy of their thought." (119)
- "It would be incongruous for theologians to articulate and commend as true a life that they themselves had no aspiration of embracing. They would then be a bit like a nutritionist who won't eat her fruits and vegetables while urging her patients to do so." (120)
- "Misalignment between lives and visions ... is prone to undermine the veracity of [theologians'] work because it hinders their ability to adequately perceive and articulate these vision." (120)
- "living a certain kind of life doesn't determine the perception and articulation of visions, but only exerts significant pressure on them." (120)
- "Just as reasons, though important, don't suffice to embrace a vision of the good life, so reasons, though even more important, don't suffice to discern how to live it out. Our contention is that an abiding aspirational alignment of the self with the vision and its values is essential as well." (122)
- "[it is a requirement] that there be affinity between the kind of life theologians aspire to live and the primary vision they seek to articulate." (122)
- "Only those who are and continue to be 'spiritual' can ... perceive 'spiritual things.'" (125)
- "[An ideal but impossible claim would be] that only the saints can potentially be true theologians." (129)
- "Consequently, we argue for an affinity, rather than a strict homomorphy, of theologians' lives with the primary Christian vision of flourishing (always, of course, an affinity with the primary vision as they understand it)." (129)
- "Imperfect lives, imperfect articulations of the true life—yet lives that strive to align themselves with Christ's—and articulation that, rooted in this transformative striving, seek to serve Christ's mission to make the world God's home: this sort of affinity of life with the true life is what's needed for theologians to do their work well." (134)
- "Truth seeking is a constitutive dimension of living the true life; and living the true life—always proleptically and therefore aspirationally—is a condition of the search for its truthful articulation." (137)
Here, then, is a range of interpretations of what V&C might be claiming:
- The best theologian will be a saint, i.e., a baptized believer whose life is maximally faithful to Christ.
- All theologians ought to strive to be saints.
- All theologians ought to strive to align their lives with their articulated vision of faithfulness to Christ.
- Saints are likelier to be better theologians than those who are not.
- A necessary but not sufficient condition of faithful theology is sainthood, that is, faithfulness to Christ.
- A necessary but not sufficient condition of faithful theology is imperfect but real alignment between the life of a theologian and his or her articulation of faithfulness to Christ.
- One of the criteria for evaluating a theologian's proposals and arguments is the lived faithfulness to Christ on the part of the theologian in question.
- One of the criteria for evaluating a theologian's proposals and arguments is the alignment between that theologian's life with his or her articulation of faithfulness to Christ.
Is it truly a condition of theology done well that the person making the theological proposals be herself (even somewhat) faithful either to Christ or to her understanding of Christ's will? Is such faithfulness, moreover, a legitimate criterion for evaluating said proposals—so that, if we knew of the theologian's utter unfaithfulness (even attempted), such knowledge would thereby falsify or disqualify her proposals outright?
I remain unpersuaded that V&C really mean to make either of these claims, or that either of them is a good idea.
It seems to me that V&C are making a materially prescriptive argument—"this is how theology ought to be done and how theologians ought to understand their work"—underwritten by a generically descriptive argument—"the sort of practice theology is and the sort of subject it is about means necessarily that it is self-involving in a manner different from algebra or astronomy"—but not anything more. We should not, I repeat not, include our judgments of the character of theologians' lives in our evaluation of their ideas, proposals, and arguments. If a serial adulterer were to write an essay against adultery, and meant it (i.e., it was not an exercise in deception), the thesis, the reasons offered in support, and the argument as a whole would not be correctly evaluated in connection with the author's sins. They would stand or fall on the merits. Such an author is precisely analogous to the comparison V&C make to the nutritionist: she is not wrong to recommend fruits and vegetables; she is merely a hypocrite.
And here's the kicker: All theologians are hypocrites. That's what makes them uniformly unsaintly, even those canonized after the fact. For saints are recognized postmortem, not in their lifetime. And that for good reason.
(I should add: It's even odder, in my view, to say that theologians' work should be judged in accordance with the affinity between their lives and their ideas, rather than their lives and the gospel as such. Barth and Tillich and Yoder, for example, all offered ample justification in their work for their misdeeds. Properly understood, however, their actions were wrong and unjustifiable regardless of the reasons they offered, precisely because they are and ought to be measured against that which is objective—the moral law, the will of God—not their own subjective understanding of it or their rationalization in the face of its challenge.)
So it is true that there should be an affinity between theologians' lives and their ideas. Theologians of Christ should imitate Christ in their lives. And it is plausible to believe that their theology might improve as a result: that their vision into the things of God might prove clearer as a consequence.
But the unfaithful write good and true theology, too, and have done so since time immemorial. We ought to consider such theology in exactly the way we do all theology. For it is up to us to judge the theology only. God will judge the theologian.
About that Episode IX trailer
Everything I wrote here stands, but man alive is that a beautiful trailer.
It's a reminder of what Abrams gets: emotion, character, rapport, scale, energy, world-building, and—as this trailer not-misleadingly reminds us—composition and cinematography.
I'm not optimistic, but it's not hopeless just yet. No matter what, it will be an experience. The only question is whether he'll be able to stick the landing.
A few comments and predictions while we're at it.
1. I genuinely appreciate how much Abrams has withheld from us. I only wish he'd kept that shot of faux-dark Rey from us.
2. Rey will not "go dark." That brief glimpse is a Force vision (false or future), a clone, her under control, or her under cover. No other alternatives.
3. I appreciate that Abrams is giving us—at least suggesting he is giving us—some follow-up and resolution to the Finn–Rey relationship. Given their separation in VIII, it will be interesting to see how immediate and intimate the quality of their rapport is in IX.
4. The movie will be beautiful, epic, and capital-F Fun.
5. I've stayed away from spoilers (I know nothing but what's in the three trailers), but my guess is that Rey and Ren team up for some reason or purpose for at least part of the film, and that during that team-up Rey submits to being trained by Ren. One idea: they both realize the Emperor's plan and/or learn of some MacGuffin that is crucial to it and agree that it is better to join against him/it than lose divided. Perhaps either she or he deceives the other to induce the alliance; perhaps not.
6. I'm left wondering how Abrams will fit the preexisting footage of Leia into the narrative. If VII was Han's goodbye and VIII was Luke's, IX was going to be Leia's: first the father, then the uncle, then the mother. (Perhaps that is how Rey persuades Ren to join her? To follow her to the ruins of the second Death Star to discover what she sought there?)
7. A final prediction: The climactic scene of Rey before Palpatine will feature a sort of Jedi cloud of witnesses: Force ghosts from all the previous 8 movies combined, present and bearing witness to Rey and/or Ren's final showdown with the Great Sith Evil who has haunted all nine movies—the Skywalker Saga from start to finish. That means Qui-Gon, (Mace?), Obi-Wan, Yoda, Anakin, (Han?), Luke, Leia: I bet we see all of them, and a few more. Perhaps them in front, and hosts upon hosts behind them. Take it to the bank, y'all.
8. A final final prediction: The make-or-break question (derivative, or breaking the cycle?) is whether Abrams will have Ren break good right at the end, exactly the way his father did. I'm more concerned with the story playing out beat for beat in imitation of VI than whether Abrams ret-cons Rey's identity or some such thing. I would say, "Surely he knows better," except he's the one who thought remaking IV with a new MOAR BIG Death Star was a great idea ... so who knows.
Just 56 days. Oh, and I'm bringing my oldest: his first Star Wars movie in the theater. Come on, J.J. old buddy, don't let me down: you're my only hope.
A clarification on the NBA, China, and free speech
"Free speech" is a legal concept: whether the state in any way muzzles one's ability to speak or whether it responds punitively based on the content of one's speech.
Within civil society, an organization (for profit or not) is not a "player" in the realm of free speech. Organizations place all kinds of controls on one's speech within the workplace and, in certain respects, outside of it. These can be reasonable or unreasonable; they can be fairly or unfairly applied. But they are run of the mill, and have no bearing on "free speech."
Whether or not Daryl Morey is disciplined or even fired by the NBA for his tweet in support of Hong Kong has nothing to do with free speech. This isn't a free-market point, along the lines of "the NBA is free to do whatever it likes; it's a business, and Morey is an employee." That's technically true, but not my point.
Let me put it this way. To respond to the crisis elicited by Morey's tweet with the claim either that the NBA is mitigating his free speech by apologizing to China or that the NBA would be suppressing his free speech if it disciplined or fired him is a non sequitur. The legal freedom of expression accorded to Morey as an American citizen is untouched by the NBA's response to him.
But more important, the NBA and the entire ecology of fans, writers, and commentary that surrounds it wants the NBA to retain the ability to discipline its employees for certain kinds of speech. Five years ago Adam Silver terminated Donald Sterling's ownership of the Los Angeles Clippers based on a recording of something he said privately to another person. What he said was in no way illegal. What it was, rather, was immoral. And the NBA ecosystem responded, rightly, by calling for his removal from the league. That was a good and necessary thing to do. But it, too, was not an infringement upon Sterling's freedom of speech, even as it was a direct disciplinary response to private speech, offered freely, subsequently made public.
If an owner or a player were to tweet or write or say aloud something similar to Sterling's racist comments, I have no doubt that (a) he would be disciplined and (b) the NBA "community" would applaud the disciplinary act. Which means not only that the NBA has this power but that this power bears no relationship to free speech. Above all, it means nobody wants the NBA to lack this power.
The issue in the Morey–China Kerfuffle, then, is a matter, not of free speech, but of ethics. It's a moral question. And the political is contained within the moral.
The moral question is whether it is right for the NBA to muzzle the public speech of one of its employees regarding an international situation wherein there is a clear morally correct position, when to affirm that position will entail loss of revenue for the league in the millions or billions of dollars.
The related political question is whether the NBA is being inconsistent—in moral terms, hypocritical—in encouraging its employees to engage in public speech regarding domestic issues that are highly controversial within the nation, when such speech is unlikely to cost the league any revenue, while discouraging the aforementioned revenue-losing political speech.
The question beneath that last political question is an interesting one, and it's less related either to ethics or to capital. That question is: What is the range of acceptable political positions the NBA or any similar organization is willing to permit to be expressed publicly without disciplinary response? Accordingly, what are those concrete political positions the public expression of which would (rightly or wrongly) call forth censure, financial penalty, suspension, or termination?
I anticipate that the next battle along these lines will be closer to home, both literally and figuratively, manifesting just outside of the League's particular Overton Window; and that that battle, though it will involve less money, will be far more bitter than the present one.
Meghan O'Gieblyn on the church's market-based failures
"Despite all the affected teenage rebellion, I continued to call myself a Christian into my early twenties. When I finally stopped, it wasn’t because being a believer made me uncool or outdated or freakish. It was because being a Christian no longer meant anything. It was a label to slap on my Facebook page, next to my music preferences. The gospel became just another product someone was trying to sell me, and a paltry one at that because the church isn’t Viacom: it doesn’t have a Department of Brand Strategy and Planning. Staying relevant in late consumer capitalism requires highly sophisticated resources and the willingness to tailor your values to whatever your audience wants. In trying to compete in this market, the church has forfeited the one advantage it had in the game to attract disillusioned youth: authenticity. When it comes to intransigent values, the profit-driven world has zilch to offer. If Christian leaders weren’t so ashamed of those unvarnished values, they might have something more attractive than anything on today’s bleak moral market. In the meantime, they’ve lost one more kid to the competition."
—Meghan O'Gieblyn, "Sniffing Glue," originally published in Guernica (2011), now collected in Interior States: Essays (2018)
Seven thoughts on life without Twitter
1. It's very nice. Today's day five, and while I still have the impulse to tweet, I can feel it lessening by the hour.
2. It's less cluttered. Twitter is a sinkhole for time, a place to go and get lost, even for 15 minutes. Without that reliable time-suck, I've been doing more life-giving things, or even just plain productive activities—or just letting myself be bored. That, too, is better than the infinite scroll.
3. Twitter, it turns out, is ubiquitous. I encounter disembedded—or rather, embedded—tweets in a variety of forms: through a simple Google search, through shared links, through articles, through newsletters, through news reporting, etc. It's a healthy reminder of how entangled Twitter is with our national discourse, and actually suggests that Twitter plays a more central role in folks' daily intake (however passive) than raw counts of profiles and time on the platform itself would suggest. (Though that role would have more to do with Twitter as a medium and less to do with the culture of the Extremely Online who inhabit it continuously.)
4. The strongest urge I have to resist is "seeing what people are saying about X." Occasionally that might be edifying. But nine times out of ten it would not. Existing outside the loop, or arriving late to some bit of news, commentary, or piece of writing, is a perfectly healthy state of being. Compulsively attempting to avoid it at all costs is decidedly unhealthy.
5. I do miss using Twitter as an RSS feed. I'm sure a few articles have slipped through the cracks this week. Oh well. The dozen or so websites I visit each day on top of the newsletters that deliver recommended reading should mostly do the job. And even here, I remind myself: four out of five articles I see recommended I save to InstaPaper and never find the time to read. There's just too much out there. Might as well lessen the flow of the spigot anyway.
6. I wrote something on Monday that I realized I could not then and there, on my self-assigned rules, tweet out to folks. I think this Saturday I will amend my rules to permit tweeting out a link without being "on" Twitter. Fortunately this blog has a Twitter icon that will automatically tweet the URL out on one's feed without even having to go to Twitter's website oneself. I'll start doing that next week.
7. I'm already a prolix monologuer prone to soapboxes—I'm in the classroom 12 hours a week, for goodness' sake; I've got to have something to say!—and Twitter does not aid in mitigating that tendency. It's cotton candy for People With Thoughts. And this week, I've had thoughts on a bunch of random stuff, not least, e.g., the NBA-China debacle. But as it turns out, those thoughts are (at least at the moment) only tweet-size. They're candy bars of thought. Mostly brief commentary, studded with righteous condemnation and varied attempts at humor. For whose benefit would I offer such empty calories? Not mine. Not others. Not the topic itself. Is the goal to go viral? No. To turn up the volume on the noise? No. To start a conversation? Maybe. But why not have that conversation face to face, or via email, rather than "in public"—saddled with the dopamine-inducing gambling tricks of Silicon Valley? No thanks. I'm just not that important, or interesting. Which is what Twitter et al want to hide from us at all costs. But it's true, and the sooner we all learn to live with that fact, the sooner we'll be at peace.
Sorting nationalism and patriotism with John Lukacs
One of the most curious things in the last few years has been the reinvigoration of the term "nationalism" as a political signifier and "nationalist" as a self-identification. In both scholarly and popular Christian discourse, at least, this is curious especially because, so far as I can tell, "nationalism" became in the last few decades a consensus word for the extreme, blasphemous, and/or heretical corruption of the virtue of patriotism. I have books on my shelf—one for college freshmen, another for graduate students, another for the broader reading public, another for fellow academics—all of which trade on this settled usage.
Now "nationalism" is back, not just as a historical-political force but as a terminological boundary marker. Unfortunately, though, its political associations as well as its function as a football in ideological disputes have contributed to something less than clarity. So that, e.g., to be nationalist is to be for "America first," or in less loaded terms, to be committed to one's fellow citizens and immediate neighbors in lieu of foreign adventurism and nation-building abroad. Or, e.g., to affirm that Christians can be nationalists means little more than that Christians can affirm the modern project of the nation-state, the regional boundaries within which such a state exists, and the groups and goods and cultural endeavors internal to that state. Or, e.g., even just to be happy in one's given national context and to be proud of its accomplishments and civic life.
That's quite the range. It seems to me that "patriotism" is a perfectly fine term for the last example. And the second-to-last example does not make one a nationalist in the prescriptive sense; it merely means that one accepts and/or approves of there being nations (of this sort) at all. It seems to me that "nationalism" should retain the stronger—not to say (yet) the inherently pejorative—terminological definition and concomitant evocations and allusions. Or else we're just going to be loose in our language and keep talking past one another.
There is no better thinker from whom to learn about nationalism defined in strict terms than John Lukacs, the Jewish-Catholic Hungarian-American immigrant and historian who died earlier this year at the age of 95. His 2005 book Democracy and Populism: Fear and Hatred is one of the crucial texts for understanding our moment. A helpful byproduct is lucidity regarding terms, their histories, and their political uses and connotations.
Let me close with a sample set of quotations on the topic of nationalism. I commend the book, along with Lukacs's voluminous output, to any and all who find themselves interested in this (pp. 35-36, 71-73; my bold print, for emphasis):
"Soon after 1870 there appeared something else: a phenomenon whose evidences, here and there, were there earlier, but the breadth and the substance and the character of which began to change. This was modern and populist nationalism. Yet 'nationalisme' and 'nationaliste' became French words only after 1880; in Britain, too, they had appeared not much earlier. The reason for this relatively late gestation of the nationalist word was that 'patriot' and 'patriotism' already existed; and, at least for a while, it seemed that the meaning of the latter was sufficient. When, a century earlier, Samuel Johnson uttered his famous (and perhaps forever valid) dictum that Patriotism Is The Last Refuge Of A Scoundrel, he meant nationalism, even though that word did not yet exist. One of the reasons why there exists no first-rate book about the history of nationalism is that it is not easy to separate it from old-fashioned patriotism. And these two inclinations, patriotism and nationalism, divergent as they may be, still often overlap in people's minds. (When, for example, Americans criticize a 'superpatriot,' what they really mean is an extreme nationalist.) Nonetheless, the very appearance of a new word is always evidence that some people sense the need to distinguish it from the older word's meaning: that a nationalist is someone different from a patriot.
"Patriotism is defensive; nationalism is aggressive. Patriotism is the love of a particular land, with its particular traditions; nationalism is the love of something less tangible, of the myth of a 'people,' justifying many things, a political and ideological substitute for religion. Patriotism is old-fashioned (and, at times and in some places, aristocratic); nationalism is modern and populist. In one sense patriotic and national consciousness may be similar; but in another sense, more and more apparent after 1870, national consciousness began to affect more and more people who, generally, had been immune to that before—as, for example, many people within the multinational empire of Austria-Hungary. It went deeper than class consciousness. Here and there it superseded religious affiliations, too.
"After 1870 nationalism, almost always, turned antiliberal, especially where liberalism was no longer principally nationalist. ...
"The state was one of the creations of the Modern Age. Its powers grew; here and there, sooner or later, it became monstrously bureaucratic. Yet—and few people see this, very much including those who prattle about 'totalitarianism'—the power of the state has been weakening, at the same time the attraction of nationalism has not.
"Hitler knew that: I have, more than once, cited his sentence from Mein Kampf recalling his youth: 'I was a nationalist; but I was not a patriot.' Again it is telling that in Austria 'national' and 'nationalist' meant pro-German, and not only during the multinational Habsburg monarchy and state. Well before the Second World War an Austrian 'nationalist' wanted some kind of union with Germany, at the expense of an independent Austrian state. This was also true in such diverse places as Norway or Hungary or other states during the Second World War: 'national' and 'nationalist' often meant pro-German.
"Nationalism, rather than patriotism; the nation rather than the state; populism rather than liberal democracy, to be sure. We have examples of that even among the extremist groups in the United States, too, with their hatred of 'government'—that is, of the state. We have seen that while true patriotism is defensive, nationalism is aggressive; patriotism is the love of a particular land, with its particular traditions; nationalism is the love of something less tangible, of the myth of a 'people,' justifying everything, a political and ideological substitute for religion; both modern and populist. An aristocratic nationalism is an oxymoron, since at least after the late seventeenth century most European aristocracies were cosmopolitan as well as national. Democratic nationalism is a later phenomenon. For a while there was nothing very wrong with that. It won great revolutions and battles, it produced some fine examples of national cohesion. One hundred and fifty years ago a distinction between nationalism and patriotism would have been labored, it would have not made much sense. Even now nationalism and patriotism often overlap within the minds and hearts of many people. Yet we must be aware of their differences—because of the phenomenon of populism which, unlike old-fashioned patriotism, is inseparable from the myth of a people. Populism is folkish, patriotism is not. One can be a patriot and cosmopolitan (certainly culturally so). But a populist is inevitably a nationalist of sorts. Patriotism is less racist than is populism. A patriot will not exclude a person of another nationality from a community where they have lived side by side and whom he has known for many years; but a populist will always be suspicious of someone who does not seem to belong to his tribe.
"A patriot is not necessarily a conservative; he may even be a liberal—of sorts, though not an abstract one. In the twentieth century a nationalist could hardly be a liberal. The nineteenth century was full of liberal nationalists, some of them inspiring and noble figures. The accepted view is that liberalism faded and declined because of the appearance of socialism, that the liberals who originally had reservations about exaggerated democracy became democrats and then socialists, accepting the progressive ideas of state intervention in the economy, education, welfare. This is true but not true enough. It is nationalism, not socialism, that killed the liberal appeal. The ground slipped out from under the liberals not because they were not sufficiently socialist but because they were (or at least seemed to be) insufficiently nationalist.
"Since it appeals to tribal and racial bonds, nationalism seems to be deeply and atavistically natural and human. Yet the trouble with it is not only that nationalism can be antihumanist and often inhuman but that it also proceeds from one abstract assumption about human nature itself. The love for one's people is natural, but it is also categorical; it is less charitable and less deeply human than the love for one's country, a love that flows from traditions, at least akin to a love of one's family. Nationalism is both self-centered and selfish—because human love is not the love of oneself; it is the love of another. (A convinced nationalist is suspicious not only of people he sees as aliens; he may be even more suspicious of people of his own ilk and ready to denounce them as 'traitors'—that is, people who disagree with his nationalist beliefs.) Patriotism is always more than merely biological—because charitable love is human and not merely 'natural.' Nature has, and shows, no charity."
Now "nationalism" is back, not just as a historical-political force but as a terminological boundary marker. Unfortunately, though, its political associations as well as its function as a football in ideological disputes have contributed to something less than clarity. So that, e.g., to be nationalist is to be for "America first," or in less loaded terms, to be committed to one's fellow citizens and immediate neighbors in lieu of foreign adventurism and nation-building abroad. Or, e.g., to affirm that Christians can be nationalists means little more than that Christians can affirm the modern project of the nation-state, the regional boundaries within which such a state exists, and the groups and goods and cultural endeavors internal to that state. Or, e.g., even just to be happy in one's given national context and to be proud of its accomplishments and civic life.
That's quite the range. It seems to me that "patriotism" is a perfectly fine term for the last example. And the second-to-last example does not make one a nationalist in the prescriptive sense; it merely means that one accepts and/or approves of there being nations (of this sort) at all. It seems to me that "nationalism" should retain the stronger—not to say (yet) the inherently pejorative—terminological definition and concomitant evocations and allusions. Or else we're just going to be loose in our language and keep talking past one another.
There is no better thinker from whom to learn about nationalism defined in strict terms than John Lukacs, the Jewish-Catholic Hungarian-American immigrant and historian who died earlier this year at the age of 95. His 2005 book Democracy and Populism: Fear and Hatred is one of the crucial texts for understanding our moment. A helpful byproduct is lucidity regarding terms, their histories, and their political uses and connotations.
Let me close with a sample set of quotations on the topic of nationalism. I commend the book along with Lukacs's voluminous output to any and all who find themselves interested by this (pp. 35-36, 71-73; my bold print, for emphasis):
"Soon after 1870 there appeared something else: a phenomenon whose evidences, here and there, were there earlier, but the breadth and the substance and the character of which began to change. This was modern and populist nationalism. Yet 'nationalisme' and 'nationaliste' became French words only after 1880; in Britain, too, they had appeared not much earlier. The reason for this relatively late gestation of the nationalist word was that 'patriot' and 'patriotism' already existed; and, at least for a while, it seemed that the meaning of the latter was sufficient. When, a century earlier, Samuel Johnson uttered his famous (and perhaps forever valid) dictum that Patriotism Is The Last Refuge Of A Scoundrel, he meant nationalism, even though that word did not yet exist. One of the reasons why there exists no first-rate book about the history of nationalism is that it is not easy to separate it from old-fashioned patriotism. And these two inclinations, patriotism and nationalism, divergent as they may be, still often overlap in people's minds. (When, for example, Americans criticize a 'superpatriot,' what they really mean is an extreme nationalist.) Nonetheless, the very appearance of a new word is always evidence that some people sense the need to distinguish it from the older word's meaning: that a nationalist is someone different from a patriot.
"Patriotism is defensive; nationalism is aggressive. Patriotism is the love of a particular land, with its particular traditions; nationalism is the love of something less tangible, of the myth of a 'people,' justifying many things, a political and ideological substitute for religion. Patriotism is old-fashioned (and, at times and in some places, aristocratic); nationalism is modern and populist. In one sense patriotic and national consciousness may be similar; but in another sense, more and more apparent after 1870, national consciousness began to affect more and more people who, generally, had been immune to that before—as, for example, many people within the multinational empire of Austria-Hungary. It went deeper than class consciousness. Here and there it superseded religious affiliations, too.
"After 1870 nationalism, almost always, turned antiliberal, especially where liberalism was no longer principally nationalist. ...
"The state was one of the creations of the Modern Age. Its powers grew; here and there, sooner or later, it became monstrously bureaucratic. Yet—and few people see this, very much including those who prattle about 'totalitarianism'—the power of the state has been weakening, at the same time the attraction of nationalism has not.
"Hitler knew that: I have, more than once, cited his sentence from Mein Kampf recalling his youth: 'I was a nationalist; but I was not a patriot.' Again it is telling that in Austria 'national' and 'nationalist' meant pro-German, and not only during the multinational Habsburg monarchy and state. Well before the Second World War an Austrian 'nationalist' wanted some kind of union with Germany, at the expense of an independent Austrian state. This was also true in such diverse places as Norway or Hungary or other states during the Second World War: 'national' and 'nationalist' often meant pro-German.
"Nationalism, rather than patriotism; the nation rather than the state; populism rather than liberal democracy, to be sure. We have examples of that even among the extremist groups in the United States, too, with their hatred of 'government'—that is, of the state. We have seen that while true patriotism is defensive, nationalism is aggressive; patriotism is the love of a particular land, with its particular traditions; nationalism is the love of something less tangible, of the myth of a 'people,' justifying everything, a political and ideological substitute for religion; both modern and populist. An aristocratic nationalism is an oxymoron, since at least after the late seventeenth century most European aristocracies were cosmopolitan as well as national. Democratic nationalism is a later phenomenon. For a while there was nothing very wrong with that. It won great revolutions and battles, it produced some fine examples of national cohesion. One hundred and fifty years ago a distinction between nationalism and patriotism would have been labored, it would have not made much sense. Even now nationalism and patriotism often overlap within the minds and hearts of many people. Yet we must be aware of their differences—because of the phenomenon of populism which, unlike old-fashioned patriotism, is inseparable from the myth of a people. Populism is folkish, patriotism is not. One can be a patriot and cosmopolitan (certainly culturally so). But a populist is inevitably a nationalist of sorts. Patriotism is less racist than is populism. A patriot will not exclude a person of another nationality from a community where they have lived side by side and whom he has known for many years; but a populist will always be suspicious of someone who does not seem to belong to his tribe.
"A patriot is not necessarily a conservative; he may even be a liberal—of sorts, though not an abstract one. In the twentieth century a nationalist could hardly be a liberal. The nineteenth century was full of liberal nationalists, some of them inspiring and noble figures. The accepted view is that liberalism faded and declined because of the appearance of socialism, that the liberals who originally had reservations about exaggerated democracy became democrats and then socialists, accepting the progressive ideas of state intervention in the economy, education, welfare. This is true but not true enough. It is nationalism, not socialism, that killed the liberal appeal. The ground slipped out from under the liberals not because they were not sufficiently socialist but because they were (or at least seemed to be) insufficiently nationalist.
"Since it appeals to tribal and racial bonds, nationalism seems to be deeply and atavistically natural and human. Yet the trouble with it is not only that nationalism can be antihumanist and often inhuman but that it also proceeds from one abstract assumption about human nature itself. The love for one's people is natural, but it is also categorical; it is less charitable and less deeply human than the love for one's country, a love that flows from traditions, at least akin to a love of one's family. Nationalism is both self-centered and selfish—because human love is not the love of oneself; it is the love of another. (A convinced nationalist is suspicious not only of people he sees as aliens; he may be even more suspicious of people of his own ilk and ready to denounce them as 'traitors'—that is, people who disagree with his nationalist beliefs.) Patriotism is always more than merely biological—because charitable love is human and not merely 'natural.' Nature has, and shows, no charity."
A Twitter trial
I'm reconsidering my presence on Twitter. I wrote earlier this year about why, for the time being, I was still on the platform. But as I said (via tweet) yesterday, I'm not long for that website. Let me lay out, briefly, why that is, and the experiment I'm going to undertake in the coming weeks.
1. Even with the comparatively limited time I spend on Twitter, I find during working and non-online hours that it burrows too deeply into my skull. I have a thought or read a great line and think, "I should tweet that out." Or I do tweet something out, and 40 minutes later I think, "I should put down this book and see if anyone's responded." That's crazy and unhealthy. Best to be off entirely.
2. Even having removed Twitter from my phone, even blocking access to it on my laptop for long stretches using Freedom, I still open up my computer too often wanting to "check in," and more often than not I end up getting sucked in for 10 minutes instead of 5, 20 minutes instead of 10, and so on. No más, por favor.
3. I'm persuaded that Twitter is bad for writers. Though it is good for connecting writers to one another and to editors and publications—I've certainly benefited from that—it is a terrible wastrel of a parasite on the writing mind and the writing process. It sucks blood from the writer's intelligence, wit, and courage. It also encourages a kind of anticipatory conformity and fear. I'm tired of seeing that in other writers, and I'm tired of resisting it in myself.
4. The effect on writers is a function of the larger Twitter Brain problem, according to which the Extremely Online mistake Twitter for real life, both in terms of the prevalence of certain views and in terms of their importance. But Twitter is not representative, nor is what the Twitterati consider important actually so. More often than not, it's a tempest in a teapot. And that, too, warps the mind as well as one's affections. No more.
5. Tech critics like Postman have convinced me of the power of form over content. The form is not neutral; Twitter is not a delivery system for otherwise untouched or unshaped material. And in this case, the medium intrinsically and necessarily distorts the message beyond repair. The infinite scroll of the timeline flattens out, de-contextualizes, and thereby trivializes everything that passes through it. All becomes meme. What is important becomes a football for play, and what is unimportant generates rage, mockery, hatred, and division. Twitter is a hothouse for the formation of vice; it detests, slanders, and butchers virtue wherever it is found. Nothing good can come from a means of communication that sets cat memes next to articles investigating child abuse next to sports GIFs next to the brother of a murder victim forgiving his murderer next to a spit-flecked thread arguing over the existence of eternal conscious torment next to a recipe for gluten-free lasagna next to a GoFundMe for a child with severe brain trauma next to a tweet about impeachment by the President. I repeat: Nothing good.
6. Not to mention that people are getting harassed or losing their jobs over their activity on public (and "private") social media. Why take the risk?
7. Add to that how companies like Twitter (and Facebook, my account on which I have deleted; and Google, my account on which I have not—that will be next in these tech-wise reflections) are profiting off our data in ways legal and only semi-legal but certainly immoral, harmful, and deceptive. That is what makes them "free": we are selling ourselves to be online, engaging in activities that are bad for ourselves and bad for others. There's a word for that, y'all.
8. Perhaps there is no healthy future for life online, but I am certain that there is no healthy future for life online that includes Twitter and Facebook. And if I think that, why prop it up? My exit won't make a difference, true. But if these companies are a brothel and we're paying the lease with our time, I'll spend my time somewhere else, thank you very much.
So here's what I'm going to try, in lieu of immediately (rashly?) deleting my account for good:
1. I will remain signed out of Twitter all week except for Saturday.
2. I will sign in to Twitter and "be" on there for a maximum of 30 minutes on Saturday.
3. When signed in, I will not retweet, like, or reply to other tweets.
4. When signed in, I will not tweet "thoughts" or the like. I will, instead, do one of two things: tweet out links to things I have written, or to things I have read that are worth sharing with others.
And the following are matters I'm still deliberating about:
5. Whether or not to delete all past tweets, so as to re-shape my Twitter profile into a kind of static "online hub" for folks to find me, discover who I am, see what I've written, and to follow links there either to my blog, to my Academia.edu page, or to my contact info so as to get in touch directly.
6. Whether or not to communicate via DMs or to make my email address clear enough for folks who'd like to contact me that way.
7. Whether or not, during the week, while signed out, to treat a handful of Twitter profiles as if they are RSS feeds meant to share links of pieces worth reading. I can imagine this being a healthy way of using Twitter against its wishes. But we'll see, since the whole point is to be off Twitter entirely during the week. And I wouldn't want to compulsively check Profile X throughout the day. For now I think I'll limit it to Saturdays, with the exception of one or two profiles (maybe, maybe, maybe).
As you can tell, I'm still in the middle of this. My mind's not quite made up yet. I may end up deleting my account entirely by year's end. Or I may discover some other mode of minimal-to-no usage. We shall see. I'll report back here later, as I always do.
1. Even with the comparatively limited time I spend on Twitter, I find during working and non-online hours that it burrows too deeply into my skull. I have a thought or read a great line and think, "I should tweet that out." Or I do tweet something out, and 40 minutes later I think, "I should put down this book and see if anyone's responded." That's crazy and unhealthy. Best to be off entirely.
2. Even having removed Twitter from my phone, even blocking access to it on my laptop for long stretches using Freedom, I still open up my computer too often wanting to "check in," and more often than not I end up getting sucked in for 10 minutes instead of 5, 20 minutes instead of 10, and so on. No más, por favor.
3. I'm persuaded that Twitter is bad for writers. Though it is good for connecting writers to one another and to editors and publications—I've certainly benefited from that—it is a terrible wastrel of a parasite on the writing mind and the writing process. It sucks blood from the writer's intelligence, wit, and courage. It also encourages a kind of anticipatory conformity and fear. I'm tired of seeing that in other writers, and I'm tired of resisting it in myself.
4. The effect on writers is a function of the larger Twitter Brain problem, according to which the Extremely Online mistake Twitter for real life, both in terms of the prevalence of certain views and in terms of their importance. But Twitter is not representative, nor is what the Twitterati considers important actually so. More often than not, it's a tempest in a teapot. And that, too, warps the mind as well as one's affections. No more.
5. Tech critics like Postman have convinced me of the power of form over content. The form is not neutral; Twitter is not a delivery system for otherwise untouched or unshaped material. And in this case, the medium intrinsically and necessarily distorts the message beyond repair. The infinite scroll of the timeline flattens out, de-contextualizes, and thereby trivializes everything that passes through it. All becomes meme. What is important becomes a football for play, and what is unimportant generates rage, mockery, hatred, and division. Twitter is a hothouse for the formation of vice; it detests, slanders, and butchers virtue wherever it is found. Nothing good can come from a means of communication that sets cat memes next to articles investigating child abuse next to sports GIFs next to the brother of a murder victim forgiving his murderer next to a spit-flecked thread arguing over the existence of eternal conscious torment next to a recipe for gluten-free lasagna next to a GoFundMe for a child with severe brain trauma next to a tweet about impeachment by the President. I repeat: Nothing good.
6. Not to mention that people are getting harassed or losing their jobs over their activity on public (and "private") social media. Why take the risk?
7. Add to that how companies like Twitter (and Facebook, my account on which I have deleted; and Google, my account on which I have not—that will be next in these tech-wise reflections) are profiting off our data in ways legal and only semi-legal but certainly immoral, harmful, and deceptive. That is what makes them "free": we are selling ourselves to be online, engaging in activities that are bad for ourselves and bad for others. There's a word for that, y'all.
8. Perhaps there is no healthy future for life online, but I am certain that there is no healthy future for life online that includes Twitter and Facebook. And if I think that, why prop it up? My exit won't make a difference, true. But if these companies are a brothel and we're paying the lease with our time, I'll spend my time somewhere else, thank you very much.
So here's what I'm going to try, in lieu of immediately (rashly?) deleting my account for good:
1. I will remain signed out of Twitter all week except for Saturday.
2. I will sign in to Twitter and "be" on there for a maximum of 30 minutes on Saturday.
3. When signed in, I will not retweet, like, or reply to other tweets.
4. When signed in, I will not tweet "thoughts" or the like. I will, instead, do only one thing: tweet out links, either to things I have written or to things I have read that are worth sharing with others.
And the following are matters I'm still deliberating about:
5. Whether or not to delete all past tweets, so as to re-shape my Twitter profile into a kind of static "online hub" where folks can find me, discover who I am, see what I've written, and follow links to my blog, my Academia.edu page, or my contact info to get in touch directly.
6. Whether or not to communicate via DMs or to make my email address clear enough for folks who'd like to contact me that way.
7. Whether or not, during the week, while signed out, to treat a handful of Twitter profiles as if they were RSS feeds meant to share links to pieces worth reading. I can imagine this being a healthy way of using Twitter against its wishes. But we'll see, since the whole point is to be off Twitter entirely during the week. And I wouldn't want to compulsively check Profile X throughout the day. For now I think I'll limit it to Saturdays, with the exception of one or two profiles (maybe, maybe, maybe).
As you can tell, I'm still in the middle of this. My mind's not quite made up yet. I may end up deleting my account entirely by year's end. Or I may discover some other mode of minimal-to-no usage. We shall see. I'll report back here later, as I always do.