Resident Theologian
Writing without a platform
Reflections on the possibilities of writing today without creating and maintaining an online "platform" via social media.
Is it possible? That’s what I’m wondering.
I can be a moralistic scold about social media—I’m aware. I’m also aware that, for many writers, social media feels like the one and only way to reach, much less build, an audience; to make a name for oneself in a time when anyone on earth can publish millions of words and just about no one pays for the privilege to read them.
I myself, for a time, benefited from social media. I was on Twitter from 2013 to 2022, with maximal usage coming in the span of years from 2015 to 2020. (Those dates are … interesting.) As it happens, I was ABD and dissertating from fall 2014 through spring 2017, then a newly hired professor starting later that fall. In other words, my Twitter usage peaked when (a) I was spending many hours daily staring at a laptop screen and (b) I was trying to get my life as a junior scholar and public writer off the ground. I got a handful of early writing gigs through Twitter and I made many more personal contacts through it, some of whom I still count as friends, colleagues, or nodes in my professional network.
That’s a long way of saying: I don’t have the luxury of strutting around on the moral high ground, looking down at folks building their platforms through X, IG, Substack, and YouTube. I did the very same thing, albeit to a lesser degree, and it undeniably helped my career, above all my career as a writer.
Hence the question. Is it possible, today, to write, to be a writer, without a platform?
A few thoughts.
First, credentials play a role. I was just telling an editor the other day that the academy is a backdoor into publishing books. My PhD opens doors. That’s a fact. Weirdly enough, since academic books aren’t bestsellers, it’s easier for me to creep my way into popular publishing than it is for someone who only wants to write popular books, since he or she has to make good from the jump. Or before the jump, in fact, through amassing followers and fans via “socials.”
Second, gender plays a role. I’ve written about this before, but the politics of women Christian writers was already complex before the rise of the internet and social media. Now it’s positively Byzantine. If you have a PhD, that’s one thing. If you’re employed in the industry—at a magazine, say, or at a publishing house—that’s another. If you just want to be a writer, though, your options for finding an audience and outlets are limited. If, further, you do not have a clear denominational or political tribe; and if, still further, you are not a culture warrior; and if, still further, you are not willing to post pictures of and share private information about your husband and children (assuming that you have them and that they are photogenic)—the circle just keeps getting narrower and narrower. I know exactly one contemporary female Christian writer who “broke through” without credentials, institutional home, tribal affiliation, or online platform, including Twitter. Otherwise one or more of these factors invariably determine the likelihood not only of getting written work into the world but of a sufficiently large audience finding it.
Third, expectations play a role. Almost no one makes an actual full-time living as a writer. Outside of those rare authors whose names we all know and who sell millions of books, writers either have a day job, or depend on a spouse’s income, or hustle like a maniac, or fundraise/crowdfund, or hit the speaker circuit, or live hand to mouth as a starving artist. Or they did one or more of these things for many years, probably decades, before reaching a threshold to just be able to pay their bills. This is not unjust. It’s just the way it is, and ever was it thus. Anger or resentment at lack of remuneration for the writing life is both a professional nonstarter and the product of a fantasy. A writer’s first rule is to live in the real world, and the real world doesn’t care about writers or what they write. The sooner one learns that, the sooner one can get started with what matters: the writing! Isn’t that what we’d be doing anyway, even if we knew we’d never get paid a dime?
Finally, the industry plays a role. This is the part where we get to complain. It’s common knowledge that trade presses use social media metrics as a gatekeeping mechanism. In plain speech, they ask first-time authors how many followers they have. If the answer is “a few thousand,” then they say “thanks for playing” and politely shut the door. If the answer is “zilch, because I’m not on social media,” then they laugh hysterically before slamming the door. (You can still hear them on the other side, doubled over in tears.) This is, it goes without saying, a new phenomenon, since social media is a new phenomenon. And writers eager to break through have followed these incentives to their logical conclusion: drumming up an online following by every means possible: Facebook, Twitter, Instagram, TikTok, LinkedIn, YouTube, Spotify, Substack, Threads—you name it, they’re there. Posting, re-posting, replying, commenting, replying again, sharing, re-sharing, streaming, recording in the car, recording on the run, recording with the kids, walk and talks, live tweets, and more. Always on, extremely online, creating memes, mocking memes, revising memes: keeping the content coming, letting the spice flow. And eventually, with a pinch of success come subscription deals, and after these come sponsorships, and after these come ads. And before you know it, you’re celebrating the free swag you got in the mail or reading an on-air advertisement for skin cream.
How’d you end up here?
That’s the question you should be asking. That’s the question I’m trying to pose in my repeated missives against social media. It’s why, although I’m anti-anti-Substack, and I’m no longer stridently anti-podcast, I’m still hesitant about the knock-on effects of podcasts’ ubiquity, and on certain days, if I’m honest, I’m anti-anti-anti-Substack.
What I mean is: Substack is an ecosystem, and one of the ways it forms both writers and readers is to make every writer a digital entrepreneur hawking a product. Further, it encourages a relationship between writer and readership on the model of celebrity fandom. (After all, you gotta give the people what they want.)
Put these together and the model becomes that of the influencer. The podcasting live-streaming YouTuber with a newsletter and a Patreon is a single genus—the hustling entrepreneurial influencer with fans in the hundreds, thousands, or more—of which Christians, including writers, become only one more species. They are different from Kim Kardashian and MrBeast only in degree, not in kind.
I’ve written elsewhere that there are wise, thoughtful people doing this in ways I admire, in service to the church. They’re digital lectors taking the gospel to an entire generation of (to be frank; I love them) uncatechized functional illiterates addicted to digital technology, and God be praised they’re finding a hearing. I don’t retract what I wrote. But we are fooling ourselves if we don’t step back and see clearly what is happening, what the nature of the dynamic is. Writers are being co-opted by the affordances of newsletters, social media, and audio/visual recording and streaming in ways that corrode the essence of good writing as well as the vocation of the writer itself.
A writer is not an influencer. To the extent that participating in any of these dynamics is necessary for a writer to get started or to get published, then by definition it can’t be avoided. But if it is necessary, we should see it as a necessary evil. Evil in the sense that it is a threat to the very thing one is seeking to serve, to indwell, celebrate, and dilate: the life of the mind, the reading life, the life of putting words on the page that are apt to reality and true to human nature and beautiful in their form and honoring to God. Exhaustively maintaining an online platform inhibits and enervates the attention, the focus, the literacy, the patience, the quietness, and the prayers that make the Christian writing life not only possible, but good.
In a word: If writing without a platform is impossible, then treat it like Wittgenstein’s ladder. Use it to get where you’re going, then kick it over once it’s done the job.
It costs you nothing not to be on social media
One of my biannual public service announcements regarding social media.
Consider this your friendly reminder that signing up for social media is not mandatory. It costs nothing not to be on it. Life without the whole ensemble—TikTok, Twitter, Instagram, Facebook, and the rest—is utterly free.
In fact, it is simpler not to be on social media, inasmuch as it requires no action on your part, only inaction. If you don’t create an account, no account will be made for you. You aren’t auto-registered, the way you’re assigned a social security number or drafted into the military. You have to apply and be accepted, as with a driver’s license or church membership. Fail to apply and nothing happens. And I’m here to tell you, it is a blessed nothingness.
That’s the trick with social media: nothing comes from nothing. Give it nothing and it can take nothing from you.
Supposedly, being on social media is free. But you know that’s not true. It costs you time—hours of it, in fact, each and every day. It costs you attention. It costs you the anxiety it induces. It costs you the ability to do or think about anything else when nothing in particular is demanding your focus at the moment. It costs you the ability to read for more than a few minutes at a time. It costs you the ability to write without strangers’ replies bouncing like pinballs around your head. It costs you the freedom to be ignorant and therefore free of the latest scandal, controversy, fad, meme, or figure of speech that everyone knew last week but no one will remember next week.
Thankfully, social media has no particular relationship to what is called “privilege.” It does not take money to be off social media any more than it takes money to be on it. It is not the privileged who have the freedom not to be on social media: it is everyone. Because, as I will not scruple to repeat, even at the risk of annoyance or redundancy, it costs nothing not to be on social media. And since it costs nothing for anyone, it therefore costs nothing for everyone. Unfortunately, the costs of being on social media do apply to everyone, privileged or not, which is why everyone would be better off deleting their accounts.
Imagine a world without social media. It isn’t ancient. It isn’t biblical. It’s twenty years ago. Are you old enough to remember life then? It wasn’t a hellscape, not in this respect at least. The hellscape is social media. And social media hasn’t, not yet, become a badge of “digital citizenship” required by law of every man, woman, and child, under penalty of fine or loss of employment. Until then, so long as it’s free, do the right thing and stay off—or, if you’re already on, get off first and then stay off.
Here’s the good news, but tell me if you’ve heard it before: It won’t cost you a thing.
In defense of podcasts
A response to some idiot’s rant from a couple years back.
A few years ago some idiot on the internet wrote that he was quitting podcasts, and you should too. I can’t imagine what he was thinking. Podcasts are a pleasure.
They’re a pleasure to listen to, because they run the gamut. They’re about anything, everything, and nothing. They can be bite-size; they can appetize; they can tease. Or they can last for hours, leaving no nook or cranny unexplored. They can remain at the surface for beginners or they can dive in the deeps for experts.
Podcasts can cover philosophy, theology, history, politics, and ethics; they can also cover basketball, film, TV, music, and novels. They can pay six-figure salaries, and they can be launched tomorrow by a bro in his basement. They embody a democratic media and a free press and free speech all at the same time. What’s not to love?
They’re also a pleasure to go on. Once a month or so I get invited onto a podcast, and every single time it’s a blast. I’ve never joined a bad one! Apparently they’re all fun. We laugh, we talk theology or technology or academia; we learn something in the exchange; the recording goes up a few days later; and it’s there, more or less forever, for others to listen in on at their leisure. Just this week a shook-his-hand-once acquaintance at my (not small) congregation came up to me to tell me he enjoyed a podcast I was on. (Kudos to you, Kaitlyn; he’s a big fan.) Like so many others, this thirtysomething Christian listens to strangers talk theology for the layman while washing the dishes, or driving to work, or taking a walk. And why not?
I’m just glad these things are already so popular, or else that idiot’s rant might have made an unwelcome dent, or even popped the bubble. Life is short. Let’s enjoy its little pleasures while we can. And there can be no doubt that podcasts are among them.
I joined Micro.blog!
Why I joined + thoughts on Micro.blog > Twitter et al.
After years of hearing Alan Jacobs sing the praises of Micro.blog, I created an account this week. Not only that, I’m able to host my micro blog on this website’s domain; so instead of eastbrad.micro.blog, the URL is micro.bradeast.org. In fact, I added “Micro” as an option on the header menu above, sandwiched in between “Media” and “Blog.” In a sense you’re technically “leaving” this site, but it doesn’t feel like it. In this I was also following Alan’s lead. Thank you, ayjay “own your own turf” dot org!
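A small aside for the technically curious: pointing a subdomain at Micro.blog amounts to adding whatever DNS record their settings page specifies and waiting for it to propagate. Below is a minimal, purely illustrative Python sketch of how one might verify that the mapping took. The hostname is mine from above; the expectation of a 200 response is my assumption about a correctly configured site, not anything from Micro.blog’s documentation.

```python
# Minimal sanity check that a custom micro blog subdomain is live.
# Assumes the subdomain has already been pointed at Micro.blog's hosting
# via the DNS record their settings specify; nothing here is Micro.blog's API.
import socket
import urllib.request

host = "micro.bradeast.org"  # the custom subdomain mentioned above

print(socket.gethostbyname(host))  # resolves once the DNS change has propagated

with urllib.request.urlopen(f"https://{host}", timeout=10) as resp:
    print(resp.status)  # 200 suggests the micro blog is being served at the new address
```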
Now: Why did I join micro.blog? Don’t I already have enough to do? Don’t I already write enough? Isn’t my goal to be offline as much as possible? Above all, wasn’t I put on earth to do one thing, namely, to warn people away from the evils of Twitter? Aren’t I the one who gave it up in June 2020, deactivated it for Lent in spring 2022, then (absent-mindedly) deleted it a year later by not renewing the account? And didn’t regret it one bit? Don’t I think Twitter and all its imitators (Threads, Notes, et al) unavoidably addict their users to the infinite scroll while optimizing for all the worst that original sin has to offer?
What, in a word, makes micro blogging (and Micro.blog in particular) different?
Here’s my answer, in three parts: why I wanted to do this; how I’m going to use it; and what Micro.blog lacks that makes it distinct from the alternatives.
First, I miss what Twitter offered me: an accessible public repository of links, images, brief commentary, and minor thoughts—thoughts I had nowhere else to put except Twitter, and thoughts that invariably get lost in the daily shuffle. I tend to call this main blog (the one you’re reading right now) a space for "mezzo blogging": something between Twitter/Tumblr (i.e., micro writing and sharing) and essays, articles, and books (i.e., proper macro writing). I suffer from graphomania, and between my physical notebook and texting with friends, I still have words to get out of my system; nonsense aside, that was the reason I stayed on Twitter as long as I did. (Also the connections, friends, and networking, but the downsides of gaining those things were and are just too great, on any platform.)
Second, I am going to use my micro blog in a certain way. I’m not going to follow anyone. I’m not going to look at my timeline. I’m not going to let it even show me follows, mentions, or replies. It’s not going to be a place for interaction with others. I’m not going to dwell or hang out on it. In a sense I won’t even be “on” it. I have and will have no way of knowing if even a single soul on earth reads, clicks, or finds my writing there. It exists more or less for one person: me. Its peripheral audience is anyone who cares to click from here to there or check in on me there from time to time.
What am I going to be doing, then? Scribbling thoughts that run between one and four sentences long; sharing links to what I’m reading online; sharing books and images of what I’m reading IRL; in short, putting in a single place the grab bag of "minor" writing that pulls me daily in a hundred directions: email, messages, WhatsApp, even Slack (once upon a time). E.g., right now I’m enthralled by the NBA playoffs, but not only does no one who reads this blog care about that, my thoughts on it are also brief and fleeting. But I have them, and I want to remember what they were! So now I put them there, on the micro blog.
I don’t, for what it’s worth, have any kind of organizational system for note-taking, journaling, or any such thing. I do keep a physical journal, but it’s mostly a place for first-draft brainstorming; it’s not much of an archive. I don’t use Drafts or Tot or Notes or Scrivener or even an iPad or tablet of any kind. Nothing is housed on the cloud; nothing is interconnected, much less interoperable. I’ve always toyed with trying Evernote—I know people who love it—but it’s just never appealed to me, and I don’t think I’m the type who would benefit from it or use it well. My mental habits and ideas and writing instincts are too diffuse. At the same time, I love the idea of a one-stop shop for little thoughts, for minor scribbles, in brief, for micro blogging. That’s how I used Twitter. I ultimately just got fed up with that broken platform’s pathologies.
So, third, what makes Micro.blog different? In a sense I’ve already answered that question. It’s not built to do what Twitter, Threads, and Substack Notes are meant to do. There’s no provocation or stimulation. There’s no hellish algorithm. It doesn’t scale. It’s not about followers or viral hits. It’s self-selecting, primarily because you have to pay for it and secondarily because it’s not a way to build an audience of thousands (much less millions). It’s for people like me who want a digital room of their own, so to speak, without the assault on my attention, or the virus of virality, or the infinite scroll, or the stats (follows, likes, RTs) to stroke or shrink my ego, or the empty promise that the more I post the more books I’ll eventually sell. No publisher or agent is going to tout my Micro.blog to justify an advance. It’s just … there. For me and, at most, a few dozen other folks.
And anyway, I’m giving it a 30-day free trial. No commitments made just yet. I already like it enough that I expect to fork over $5/month for the privilege. But we’ll see.
Either way, this is all one long way of saying: See, I’m no Luddite. I use Squarespace and Instapaper and Firefox and Spotify and Libby and Letterboxd and now Micro.blog. I might even get to ten whole quality platforms one day.
Clearly, I don’t hate the internet. I’m just picky.
A.I., TikTok, and saying “I would prefer not to”
Finding wisdom in Bartleby for a tech-addled age.
Two technology pieces from last week have stuck with me.
Both were at The New York Times. The first was titled “How TikTok Changed America,” a sort of image/video essay about the platform’s popularity and influence in the U.S. The second was a podcast with Ezra Klein called “How Should I Be Using A.I. Right Now?,” an interview with Ethan Mollick.
To be clear, I skimmed the first and did not listen to the second; I only read Klein’s framing description for the pod (my emphases):
There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.
So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide…
This conversation covers the basics, including which chatbot to choose and techniques for how to get the most useful results. But the conversation goes far beyond that, too — to some of the strange, delightful and slightly unnerving ways that A.I. responds to us, and how you’ll get more out of any chatbot if you think of it as a relationship rather than a tool.
These two pieces brought to mind two things I’ve written recently about social media and digital technology more broadly. The first comes from my New Atlantis essay, published two years ago, reviewing Andy Crouch’s book The Life We’re Looking For (my emphases again):
What we need is a recommitment to public argument about purpose, both ours and that of our tools. What we need, further, is a recoupling of our beliefs about the one to our beliefs about the other. What we need, finally, is the resolve to make hard decisions about our technologies. If an invention does not serve the human good, then we should neither sell it nor use it, and we should make a public case against it. If we can’t do that — if we lack the will or fortitude to say, with Bartleby, We would prefer not to — then it is clear that we are no longer makers or users. We are being used and remade.
The other comes late in my Commonweal review, published last summer, of Tara Isabella Burton’s book Self Made:
It may feel to some of us that “everyone,” for example, is on Instagram. Only about 15 percent of the world is on the platform, however. That’s a lot of people. Yet the truth is that most of the world is not on it. The same goes for other social media. Influencer culture may be ubiquitous in the sense that most people between the ages of fifteen and thirty-five are affected by it in some way. But that’s a far cry from digitally mediated self-creation being a universal mandate.
Even for those of us on these apps, moreover, it’s possible to opt out. You don’t have to sell yourself on the internet. You really don’t. I would have liked Burton to show us why the dismal story she tells isn’t deterministic—why, for example, not every young woman is fated to sell her image on OnlyFans sooner or later.
The two relevant phrases from these essay reviews: You really don’t and Bartleby’s I would prefer not to. They are quite simply all you need in your toolkit for responding to new technologies like TikTok and generative A.I.
For example, the TikTok piece states that half of Americans are on the app. That’s a lot! Plenty to justify the NYT treatment. I don’t deny it. But do you know what that claim also means? That half of us aren’t on it. Fifty percent. One out of every two souls. Which is the more relevant statistic, then? Can I get a follow-up NYT essay about the half of us who not only aren’t tempted to download TikTok but actively reject it, can’t stand it, renounce it and all its pomp?
The piece goes further: "Even if you’ve never opened the app, you’ve lived in a culture that exists downstream of what happens there." Again, I don’t deny it or doubt it. It’s true, to my chagrin. And yet, the power of such a claim is not quite what it seems at first glance.
The downstream influence of TikTok works primarily if and as one is also or instead an active user of other social media platforms (as well as, perhaps, cable news programs focused on politics and entertainment). I’m told you can’t get on YouTube or Instagram or Twitter or Facebook without encountering "imported" content from TikTok, or "local" content that’s just Meta or Google cribbing from TikTok. But what if, like me, you don’t have an account on any of these platforms? What if you abstain completely from all social media? And what if you don’t watch Fox News or MSNBC or CNN or entertainment shows or reality TV?
I was prepared, reading the NYT piece, to discover all the ways TikTok had invaded my life without my even realizing it. It turns out, though, that I don’t get my news from TikTok, or my movie recommendations, or my cooking recipes, or my fashion advice(!), or my politics, or my Swiftie hits, or my mental health self-diagnoses, or my water bottle, or my nightly entertainment before bed—or anything else. Nothing. Nada. Apparently I have been immune to the fifteen “hottest trends” on TikTok, the way it invaded “all of our lives.”
How? Not because I made it a daily goal to avoid TikTok. Not because I’m a digital ascetic living on a compound free of wireless internet, smart phones, streaming TV, and (most important) Gen Z kiddos. No, it’s because, and more or less only because, I’m not on social media. Turns out it isn’t hard to get away from this stuff. You just don’t download it. You just don’t create an account. If you don’t, you can live as if it doesn’t exist, because for all intents and purposes, for your actual life, it doesn’t.
As I said: You really don’t have to, because you can just say I would prefer not to. All told, that’s enough. It’s adequate all on its own. No one is forcing you to do anything.
Which brings us to Ezra Klein.
Sometimes Klein seems like he genuinely "gets" the scale of the threat, the nature of the digital monstrosity, the power of these devices to shape and rewire our brains and habits and hearts. Yet other times he sounds like just another tech bro who wants to maximize his digital efficiencies, to get ahead of the masses, to get a silicon leg up on the competition, to be as early an adopter as possible. I honestly don’t get it. Does he really believe the hype? Or does he not? At least someone like Tyler Cowen picks a lane. Come join the alarmist train, Ezra! There’s plenty of room! All aboard!
Seriously though, I’m trying to understand the mindset of a person who asks aloud with complete sincerity, “How should I incorporate A.I. into my life ‘better’?” It’s the “should” that gets me. Somehow this is simultaneously a social obligation and a moral duty. Whence the ought? Can someone draw a line for me from this particular “is” to Klein’s technological ought?
In any case, the question presumes at least two things. First, that prior to A.I. my life was somehow lacking. Second, that just because A.I. exists, I need to “find a place for it” in my daily habits.
But why? Why would we ever grant either of these premises?
My life wasn’t lacking anything before ChatGPT made its big splash. I wasn’t feeling an absence that Sam Altman could step in to fill. There is no Google-shaped hole in my heart. As a matter of fact, my life is already full enough: both in the happy sense that I have a fulfilling life and in the stressful sense that I have too much going on in my life. As John Mark Comer has rightly pointed out, the only way to have more of the former is through having less of the latter. Have more by having less; increase happiness by jettisoning junk, filler, hurry, hoarding, much-ness.
Am I really supposed to believe that A.I.—not to mention an A.I. duplicate of myself in order (hold gag reflex) to know myself more deeply (I said hold it!) in ways I couldn’t before—is not just one more damn thing to add to my already too-full life? That it holds the secrets of self-knowledge, maximal efficiency, work flow, work–life balance, relational intimacy, personal creativity, and labor productivity? Like, I’m supposed to type these words one after another and not snort laugh with derision but instead take them seriously, very seriously, pondering how my life was falling short until literally moments ago, when A.I. entered my life?
It goes without saying that, just because the technology exists, I don’t "need" to adopt or incorporate it into my life. There is no technological imperative, and if there were it wouldn’t be categorical. The mere existence of technology is neither self-justifying nor self-recommending. And must I add that the endless hours of time, energy, and attention devoted to learning this latest invention, besides being stolen from other, infinitely more meaningful pursuits, will undeniably be superseded and almost immediately made redundant by the fact that this invention is nowhere near completion? Even if A.I. were going to improve daily individual human flourishing by a hundredfold, the best thing to do, right now, would be absolutely nothing. Give it another year or ten or fifty and they’ll iron out the kinks, I’m sure of it.
What this way of approaching A.I. has brought home to me is the unalterably religious dimension of technological innovation, and this in two respects. On one side, tech adepts and true believers approach innovation not only as one more glorious step in the march of progress but also as a kind of transcendent or spiritual moment in human growth. Hence the imperative. How should I incorporate this newfangled thing into my already tech-addled life? becomes not just a meaningful question but an urgent, obvious, and existential one.
On the other side, those of us who are members of actual religious traditions approach new technology with, at a minimum, an essentially skeptical eye. More to the point, we do not approach it expecting it to do anything for our actual well-being, in the sense of deep happiness or lasting satisfaction or final fulfillment or ultimate salvation. Technology can and does contribute to human flourishing but only in its earthly, temporal, or penultimate aspects. It has nothing to do with, cannot touch, never can and never will intersect with eternity, with the soul, with the Source and End of all things. Technology is not, in short, a means of communion with God. And for those of us (not all religious people, but many) who believe that God has himself already reached out to us, extending the promise and perhaps a partial taste of final beatitude, then it would never occur to us—it would present as laughably naive, foolish, silly, self-deceived, idolatrous—to suppose that some brand new man-made tool might fix what ails us; might right our wrongs; might make us happy, once and for all.
It’s this that’s at issue in the technological "ought": the "religion of technology." It’s why I can’t make heads or tails of stories or interviews like the ones I cited above. We belong to different religions. It may be that there are critical questions one can ask about mine. But at least I admit to belonging to one. And, if I’m being honest, mine has a defensible morality and metaphysics. If I weren’t a Christian, I’d rather be just about anything than a true believing techno-optimist. Of all religions on offer today, it is surely the most self-evidently false.
Screentopia
A rant about the concern trolls who think the rest of us are too alarmist about children, screens, social media, and smartphones.
I’m grateful to Alan for writing this post so I didn’t have to. A few additional thoughts, though. (And by “a few thoughts” I mean rant imminent.)
Let me begin by giving a term to describe, not just smartphones or social media, but the entire ecosystem of the internet, ubiquitous screens, smartphones, and social media. We could call it Technopoly or the Matrix or just Digital. I’ll call it Screentopia. A place-that-is-no-place in which just about everything in our lives—friendship, education, finance, sex, news, entertainment, work, communication, worship—is mediated by omnipresent interlinked personal and public devices as well as screens of every size and type, through which we access the “all” of the aforementioned aspects of our common life.
Screentopia is an ecosystem, a habitat, an environment; it’s not one thing, and it didn’t arrive fully formed at a single point in time. It achieved a kind of comprehensive reach and maturity sometime in the last dozen years.
Like Alan, I’m utterly mystified by people who aren’t worried about this new social reality. Or who need the rest of us to calm down. Or who think the kids are all right. Or who think the kids aren’t all right, but nevertheless insist that the kids’ dis-ease has little to nothing to do with being born and raised in Screentopia. Or who must needs concern-troll those of us who are alarmed for being too alarmed; for ascribing monocausal agency to screens and smartphones when what we’re dealing with is complex, multicausal, inscrutable, and therefore impossible to fix. (The speed with which the writer adverts to “can’t roll back the clock” or “the toothpaste ain’t going back in the tube” is inversely proportional to how seriously you have to take him.)
After all, our concern troll asks insouciantly, aren’t we—shouldn’t we be—worried about other things, too? About low birth rates? And low marriage rates? And kids not playing outside? And kids presided over by low-flying helicopter parents? And kids not reading? And kids not dating or driving or experimenting with risky behaviors? And kids so sunk in lethargy that they can’t be bothered to do anything for themselves?
Well—yes! We should be worried about all that; we are worried about it. These aren’t independent phenomena about which we must parcel out percentages of our worry. It’s all interrelated! Nor is anyone—not one person—claiming a totality of causal explanatory power for the invention of the iPhone followed immediately by mass immiseration. Nor still is anyone denying that parents and teachers and schools and churches are the problem here. It’s not a “gotcha” to counter that kids don’t have an issue with phones, parents do. Yes! Duh! Exactly! We all do! Bonnie Kristian is absolutely right: parents want their elementary and middle school–aged kids to have smartphones; it’s them you have to convince, not the kids. We are the problem. We have to change. That’s literally what Haidt et al are saying. No one’s “blaming the kids.” We’re blaming what should have been the adults in the room—whether the board room, the PTA meeting, the faculty lounge, or the household. Having made a mistake in imposing this dystopia of screens on an unsuspecting generation, we would like, kindly and thank you please, to fix the problem we ourselves made (or, at least, woke up to, some of us, having not been given a vote at the time).
Here’s what I want to ask the tech concern trolls.
How many hours per day of private scrolling on a small glowing rectangle would concern you? How many hours per day indoors? How many hours per day on social media? How many hours per day on video games? How many pills to get to sleep? How many hours per night not sleeping? How many books per year not read? How many friends not made, how many driver’s licenses not acquired, how many dates and hangouts not held in person would finally raise a red flag?
Christopher Hitchens once wrote, “The North Korean state was born at about the same time that Nineteen Eighty-Four was published, and one could almost believe that the holy father of the state, Kim Il Sung, was given a copy of the novel and asked if he could make it work in practice.” A friend of mine says the same about our society and Brave New World. I expect people have read their Orwell. Have they read their Huxley, too? (And their Bradbury? And Walter M. Miller Jr.? And…?) Drugs and mindless entertainment to numb the emotions, babies engineered and produced in factories, sex and procreation absolutely severed, male and female locked in perpetual sedated combat, books either censored or an anachronistic bore, screens on every wall of one’s home featuring a kind of continuous interactive reality TV (as if Real Housewives, TikTok, and Zoom were combined into a single VR platform)—it’s all there. Is that the society we want? On purpose? It seems we’re bound for it like our lives depended on it. Indeed, we’re partway there already. “Alarmists” and “Luddites” are merely the ones who see the cliff’s edge ahead and are frantically pointing at it, trying to catch everyone’s attention.
But apparently everyone else is having too much fun. Who invited these killjoys along anyway?
All together now: social media is bad for reading
A brief screed about what we all know to be true: social media is bad for reading.
We don’t have to mince words. We don’t have to pretend. We don’t have to qualify our claims. We don’t have to worry about insulting the youths. We don’t have to keep mum until the latest data comes in.
Social media, in all its forms, is bad for reading.
It’s bad for reading habits, meaning when you’re on social media you’re not reading a book. It’s bad for reading attention, meaning it shrinks your ability to focus for sustained periods of time while reading. It’s bad for reading desires, meaning it makes the idea of sitting down with a book, away from screens and images and videos and sounds, seem dreadfully boring. It’s bad for reading style, meaning what literacy you retain while living on social media is trained to like all the wrong things and to seek more of the same. It’s bad for reading ends, meaning you’re less likely to read for pleasure and more likely to read for strictly utilitarian reasons (including, for example, promotional deals and influencer prizes and so on). It’s bad for reading reinforcement, meaning like begets like, and inserting social media into the feedback loop of reading means ever more of the former and ever less of the latter. It’s bad for reading learning, meaning your inability to focus on dense, lengthy reading is an educational handicap: you quite literally will know less as a result. It’s bad for reading horizons, meaning the scope of what you do read, if you read at all, will not stretch across continents, cultures, and centuries but will be limited to the here and now, (at most) the latest faux highbrow novel or self-help bilge promoted by the newest hip influencers; social media–inflected “reading” is definitionally myopic: anti-“diverse” on principle. Finally, social media is bad for reading imitation, meaning it is bad for writing, because reading good writing is the only sure path to learning to write well oneself. Every single writing tic learned from social media is bad, and you can spot all of them a mile away.
None of this is new. None of it is groundbreaking. None of it is rocket science. We all know it. Educators do. Academics do. Parents do. As do members of Gen Z. My students don’t defend themselves to me; they don’t stick up for digital nativity and the wisdom and character produced by TikTok or Instagram over reading books. I’ve had students who tell me, approaching graduation, that they have never read a single book for pleasure in their lives. Others have confessed that they found a way to avoid reading a book cover to cover entirely, even as they got B’s in high school and college. They’re not proud of this. Neither are they embarrassed. It just is what it is.
Those of us who see this and are concerned by it do not have to apologize for it. We don’t have to worry about being, or being accused of being, Luddites. We’re not making this up. We’re not shaking our canes at the kids on the lawn. We’re not ageist or classist or generation-ist, or guilty of any other nonsensical application of actual prejudices.
The problem is real. It’s not the only one, but it’s pressing. Social media is bad in general, it’s certainly bad for young people, and it’s unquestionably, demonstrably, and devastatingly bad for reading.
The question is not whether it’s a problem. The question is what to do about it.
A tech-attitude taxonomy
A taxonomy of eleven different dispositions to technological development, especially in a digital age.
I’ve been reading Albert Borgmann lately, and in one essay he describes a set of thinkers he calls “optimistic pessimists” about technology. It got me thinking about how to delineate different positions and postures on technology, particularly digital technology, over the last century. I came up with eleven terms, the sixth one serving as a middle, “neutral” point with five on each side—growing in intensity as they get further from the center. Here they are:
1. Hucksters: i.e., people who stand to profit from new technologies, or who work to spin and market them regardless of their detrimental effects on human flourishing.
2. Apostles: i.e., true believers who announce the gospel of new technology to the unconvinced; they win converts by their true faith and honest enthusiasm; they sincerely believe that any and all developments in technology are good and to be welcomed as benefiting the human race in the short-, medium-, and long-term.
3. Boosters: i.e., writers and journalists in media and academia who toe the line of the hucksters and apostles; they accuse critics and dissenters from the true faith of heresy or, worse, of being on the wrong side of history; they exist as cogs in the tech-evangelistic machine, though it’s never clear why they are so uncritical, since they are rarely either apostles or hucksters themselves.
4. Optimists: i.e., ordinary people who understand and are sympathetic with thoughtful criticisms of new technologies but who, at the end of the day, passively trust in progress, in history’s forward march, and in the power of human can-do spirit to make things turn out right, including the challenges of technology; they adopt new technology as soon as it’s popular or affordable.
5. Optimistic pessimists: i.e., trenchant and insightful critics of technopoly, or the culture wrought by technology, who nonetheless argue for and have confidence in the possibility of righting the ship (even, the ship righting itself); another term for this group is tech reformers.
6. Naive neutrals: i.e., people who have never given a second thought to the challenges or perils of technology, are fundamentally incurious about them, and have no "position" to speak of regarding the topic; in practice they function like optimists or boosters, but lack the presence of considered beliefs on the subject.
7. Pessimistic optimists: i.e., inevitabilists—this or that new technology may on net be worse for humanity, but there’s simply nothing to do about it; pushing back or writing criticism is for this group akin to a single individual blowing on a forest fire; technological change on this view is materialist and/or deterministic; at most, you try to see it for what it is and manage your own individual life as best you can; at the same time, there’s no reason to be Chicken Little, since this has always been humanity’s lot, and we always find a way to adapt and adjust.
8. Pessimists: i.e., deep skeptics who see technological development in broadly negative terms, granting that not all of it is always bad in all its effects (e.g., medicine’s improvement of health, extension of life spans, and protection from disease); these folks are the last to adopt a new technology, usually with resentment or exasperation; they hate hucksters and boosters; they are not determinists—they think human society really can make social and political choices about technology ordered toward the common good—but know that determinism almost always wins in practice; their pessimism pushes them to see the downsides or tradeoffs even in the "best" technological developments.
9. Doomsdayers: i.e., it’s all bad, all the time, and it’s clear as day to anyone with eyes to see and ears to hear; the internet is a bona fide harbinger of the apocalypse and A.I. is no-joke leading us to Skynet and the Matrix; the answer to new technology is always, therefore, a leonine Barthian Nein!; and any and all dissents and evidence to the contrary are only so much captivity to the Zeitgeist, heads stuck in the sand, paid-for shilling, or delusional "back to the land" Heideggerian nostalgia that is impossible to live out with integrity in a digital age.
10. Opt-outers: i.e., agrarians and urban monastics in the spirit of Wendell Berry, Ivan Illich, and others who pursue life "off the grid" or at least off the internet; they may or may not be politically active, but more than anything they put their money where their mouth is: no TV or wireless internet in the home, no smartphone, no social media, and a life centered on hearth, earth, family, children, the local neighborhood, a farm or community garden, so on and so forth; they may be as critical as pessimists and doomsdayers, but they want to walk the walk, not just talk the talk, and most of all they don’t want the technopoly to dictate whether or not, in this, their one life, it can be a good one.
11. Resisters: i.e., leaders and foot soldiers in the Butlerian Jihad, whether this be only in spirit or in actual social, material, and political terms (IRL, as they say).
Cards on the table: I’m dispositionally somewhere between #7 and #8, with occasional emotional outbursts of #9, but aspirationally and even once in a while actually a #10.
Ten rules (for myself) that mitigate the timesuck that is the inbox.
Email is the scourge of just about everyone’s time and attention, at least those of us in the “laptop class” and the broader white-collar workforce. Some people’s jobs just are their inbox. But for academics, the inbox is the enemy. It’s a timesuck. It exerts a kind of gravitational pull on one’s mind and attention. It threatens to conquer every last second you might spend doing something else.
Here are some rules and practices I’ve instituted to manage my inbox.
No email on my phone. By this I mean not only that I lack the app but also that I can’t log in on a browser. I literally do not, because I cannot, access my inbox (personal or professional) on my phone.
No email on the weekends. This rule’s looser, but I don’t reply or feel accountable to my inbox on the weekends. I mostly leave it be.
No email until lunchtime. Mornings in my house are harried swirls of chaos getting kids to school; I don’t check my inbox before leaving. When I get to my office, I pour some coffee, pray, then sit down in my recliner and read for 2-4 hours. My laptop remains closed and in my bag, barring an urgent matter or occasional need. I usually open it around 11:30am.
Inbox zero twice per day. While I munch on a salad, I take 15-30 minutes to whittle my emails down to zero. Two-thirds of this is deletion. For what remains, it’s a smattering of replies, archives, calendar notes, and snoozes. I do the same thing late afternoon, before I leave the office. Occasionally I’ll do it in the evening, at home, but I don’t plan on it.
Little to no email outside of normal working hours. I’m flexible on this one, but the rule is that I don’t make myself accountable to my inbox outside of the 8:00am–5:00pm range. If I receive an email then, it can wait till the next day. And if the inbox piles up outside of office hours, so be it.
As few newsletters as possible. I’ve slowly been unsubscribing from newsletters I read and transferring their feeds to my RSS reader (I use Feedly). Some still come by email—I’m not sure how to pay for one without getting it via email!—but I either click on them immediately or skim and archive. I don’t want them just sitting there, cluttering up the place.
A quick brief reply is better than a delayed reply or non-reply. I still remember an email I sent to a distinguished academic in 2010. I was going to visit his campus to see if it might be a good fit for my doctoral studies. My email must have been a thousand words at least. His reply was a single incomplete sentence. Yes, he would meet with me. But it was so curt I thought he was mad. He wasn’t! He just didn’t have time to match my logorrhea. He had better things to do. And he was right. So when colleagues, students, or readers email me, I’ve trained myself to give them a speedy 1-2 sentence answer, even if it’s not as detailed as they’d like. Sometimes it’s just, “Let’s talk more in person,” or “I’ll have to think about that.” But if the question is concrete, I can typically answer in a single sentence. Brevity is preferable to silence!
Every personal email gets a reply. Except by mistake, I never ghost genuine emails from living human beings (unless, I suppose, there are living human beings behind A.I.-generated Ed Tech mass-mailers). Everyone who writes me by email receives a reply, full stop. That includes random readers of my work. Obviously I don’t have a sizable enough readership to make this infeasible; I assume Ken Follett can’t personally respond to every bit of fan mail he gets. But as long as it’s not unduly burdensome, I’m going to keep up this habit. And doubly so for emails from people I know, whether colleagues or friends or family. No email goes unanswered!
Every (initial) personal email gets a reply within 24 hours. This one might sound tough, but it’s really not. I’m a fast typist and I’m committed to being brief whenever possible. So once the spam and nonsense are out of the way, I reply until the inbox is clear. Then I do it again later that day (on workdays, that is). That way I stay up on it, and it doesn’t pile up to unmanageable levels. Sometimes, granted, I lack the time to do so, or I’m traveling, or an email is going to take up too much time for the length and thought required. So I email within the 24-hour period to say that I’ll be emailing tomorrow or later that week or once my midterm grading is done or after the holiday or after the semester wraps up. That way I’m not leaving a correspondent hanging, even if I can’t get to them in a timely manner.
Strategic snoozing. The “snooze” button is your friend. It’s a great invention. Right now I have eight emails snoozed (and nothing in my inbox). Only two are emails I need to reply to. One is an ongoing correspondence about a paper I’ve given feedback on; the other will reappear in a couple weeks when I’m supposed to remind a colleague about something. So no one is waiting on me, exactly. Even when I’ve snoozed threads trying to find time to meet up for a meal or drinks, the person isn’t waiting on pins and needles; we both know it’s a busy time and we’ll figure something out in a month or two. Anything pressing has been dealt with; it’s the emails with longer deadline horizons, such as a recommendation letter request, or emails that function as self-reminders that call for strategic snoozing.
All this, I should say, is operative primarily during the academic calendar. I’m looser in the summer, for obvious reasons.
I should add as well that, unlike other techniques for tech management, I don’t feel constrained or stressed by these rules. They aren’t hard to keep. They’re shockingly easy, as a matter of fact. Not everyone is in similar circumstances, but something like these rules would work, I think, for most academics. Of all "laptop workers," academics are perhaps the most notorious for simply never returning emails, even important ones. It’s not that hard, folks! And rules like these actually keep you from being on email more. My average daily email time is 30-60 minutes. That’s doable. But I could stretch that out to just about anytime I’m in the office, if I kept my laptop open (or, when I’m writing, my email open). Were I to do that, then I’d see a new email demanding a reply every 10-20 minutes, in which case I’d never get around to anything else. It’s in service of getting around to other things—usually more important and always more interesting—that the rules are worth implementing and maintaining.
Quit social porn
Samuel James is right: the social internet is a form of pornography. That means Christians, at least, should get off—now.
In the introduction to his new book, Digital Liturgies: Rediscovering Christian Wisdom in an Online Age, Samuel James makes a startling claim: “The internet is a lot like pornography.” He makes sure the reader has read him right: “No, that’s not a typo. I did not mean to say that the internet contains a lot of pornography. I mean to say that the internet itself—i.e., its very nature—is like pornography. There’s something about it that is pornographic in its essence.”
Bingo. This is exactly right. But let’s take it one step further.
A few pages earlier, James distinguishes the internet in general from “the social internet.” That’s a broader term for what we usually refer to as “social media.” Think not only Facebook, Twitter, Instagram, TikTok, et al, but also YouTube, Slack, Pinterest, Snapchat, Tumblr, perhaps even LinkedIn or Reddit and similar sites. In effect, any online platform that (a) “connects” strangers through (b) public or semi-public personal profiles via (c) proprietary algorithms using (d) slot-machine reward mechanisms that reliably alter one’s (e) habits of attention and (f) fame, status, wealth, influence, or “brand.” Almost always such a platform also entails (g) the curation, upkeep, reiteration, and perpetual transformation of one’s visual image.
This is the social internet. James is right to compare it to pornography. But he doesn’t go far enough. It isn’t like pornography. It’s a mode of pornography.
The social internet is social porn.
By the end of the introduction, James pulls his punch. He doesn’t want his readers off the internet. Okay, fine. I’m on the internet too, obviously—though every second I’m not on it is a second of victory I’ve snatched from defeat. But yes, it’s hard to avoid the internet in 2023. We’ll let that stand for now.
There is no good reason, however, to be on the social internet. It’s porn, after all, as we just established. Christians, at least, have no excuse for using porn. So if James and I are right that the social internet isn’t just akin to pornography but is a species of it, then he and I and every other Christian we know who cares about these things should get off the social internet right now.
That means, as we saw above, any app, program, or platform that meets the definition I laid out. It means, at a minimum, deactivating and then deleting one’s accounts with Facebook, Twitter, Instagram, and TikTok—immediately. It then means thinking long and hard about whether one should be on any para-social platforms like YouTube or Pinterest or Slack. Some people use YouTube rarely and passively, to watch the occasional movie trailer or live band performance, say, or how-to videos to help fix things around the house. Granted, we shouldn’t be too worried about that. But what about people who use it the way my students use it—as an app on their phone with an auto-populated feed they scroll just like IG or TT? Or what about active users and influencers with their own channels?
Get off! That’s the answer. It’s porn, remember? And porn is bad.
I confess I have grown tired of all the excuses for staying on the social internet. Let me put that differently: I know plenty of people who do not share my judgment that the social internet is bad, much less a type of porn. In that case, we lack a shared premise. But many people accept the premise; they might even go so far as to affirm with me that the social internet is indeed a kind of porn: just as addictive, just as powerful, just as malformative, just as spiritually depleting, just as attentionally sapping. (Such claims are empirical, by the way; I don’t consider them arguable. But that’s for another day.) And yet most of the people I have in mind, who are some of the most well-read and up-to-date on the dangers and damages of digital media, continue not only to maintain their social internet accounts but use them actively and daily. Why?
I’m at a point where I think there simply are no more good excuses. Alan Jacobs remarked to me a few years back, when I was wavering on my Twitter usage, that the hellsite in question was the new Playboy. "I subscribe for the articles," you say. I’m sure you do. That might play with folks unconcerned by the surrounding pictures. For Christians, though, the jig is up. You’re wading through waist-high toxic sludge for the occasional possible potential good. Quit it. Quit the social internet. Be done with it. For good.
Unlike Lot’s wife, you won’t look back. The flight from the Sodom of the social internet isn’t littered with pillars of salt. The path is free and clear, because everyone who leaves is so happy, so grateful, the only question they ask themselves is what took them so long to get out.
A decision tree for dealing with digital tech
Is the digital status quo good? If not, our actions (both personal and institutional) should show it.
Start with this question:
Do you believe that our, and especially young people’s, relationship to digital technology (=smartphones, screens, the internet, streaming, social media) is healthy, functional, and therefore good as is? Or unhealthy, dysfunctional, and therefore in need of immediate and drastic help?
If your answer is “healthy, functional, and good as is,” then worry yourself no more; the status quo is A-OK. If you answered otherwise, read on.
Now ask yourself this question:
Do the practices, policies, norms, and official statements of my institution—whether a family, a business, a university, or a church—(a) contribute to the technological problem, (b) maintain the digital status quo, or (c) interrupt, subvert, and cut against the dysfunctional relationship of the members of my institution to their devices and screens?
If your answer is (a) or (b) and yet you answered earlier that you believe our relationship to digital technology is in serious need of help, then you’ve got a problem on your hands. If your answer is (c), then well done.
Finally, ask yourself this:
How does my own life—the whole suite of my daily habits when no one’s looking, or rather, when everyone is looking (my spouse, my roommate, my children, my coworkers, my neighbors, my pastors, and so on)—reflect, model, and/or communicate my most basic beliefs about the digital status quo? Does the way I live show others that (a) I am aware of the problem, (b) chiefly within myself, and (c) am tirelessly laboring to respond to it, to amend my ways and solve the problem? Or does it evince the very opposite? So that my life and my words are unaligned and even contradictory?
At both the institutional and the personal level, it seems to me that answering these questions honestly and following them to their logical conclusions—not just in our minds or with our words but in concrete actions—would clarify much about the nature of our duties, demands, and decisions in this area of life.
Quitting the Big Five
Could you quit all the companies that make up Silicon Valley’s Big Five? How hard would it be to reduce your footprint down to only one of them?
In a course I teach on digital tech and Christian practice, I walk through an exercise with students. I ask them to name the Big Five (or more) Silicon Valley companies that so powerfully define and delimit our digital lives. They can also name additional apps and platforms that take up time and space in their daily habits. I then ask them:
Supposing you continued to use digital technology—supposing, that is, you did not move onto a tech-free country ranch, unplugged from the internet and every kind of screen—how many of these Big Tech companies could you extract yourself from without serious loss? Put another way, what is the smallest number of such companies you need to live your life?
In my own life, I try to implement a modest version of this. I like to daydream, however, about a more radical version. Let me start with the former then turn to the latter.
In my own life, here’s my current entanglement with the Big Tech firms:
Meta: Almost none (I don’t have a Facebook or Instagram account); the one exception is WhatsApp, which is useful for international and other types of communication. Recently, though, I’ve been nudging those I talk to on WhatsApp to move to another app, so I could quit Zuckerberg altogether.
Microsoft: I use Word (a lot) and PowerPoint (some) and Excel (a bit). Though I’m used to all three, I could live without them—though I’d have two decades’ worth of Word files I’d need to archive and/or convert.
Google: I’ve had the same Gmail account for fifteen years, so it would be a real loss to give it up. I don’t use Google Maps or any of Google’s other smartphone apps. I use Google Docs (etc.) a bit, mostly when others want to collaborate; I avoid it, though, and would not miss it.
Amazon: I’m an Amazon originalist: I use it for books. We pay for Prime. We also use it to buy necessities and gifts for our kids and others. For years I threw my body in front of purchasing an Alexa until my household outvoted me just this summer. Alas.
Apple: Here’s where they get me. I have an iPhone and a MacBook, and I finally gave in and started backing up with a paid account on iCloud. I use iPhoto and Messages and FaceTime and the rest. I’m sure my household will acquire an iPad at some point. In a word, I’m Apple-integrated.
Others: I don’t have TikTok or any other social media accounts. My household has a family Spotify account. I personally use Instapaper, Freedom, and Marco Polo. I got Venmo this summer, but I lived without it for a decade, and could delete it tomorrow. I use Dropbox as well as another online storage business. We have various streaming platforms, but they’ve been dwindling of late; we could live with one or two.
Caveat: I’m aware that digital entanglement takes more than one form, i.e., whether or not I have an Amazon or Gmail or Microsoft (or IBM!) “account,” I’m invariably interacting with, using, and possibly paying for their servers and services in a variety of ways without my even knowing it. Again, that sort of entanglement is unavoidable absent the (Butlerian/Benedictine) move to the wireless ranch compound. But I wanted to acknowledge my awareness of this predicament at least.
Okay. So what would it look like to minimize my formal Big Five “footprint”?
So far as I can see, the answer is simple: Commit exclusively to one company for as many services as possible.
Now, this may be seriously unwise. Like a portfolio, one’s digital assets and services may be safest and best utilized when highly diversified. Moreover, it’s almost literally putting all one’s eggs in a single basket: what if that basket breaks? What if the one company you trust goes bust, or has its security compromised, or finds itself more loyal to another country’s interests than one’s own, and so on and so forth?
All granted. This may be a foolish endeavor. That’s why I’m thinking out loud.
But supposing it’s not foolish, it seems to me that the simplest thing to do, in my case, would be to double down on Apple. Apple does hardware and software. They do online storage. They do TV and movies. They do music and podcasts. They’re interoperable. They have Maps and email and word processors and slideshows and the rest—or, if I preferred, I could always use third-party software for such needs (for example, I already use Firefox, not Safari or Chrome).
So what would it take, in my situation, to reduce my Big Tech footprint from five toes to three or two or even just one?
First, delete WhatsApp. Farewell, Meta!
Second, switch to Keynote and TextEdit (or Pages or Scrivener) and some unknown spreadsheet alternative, or whatever other programs folks prefer. Adios, Microsoft!
Third, download my Gmail archive and create a new, private, encrypted account with a trusted service. Turn to DuckDuckGo with questions. Turn to Apple for directions. Avoid YouTube like the plague. Adieu, Google!
Fourth, cancel Prime, ditch the Alexa, use local outlets for shopping, and order books from Bookshop.org or IndieBound.org or directly from publishers and authors. Get thee behind me, Bezos!
Fifth and finally, pray to the ghost of Steve Jobs for mercy and beneficence as I enter his kingdom, a humble and obedient subject—bound for life…
Whether or not it would be wise, could I seriously do this? I’m sort of amazed at how not implausible it sounds. The hardest thing would be leaving Microsoft Word behind, just because I’ve never used anything else, and I write a lot. The second hardest would be losing the speed, cheapness, and convenience of Amazon Prime for ordering books—but then, that’s the decision that would be best for my soul, and for authors, and for the publishing industry in general. As for life without Gmail, that would be good all around, which is why it’s the step I’m most likely to follow in the next few years.
In any case, it’s a useful exercise. “We” may “need” these corporations, at least if we want to keep living digital lives. But we don’t need all of them. We may not even need more than one.
Tech bubble
From what I read online, I appear to live in a tech bubble: everyone’s addicted to it while knowing it’s bad. Are there really people who aren’t addicted? Are there really others who are addicted, but think it’s good?
Lately it’s occurred to me that I must live in an odd sort of tech bubble. It has two components.
On one hand, no one in my context (a medium-sized city in west Texas) lives in any way “free” from digital technology. Everyone has smartphones, laptops, tablets, and televisions with streaming apps. Most little kids have Kindles or iPads; most 10-to-12-year-olds have phones; nearly every middle schooler has a smartphone. Women are on Instagram and TikTok; men are on Twitter and YouTube. Boys of every age play video games (Switch, Xbox, PS5), including plenty of dads. Adults and kids alike are on their phones during church, during sporting events, during choir performances. Kids watch Disney+ and PBS Kids; parents watch Max and Netflix. Screens and apps, Amazon and Spotify, phones and tablets galore: this is just daily ordinary life. There are no radicals among us. No slices of life carved out. I don’t know anyone without a TV, much less without wireless internet. I don’t know anyone without a smartphone! Life in west Texas—and everywhere else I’m aware of, at least in the Bible Belt—is just like this. No dissenting communes. No screen-free spaces. I’m the campus weirdo for not permitting devices in my classroom, and doubly so for not using a Learning Management System. Nor am I some hard-edged radical. I’m currently typing on a MacBook, and when I leave my office, I’ll listen to an audiobook via my iPhone.
In other words, whenever anyone tells me that the world I’ve just described isn’t normal, isn’t typical, isn’t entrenched and established and nigh unavoidable—I think, “Okay, we simply live in different worlds. I’d like to come tour yours. I’ve not seen it with my own eyes before.” I’m open to being wrong. But I admit to some measure of skepticism. In a nation of 330 million souls, is it meaningful to point to this or that solitary digital experimenter as a site of resistance? And won’t they capitulate eventually anyway?
But maybe not. What do I know?
Here’s the other hand, though. Everyone I know, tech-addled and tech-saturated though they be, everyone agrees that digital technology and social media are a major problem, perhaps the most significant social challenge, facing all of us and especially young people today. No one thinks it’s “no big deal.” No one argues that their kids vegging out on video games all day does nothing to their brains. No one pretends that Instagram and TikTok and Twitter are good for developing adolescents. No one supposes that more screen time is better for anyone. They—we—all know it’s a problem. They—we—just aren’t sure what to do about it. And since it seems such an enormously complex and massive overarching matrix, by definition a systemic problem calling for systemic solutions, mostly everyone just keeps on with life as it is. A few of us try to do a little better: quantifying our kids’ screen time; deleting certain apps; resisting the siren song of smartphones for 12-year-olds. But those are drops in the bucket. No one disputes the nature or extent of the problem. It’s just that no one knows how to fix it; or at least no one has the resolve to be the one person, the one household, in a city of 120,000 to say No! to the whole shebang. And even if there were such a person or household, they’d be a one of one. An extraordinary exception to the normative and unthreatened rule.
And yet. When I read online, I discover that there are people—apparently not insignificant in number?—who do not take for granted that the ubiquity and widespread use of social media, screens, and personal devices (by everyone, but certainly by young people) is a bad thing. In fact, these people rise in defense of Silicon Valley’s holy products, so much so that they accuse those of us worried about them of fostering a moral panic. Any and all evidence of the detrimental effects of teenagers being online four, six, eight hours per day is discounted in advance. It’s either bad data or bad politics. Until very recently I didn’t even realize, naive simpleton that I am, that worrying about these things was politicized. That apparently you out yourself as a reactionary if … checks notes … you aren’t perfectly aligned with the interests of trillion-dollar multinational corporations. That it’s somehow right-wing, rather than common-sense, to want children and young people to move their bodies, to be outdoors, to talk to one another face to face, to go on dates, to get driver’s licenses, to take road trips, to see concerts, to star gaze, to sneak out at night(!), to go to restaurants, to go to parks, to go on walks, to read novels they hold in their hands, to look people in the eye, to play the guitar, to go camping, to visit national parks, to play pick-up basketball, to mow the yard, to join a protest march, to tend a garden, to cook a meal, to paint, to leave the confines of their bedrooms and game rooms, to go to church, to go on a picnic, to have a first kiss—must I go on? No, because everyone knows these are reasonable things to want young people to do, and to learn to do, and even (because there is no other way) to make mistakes and take real risks in trying to learn to do. I know plenty of conservatives and plenty of progressives and all of them, not an exception among them, want their kids off social media, off streaming, off smartphones—on them, at a minimum, much less—and want them instead to do something, anything, out there in the bright blue real world we all share and live in together.
I must allow the possibility, however, that I inhabit a tech bubble. There appear to be other worlds out there. The internet says so. In some of them, I’m told, there are tech-free persons, households, and whole communities enjoying life without the tyrannous glare of the Big Five Big Brother staring back at them through their devices. And in other worlds, running parallel to these perhaps, tech is as omnipresent as it is in my neck of the woods, yet it is utterly benign, liberating, life-giving, and above all enhancing of young people’s mental health. The more screens the better, in that world. To know this is to be right-thinking, which is to say, left-thinking: enlightened and progressive and educated. To deny it is right-thinking in the wrong sense: conservative and benighted and backwards.
Oh, well. Perhaps I’ll visit one of these other worlds someday. For the time being, I’m stuck in mine.
Substack vs. blogging
What is the effect of Substack on writing? Is it the same as blogging, or are the two distinct forms? A post on why I’m still blogging, instead of writing for Substack.
Everyone has a Substack these days. Is that a good thing?
I’ve toyed with starting my own this past year. But I keep pulling up short. Here’s why.
Blogging is its own form at this point. It isn’t an essay. Nor is it a scholarly article. It has no length requirements: a blog post can be a sentence, a paragraph, 500 words, twice that, or twenty times that. Neither does blogging come with expectations of frequency. Some folks blog daily; others multiple times a day; others twice a week; others unpredictably, as a kind of clearinghouse for random ideas or thinking out loud.
Blogging is the shaggy dog of internet writing. It’s playful, experimental, occasional, topical, provisional, personal, tentative. It is inexpert, even when written by experts. It is off the cuff, even when polished and thought through.
And it is conversational, at its origins and in its form. It’s constantly linking, talking, referring, thinking out loud by bouncing ideas off of other ideas, typically found on other blogs. The heyday of the so-called “blogosphere” really was a marvelous time to be on the internet. And it never died, though it shrank, and its spirit lives on in various ways.
I understand how and why certain Substackers treat their new medium as a kind of Blog 2.0. But I don’t think it qualifies. Whatever Substack (along with its many peers and predecessors) is, and even if it is here to stay, it isn’t blogging.
I’m gratified to see Substack succeed, and I think all writers (and readers) should be grateful for its continued viability. The larger ecology of writers and writing has benefited tremendously from it. Writers are getting paid for their labor. Niche subjects are being funded by committed readers. I have more than a few friends who are finding and growing a readership by means of Substack. I subscribe to at least one or two dozen Substacks, and some of them make for essential reading.
But it’s not a blogging replacement, a blog-killer. And inasmuch as writers flock to Substack as the one real option today, I do think it has a certain distorting effect. For what Substack does is impose a certain discipline on its writers. The writing becomes formal, focused, and routinized. Writers produce at least one entry weekly, but typically two to three (or more) per week. Series abound. The word count is expansive. Not only are editors few and far between; quality, as in GRE grading, is tied to length. After all, it’s hard to justify subscriptions without regular, “meaty” posts.
Put charitably, it’s as though everyone is now a writer for Harper’s or First Things or the NYRB or NYT Magazine, only they produce copy at an outrageous rate. Put less charitably, the Substack-ification of writing makes an op-ed columnist out of everyone—except twice or thrice as prolific, minus the journalistic chops, and lacking in editorial oversight.
Hear me say: The net effect of Substack remains good. I don’t want it to go away. What I want is a diverse publishing environment, which includes but is not limited to Substack. Sure, it would be nice to lock in a concrete number of readers who chose to sign up for my newsletter. It would be lovely to start making one or two hundred bucks per month for my writing. It would be reassuring to my sensitive writer’s soul to know that, when I press “publish,” the post I’ve just crafted wasn’t just being launched into the void but into 50 or 500 or 1500 (or more!) inboxes the world over.
But my writing would suffer. I’d start writing like a Substacker. And I don’t want to write like that. I want to write books, journal articles, and magazine essays. And when I’m not doing that, I want to blog.
Kudos, then, and all good things to friends and colleagues who’ve found a home and an audience on Substack. As for me and my writing, however, we’ll stay here, on my own turf.
The take temptation
There is an ongoing series of essays being slowly published in successive issues of The New Atlantis I want to commend to you. They’re by Jon Askonas, a friend who teaches politics at Catholic University of America. The title for the series as a whole is “Reality: A Post-Mortem.” The essays are a bit hard to describe, but they make for essential reading. They are an attempt to diagnose the root causes of, and the essential character of, the new state of unreality we find ourselves inhabiting today. The first, brief essay lays out the vision for the series. The second treats the gamified nature of our common life, in particular its analogues in novels, role-playing games, and alternate reality games (ARGs). The latest essay, which just arrived in my mailbox, is called “How Stewart Made Tucker.” Go read them all! (And subscribe to TNA, naturally. I’ve got an essay in the latest issue too.)
For now, I want to make one observation, drawing on something found in essay #2.
Jon writes (in one of a sequence of interludes that interrupt the main flow of the argument):
Several weeks have gone by since you picked your rabbit hole [that is, a specific topic about which there is much chatter but also much nonsense in public discourse and social media]. You have done the research, found a newsletter dedicated to unraveling the story, subscribed to a terrific outlet or podcast, and have learned to recognize widespread falsehoods on the subject. If your uncle happens to mention the subject next Thanksgiving, there is so much you could tell him that he wasn’t aware of.
You check your feed and see that a prominent influencer has posted something that seems revealingly dishonest about your subject of choice. You have, at the tip of your fingers, the hottest and funniest take you have ever taken.
1. What do you do?
a. Post with such fervor that your followers shower you with shares before calling Internet 911 to report an online murder.
b. Draft your post, decide to “check” the “facts,” realize the controversy is more complex than you thought, and lose track of real work while trying to shoehorn your original take into the realm of objectivity.
c. Private-message your take, without checking its veracity, to close friends for the laughs or catharsis.
d. Consign your glorious take to the post trash can.
2. How many seconds did it take you to decide?
3. In however small a way, did your action nudge the world toward or away from a shared reality?
Let’s call this gamified reinforcement mechanism “the take temptation.” It amounts to the meme-ification of our common life and, therefore, of the common good itself. Jon writes earlier in the essay, redescribing the problem behind the problem:
We hear that online life has fragmented our “information ecosystem,” that this breakup has been accelerated by social division, and vice versa. We hear that alienation drives young men to become radicalized on Gab and 4chan. We hear that people who feel that society has left them behind find consolation in QAnon or in anti-vax Facebook groups. We hear about the alone-togetherness of this all.
What we haven’t figured out how to make sense of yet is the fun that many Americans act like they’re having with the national fracture.
Take a moment to reflect on the feeling you get when you see a headline, factoid, or meme that is so perfect, that so neatly addresses some burning controversy or narrative, that you feel compelled to share it. If it seems too good to be true, maybe you’ll pull up Snopes and check it first. But you probably won’t. And even if you do, how much will it really help? Everyone else will spread it anyway. Whether you retweet it or just email it to a friend, the end effect on your network of like-minded contacts — on who believes what — will be the same.
“Confirmation bias” names the idea that people are more likely to believe things that confirm what they already believe. But it does not explain the emotional relish we feel, the sheer delight when something in line with our deepest feelings about the state of the world, something so perfect, comes before us. Those feelings have a lot in common with how we feel when our sports team scores a point or when a dice roll goes our way in a board game.
It’s the relish of the meme, the fun of the hot take—all while the world burns—that Jon wants us to see so that he, in turn, can explain it. I leave the explanation to him. For my part, I’m going to do a bit of moralizing, aimed at myself first but offered here as a bit of stern encouragement to anyone who’s apt to listen.
The moral is simple: The take temptation is to be resisted at all costs, full stop. The take-industrial complex is not a bit of fun at the expense of others. It’s not a victimless joke. It is nothing less than your or my small but willing participation in unraveling the social fabric. It is the false catharsis that comes from treating the goods in common we hope to share as a game, to be won or lost by cheap jokes and glib asides. Nor does it matter if you reserve the take or meme for like-minded friends. In a sense that’s worse. The tribe is thereby reinforced and the Other thereby rendered further, stranger, more alien than before. You’re still perpetuating the habit to which we’re all addicted and from which we all need deliverance. You’re still feeding the beast. You’re still heeding the sly voice of the tempter, whose every word is a lie.
The only alternative to the take temptation is the absolutely uncool, unrewarding, and unremunerative practice of charity for enemies, generosity of spirit, plainness of prose, and perfect earnestness in argument. The lack of irony is painful, I know; the lack of sarcasm, boring; the lack of grievance, pitiful. So be it. Begin to heal the earth by refusing to litter; don’t wish the world rid of litter while tossing a Coke can out the window.
This means not reveling in the losses of your enemies, which is to say, those friends and neighbors for whom Christ died with whom you disagree. It means not joking about that denomination’s woes. It means not exaggerating or misrepresenting the views of another person, no matter what they believe, no matter their character, no matter who they are. It means not pretending that anyone is beyond the pale. It means not ridiculing anyone, ever, for any reason. It means, practically speaking, not posting a single word to Twitter, Instagram, Facebook, or any other instrument of our digital commons’ escalating fracture. It means practicing what you already know to be true, which is that ninety-nine times out of one hundred, the world doesn’t need to know what you think, when you think it, by online means.
The task feels nigh impossible. But resistance isn’t futile in this case. Every minor success counts. Start today. You won’t be sorry. Nor will the world.
Personal tech update
It’s been an unplanned, unofficial Tech Week here at the blog. I’ve been meaning to write a mini-update on my tech use—continuing previous reflections like these—so now seems as good a time as any.
–I deactivated my Twitter account on Ash Wednesday, and I couldn’t be happier about the decision. It was a long time coming, but every time I came close to pulling the trigger I froze. There was always a reason to stay. Even Lent provided an escape hatch: my second book was being published right after Easter! How could I possibly hawk my wares—sorry, “promote my work in the public sphere”—if I wasn’t on Twitter? More to the point, does a writer even exist if he doesn’t have a Twitter profile? Well, it turns out he does, and is much the healthier for it. I got out pre–Elon Musk, too, which means I’ve been spared so much nonsense on the proverbial feed. For now, in any case, I’m keeping the account by reactivating then immediately deactivating it every 30 days; that may just be a sort of digital security blanket, though. Life without Twitter is good for the soul. Kempis and Bonhoeffer are right. Drop it like the bad habit that it is. Know freedom.
–I deleted my Facebook account two or three years ago, and I’ve never looked back. Good riddance.
–I’ve never had Instagram, TikTok, Snapchat, or any of the other nasty social media timesucks folks devote themselves to.
–For the last 3-4 years I’ve been part of a Slack for some like-minded/like-hearted Christian writers, and while the experience has been uniformly positive, I realized that it was colonizing my mind and thus my attention during the day, whether at work or at home. So, first, I set up two-factor authentication with my wife’s phone, which means I need her to give me access if I’m signed out; and, second, I began limiting my sign-ins to two or three Saturdays per month. After a few months the itch to be on and participate constantly in conversations has mostly dribbled away. Now I might jump on to answer someone’s question, but only for a few minutes, and not to “stay on” or keep up with all the conversations. I know folks for whom this isn’t an issue, but I’ve learned about myself, especially online, that it’s all or nothing. As with Twitter, I had to turn off the spout, or I would just keep on drinking until it made me sick.
–I don’t play video games, unless it’s a Mario Kart grand prix with my kiddos.
–I only occasionally use YouTube; nine times out of ten it’s to watch a movie trailer. I cannot relate to people, whether friends or students, who spend hours and hours on YouTube. I can barely watch a Zoom conversation for five minutes before needing to do something else with my time.
–I subscribe to Spotify, because it’s quality bang for your buck. I’d love to divest from it—as my friend Chris Krycho constantly adjures me to do—but I’m not sure how, should I want to have affordable, legal access to music (for myself as well as my family).
–I subscribe to Audible (along with Libby), because I gave up podcasts for audiobooks last September, a decision about which I remain ecstatic, and Audible is reasonably priced and well-stocked and convenient. If only it didn’t feed the Beast!
–I happily use Instapaper, which is the greatest app ever created. Hat tip to Alan Jacobs, from whom I learned about it in, I believe, his book The Pleasures of Reading in an Age of Distraction. I’ve even paid to use the advanced version, and will do so again in the future if the company needs money to survive.
–I’ve dumbed down my iPhone as much as is in my power to do. I’ve turned off location services, the screen is in grayscale, and I’m unable to access my email (nor do I have my password memorized, so I can’t get to my inbox even if I’m tempted). I can call or text via Messages or WhatsApp. I have Audible, Spotify, and Instapaper downloaded. I use Marco Polo for friends and family who live far away. And that’s it. I aim to keep my daily phone usage to 45 minutes or so, but this year it’s been closer to 55-75 minutes on average.
–I use a MacBook Pro for work, writing, and other purposes; I don’t have an iPad or tablet of any kind. My laptop needs are minimal. I use the frumpy, clunky Office standbys: Word, Excel, PowerPoint. I’ve occasionally sampled or listened to pitches regarding the glories of alternatives to Word for writing, but honestly, for my needs, my habits, and my convenience, Word is adequate. As for internet browsing, I use Firefox and have only a few plug-ins: Feedly for an RSS reader, Instapaper, and Freedom (the second greatest app ever)—though I’ve found that I use Freedom less and less: only when (a) I’m writing for 2-4 or more hours straight and (b) I find myself distracted by the internet (but don’t need access to it) do I turn it on. I pay to use it, but I may end up quitting if I eventually find that I’ve developed the ability to write without distraction for sustained periods of time.
–I’ve had a Gmail account since 2007; I daydream about deleting my Google account and signing up for some super-encrypted unsurveiled actually-private email service (again, Krycho has the recs), but so far I can’t find it within me to start from scratch and leave Gmail. We’ll see.
–I have the same dream about Amazon, which I use almost every day, order all my books from, have a Prime account with, and generally resent with secret pleasure (or enjoy with secret resentment). Divesting from Amazon seems more realistic than doing so from either Apple or Google, but then, how does anyone with a modest budget who needs oodles of books (or whatever) for their daily work purchase said books (or whatever) from any source but Amazon? That’s not a nut I’ve managed to crack just yet.
–I don’t have an Alexa or an Echo or an Apple Watch or, so far as I know, any species of the horrid genus “the internet of things.”
–In terms of TV and streaming services, currently my wife and I pay for subscriptions with … no platforms, unless I’m mistaken. At least, we are the sole proprietors of none. On our Roku we have available Netflix, Prime, Hulu, Disney+, Apple+, HBO Max, and YouTubeTV. But one of these is free with our cellular service (Hulu), two of them are someone else’s account (Apple+ and YouTubeTV), and another is a byproduct of free shipping (Prime). We pay a nominal fee as part of extended family/friend groups for Netflix and HBO, and honestly we could stop tomorrow and we’d barely notice. We paid a tiny fee up front for three years of Disney+, and if we could have only one streaming service going forward, that’s what we’d keep: it has the best combination of kids, family, classic, and grown-up selections, and you can always borrow a friend’s password or pay one month’s cost to watch a favorite/new series/season before canceling once it’s over. As for time spent, across a semester I probably average 3-7 hours of TV per week. I’ve stopped watching sports altogether, and I limit shows to either (a) hands-down excellence (Better Call Saul, Atlanta, Mare of Easttown), (b) family entertainment (basically, Marvel and Star Wars), or (c) undemanding spouse-friendly fare (Superstore, Brooklyn 99, Top Chef). With less time during the school year, I actually end up watching more TV, because I’m usually wiped by the daily grind; whereas during the summer, with much more leisure time, I end up reading or doing other more meaningful things. I will watch the NBA playoffs once grades are submitted, but then, that’s nice to put on in the background, and the kids enjoy having it on, too.
–Per Andy C.’s tech-wise advice, we turn screens off on Sundays as a general rule. We keep an eye on screen time for the kids Monday through Thursday, and don’t worry about it as much on Friday and Saturday, especially since outdoor and family and friend activities should be happening on those days anyway.
–Oddly enough, I made it a goal in January of this year to watch more movies in 2022. Not only am I persuaded that, by comparison to television, film is the superior art form and that the so-called golden age of peak TV is mostly a misnomer; I also regret having lost the time—what with bustling kids and being gainfully employed—to keep up with quality movies. What time I do have to watch stuff I usually give to TV, being the less demanding medium: it’s bite size, it always resolves (or ends on a cliffhanger), and it doesn’t require committing to 2-3 hours up front. I’ve mostly not been successful this year, but I’m hoping the summer can kickstart my efforts in that area.
–If I’m honest, I find that I’ve mostly found a tolerable equilibrium with big-picture technology decisions, at least on an individual level. If you told me that, in two years, I no longer used Amazon, watched even less TV, and traded in my iPhone for a flip phone, I’d be elated. Otherwise, my goals are modest. Mainly it has to do with time allocation and distraction at work. If I begin my day with a devotional and 2-4 hours of sustained reading all prior to opening my laptop to check email, then it’s a good day. If the laptop is opened and unread mail awaits in the inbox, it’s usually a waste of a day. The screen sucks me in and the “deep work” I’d hoped to accomplish goes down the drain. That may not be how it goes for others, but that’s how it is with me.
–The only other tech-related facet of my life I’m pondering is purchasing a Kobo Elipsa (again, on the recommendation of Krycho and some other tech-wise readerly types). I’m not an especially good reader of PDFs; usually I print them out and physically annotate them. But it would be nice to have a reliable workflow with digital files, digital annotations, and searchable digital organization thereof. It would also help with e-reading—I own a 10-year old Kindle but basically never use it—not only PDFs for work but writings in the public domain, ePub versions of new books I don’t need a physical copy of (or perhaps can only get a digital version of, for example, via the library), and Instapaper-saved articles from online sources. I’ve never wanted a normal tablet for this purpose because I know I’d just be duped into browsing the web or checking Twitter or my inbox. But if Kobo is an ideal balance between a Kindle and an iPad, designed for the sole purpose for which I need it, then I may end up investing in it here in the next year or two.
The meta mafia
Here’s something Mark Zuckerberg said during his indescribably weird announcement video hawking Facebook’s shift into the so-called metaverse:
There are going to be new ways of interacting with devices that are much more natural. Instead of typing or tapping, you’re going to be able to gesture with your hands, say a few words, or even just make things happen by thinking about them. Your devices won’t be the focal point of your attention anymore. Instead of getting in the way, they’re going to give you a sense of presence in the new experiences that you’re having and the people who you’re with. And these are some of the basic concepts for the metaverse.
There are many things to say about this little snippet—not to mention the rest of his address. (I’m eagerly awaiting the 10,000-word blog-disquisition on the Zuck’s repeated reference to “presence” by contradistinction to real presence.) But here’s the main thing I want to home in on.
In our present practice, per the CEO of Facebook-cum-Meta, devices have become the “focal point” of our “attention.” Whereas in the future being fashioned by Meta, our devices will no longer “get in the way” of our “presence” to one another. Instead, those pesky devices now out of the way, the obstacles will thereby be cleared for the metaverse to facilitate, mediate, and enhance the “sense of presence” we’ve lost in the device-and-platform obsessions of yesteryear.
Now suppose, as a thought experiment, that an anti–Silicon Valley Luddite wanted to craft a statement and put it into the mouth of someone who stands for all that’s wrong with the digital age and our new digital overlords—a statement rife with subtle meanings legible only to the initiated, crying out for subtextual Straussian interpretation. Would one word be different in Zuckerberg’s script?
To wit: a dialogue.
*
Question: Who introduced those ubiquitous devices, studded with social media apps, that now suck up all our attention, robbing us of our sense of presence?
Answer: Steve Jobs, in cahoots with Mark Zuckerberg, circa 15 years ago.
Question: And what is the solution to the problem posed by those devices—a problem ranging from partisan polarization to teen addiction to screens to social anomie to body image issues to political unrest to reduced attention spans to diffuse persistent low-grade anxiety to massive efforts at both disinformation and censorship—a problem, recall, introduced by (among others) Mr. Zuckerberg, which absorbs and deprives us all, especially young people, of the presence and focus necessary to inhabit and navigate the real world?
Answer: The metaverse.
Question: And what is that?
Answer: An all-encompassing digital “world” available via virtual reality headsets, in which people, including young people, may “hang out” and “socialize” (and “teleport” and, what is the Zuck’s go-to weasel word, “connect”) for hours on end with “avatars” of strangers that look like nothing so much as gooey Minecraft simulacra of human beings.
Question: And who, pray tell, is the creator of the metaverse?
Answer: Why, the creator of Facebook: Mark Zuckerberg, CEO of Meta.
Question: So the problem with our devices is not that they absorb our attention, but that they fall short of absolute absorption?
Answer: Exactly!
Question: So the way the metaverse resolves the problem of our devices’ being the focal point of our attention is by pulling us into the devices in order to live inside them? So that they are no longer “between” or “before” us but around us?
Answer: Yes! You’re getting closer.
Question: Which means the only solution to the problem of our devices is more and better devices?
Answer: Yes! Yes! You’re almost there!
Question: Which means the author of the problem is the author of the solution?
Answer: You’ve got it. You’re there.
Question: In sum, although the short-sighted and foolish-minded among us might suppose that the “obvious solution” to the problem of Facebook “is to get rid of Facebook,” the wise masters of the Valley propose the opposite: “Get rid of the world”?
Answer: This is the way. You’ve arrived. Now you love Big Zuckerberg.
*
Lest there be any doubt, there is a name for this hustle. It’s a racket. When a tough guy smashes the windows of your shop, after which one of his friends comes by proposing to protect you from local rowdy types, only (it goes without saying) he requires a weekly payment under the table, you know what’s happening. You’re not grateful; you don’t welcome the proposal. It’s the mafia. And mob protection is fake protection. It’s not the real thing. The threat and the fix wear the same face. Maybe you aren’t in a position to decline, but either way you’re made to be a fool. Because the humiliation is the point.
Mark Zuckerberg is looking us in the eyes and offering to sell us heroin to wean us off the cocaine he sold us last year.
It’s a sham; it’s a racket; it’s the mob; it’s a dealer.
Just say no, y’all.
Just. Say. No.
Quit podcasts
“Quit Netflix.” Matt Anderson has made that phrase something of a personal slogan. Among the many controversial things he’s written—he’s a controversialist, after all—it may be the most controversial. We like our shows in the age of Peak TV. And what he wants to do is take them away.
Okay, not exactly. But he thinks you—all of us—would be better off if we canceled our Netflix subscriptions. For the obvious reasons: Netflix is a candy bar, a sedative, a numbing device by which we pass the time by doing nothing especially worthwhile at all. It turns out that’s true not only of Netflix but of the so-called Golden Age of TV as a whole. It’s not that there are no good TV shows. It’s that, in almost every circumstance, there are dozens of better things you could be doing with your time. (You might be inclined to resist this claim, but in your heart you know it’s inarguable.) Add to that the fact that watching Prestige TV has become, in the last dozen years, a vaguely moralized social obligation for a certain subset of upper-middle class white-collar professionals, and perhaps Matt is right that Christians not only may but should quit Netflix.
Point taken. Now allow me to swap out a different activity for your quitting consideration.
Podcasts.
Podcasts, as you well know, are the new Prestige TV. They’re ubiquitous. Not only does everyone have a favorite podcast. Everyone has a podcast, i.e., hosts one of their own. Or is starting one. Or is planning to. Or has an idea for one. They just need to get the equipment and line up some guests . . .
I live and work right next to a college campus. If you see someone walking on campus and that person is under 40 and alone, almost certainly she has AirPods in her ears, and chances are those AirPods are playing a podcast. (Maybe music. Maybe.) What is the podcast? Who knows? There are literally millions today, on every topic under the sun. “Have you listened to [X] podcast?” is the new “Have you seen the latest episode of [X]?” Just last month our pediatrician asked me, in the middle of a check-up for one of our kids, given my job, whether I was listening to the Mars Hill podcast. Alas, I had to say no.
Now, this post is two-thirds troll, one-third sincere. I’ve been listening to podcasts for nearly 15 years. My first was Bill Simmons’ old pod for ESPN, whose first episode dropped while I was living in an apartment in Tomsk, Russia, early in the summer of 2007. I’ve been listening to Simmons off and on ever since. My podroll has increased as I’ve aged, and some of my typical listens are among the usual suspects: Ezra Klein, The Argument, Zach Lowe, The Watch, The Rewatchables, Know Your Enemy, LR&C, Tyler Cowen, Mere Fidelity, The Editors, a few others here and there. Washing the dishes or cleaning the house, it’s a pleasure to listen to these folks talk about sports and entertainment and news and politics and theology. It’s background noise, their voices become like those of friends, and occasionally you even learn things.
So unlike the scourge of Prestige TV—which is little more than a Trojan horse for reinforcing the single greatest collective habitual addiction besetting our society for nearly a century—podcasts aren’t All Bad, nor are their benefits mainly a function of rationalization and self-justification. I’m not worried about them in the same way.
Having said that. Let me suggest a few reasons why you ought to be a little more skeptical of them. So as to decrease your podcasting, and maybe even to quit it.
First, podcasts are filler. They’re aural wallpaper. They’re something to have on in the background while you do something else, something that requires your actual focus and attention. If that’s true, can they really be that substantial? Aren’t they, as often as not, little more than snack food for the ears?
Second, if you really want to listen to something (say, on a road trip or a long walk or while working out), why not listen to an audiobook? Ninety-nine times out of a hundred, the book will be worth your time—a four-course meal—in a way the Cheetos-posing-as-Michelin-starred-podcast will not. You could buy audiobooks or sign up for Audible, as a way to patronize authors, or you could use Overdrive or Libby. You have more or less every great work of literature and prose in the English language available to listen to for free; sometimes the work was composed with the express purpose of hearing it read aloud, rather than reading words on a page. Why not give it a try?
Third, those first two points suppose it’s not a problem, us walking around incessantly filling our ears with the voices of others, thus blocking out the noises—and silences—of the real world. Isn’t it a problem, though, this anxious need to fill even the smallest of snatches of time with meaningful noise, lest we be oppressed by the stillness and quietness (or, if you’re in a big city, the parading loudness) of life? Or perhaps we feel anxious to Do Something Productive with our time: If our attention can manage it, surely we Ought To Be Learning/Listening/Thinking? Nonsense. If to cook is to pray, so is every other daily “mindless” habitual activity that doesn’t demand the totality of our attention. Such activity may, in fact, permit our attention to be at ease, or to meditate on other matters, or to examine our days, or to wander as it pleases. Or, as the case may be, to choke on emotions we’d rather not address, indeed would rather numb and sedate and repress through unremitting distraction. Perhaps podcasts are a kind of noise pollution but on an individual level, self-chosen rather than imposed from without. We just have to refuse the urge to put the pods in and press play.
Fourth, podcasts almost invariably trade on the new, the latest, the exclusive-breaking-this-just-in-ness of our forgetful presentist age. In this they’re analogous to Twitter: an infinite scroll, not for the eyes, but for the ears. Doubtless some people listen to podcasts while scrolling Twitter. (The horror!) The podcasts play on, world without end, one blending into another, until you forget where one begins and one ends. Of all the podcasts you’ve ever listened to—and I’ve surely listened to thousands—how many discrete episodes can you point to, from memory, and say, “That one, right there, was significant, a meaningful and substantive and life-giving episode”? I’m not saying you couldn’t pick out a few. I’m suggesting the batting average will be very low. Again, like remembering individual tweets. That’s why podcasts are so disposable. The moment they lose their immediate relevance, they are cast aside into the dustbin of history. It’s what makes writers who become podcasters so sad. Books and essays and columns stand the test of time. Pods do not. Bill Simmons, whom I referenced earlier, stopped writing five or six years ago. He likes to say his fingers stopped working. The truth is, a combination of market inefficiency plus the convenience of podcasting meant taking the time to draft and revise and draft and revise, under an editor’s watchful eye, was less convenient and more time-demanding than hopping onto a podcast seconds after a game ended—plus advertisers are willing to pay for that in a way they don’t for individual columns. So a writer who came onto the scene and made a name for himself because of his writing simply ceased to practice his craft. That’s something to lament. Beyond that individual case, though, it’s a parable for our time. And Simmons is someone with an audience in the millions. Yet his thousands upon thousands of podcasts from the last decade will never be listened to a second time, now or in the future. They might as well be lit on fire ten days after going online. The same goes for politics podcasts. They’re talk radio, only rarefied and highbrow. But they have the same shelf life. And they partake of the selfsame contemporary obsession with The New that all people of good will, but certainly Christians (and Jews and Muslims), should repudiate in all its forms. Go read Rolf Dobelli’s Stop Reading the News and you’ll realize just how unimportant—in general and to your own daily life—“keeping up with the news” is. That goes for politics and sports and entertainment (but I repeat myself) as much as for anything else. Stop reading the news translates to stop listening to the news, which I will gloss here as stop listening to podcasts devoted to The New.
Fifth, podcasts have increasingly become niche and personalized, as so much in our digital economy has. You, the individual consumer-listener, pay the individual content-maker/podcaster, perhaps become a Patreon supporter, and thereby receive Exclusive Access and Personal Benefits and other just-for-you paid-for goods and services. Am I the only one who finds all of this ever so slightly weird, even gross? I don’t begrudge anyone hustling to do their thing or to find an audience, precisely outside of the decaying and desiccated institutions that act as gatekeepers today. But there’s something icky about it nonetheless. In the same way that news-watchers can exist in entirely different moral and epistemic universes—one presided over and mediated by Fox News, the other by MSNBC—so podcast-listeners curate their own little private aural worlds with nary a glance at or interruption from another. It doesn’t help that this ecosystem (or ecosphere?) overlaps substantially with the gig-cum-influencer economy, where fame and fortune are always one viral moment away, for anyone and everyone. We’re all always already potentially (in)famous and affluent, if only the digital stars will align. We try to nudge those stars by flooding the market with our content, a sort of astrology or spell-conjuring with ones and zeroes, or in this case, “Thanks for following; while you’re here, check out my SoundCloud.”
In any case, those are at least some of the reasons for increasing your skepticism quotient in this matter. Rather than merely adopting a slightly more skeptical eye, though, consider whether you ought to go all the way. For there’s a solution lying close to hand.
Quit podcasts.
Charity
What if, when a person you are reading or listening to states a conviction or comes to a conclusion with which you disagree, your first thought were not that such a person must, by necessary consequence, be wicked, stupid, cruel, incurious, unserious, or otherwise worthy of public censure and ridicule?
What if, when disagreement obtains between persons or groups, we understood that disagreement to be neither absolute nor permanent nor exclusive of friendship, neighborliness, mutual respect, and generosity?
What would happen if we all acted on what we already know to be true, namely, that social media—Twitter above all—is inhospitable to reasoned discourse and charitable interpretation? that it is not a sounding board for honest reflection but a storehouse of mental waste, emotional disquiet, and psychic poison? that irony and mockery are not bugs accidental to the system, but features endemic to it? that every second spent on it is invariably a malformation of one’s mind, heart, soul, and habits of attention? that the only worthwhile thing to do with Twitter et al is not “be a better user” but blast it into the sun?
From meaningless content to doomscrolling
One of the truly essential Substack writers is Justin E. H. Smith, who is neither a journalist nor a start-up freelancer but a major academic philosopher and polymath scholar of (what always strikes me as) ten thousand interesting things. His newsletter from two Sundays ago was a typically undefinable reflection on (inter alia) memory, streaming, tense, eternity, and the internet. Here are some sample grafs that bring home one of the essay’s central points:
If this assessment sounds bleak or cynical, consider Amazon’s recent acquisition of MGM for $8.45 billion. Jeff Bezos now holds the rights to numerous treasures of twentieth-century American entertainment, not least Albert R. Broccoli’s almost boutique-style James Bond films with their iconic, mythos-incanting musical opening numbers. Bezos has explicitly stated his intention to “reimagine and redevelop that I.P. [sic] for the 21st century.” On the surface, his idea of what a “good plot” looks like would seem to make twenty-first century content scarcely different from the most archaic and deep-rooted elements of myth and lore. Thus he thinks there needs to be a heroic protagonist, a compelling antagonist, moral choices, civilizational high stakes, humor, betrayal, violence…
“I know what it takes to make a great show,” Bezos has confidently said, “this should not be that hard. All of these iconic shows have these basic things in common.” The problem is that Bezos’s purpose in returning to a quasi-Proppian schema of all possible storytelling is not at all to revive the incantatory power of cliché to move us into the ritual time of storytelling. It is rather to streamline and dynamicize the finished product, exactly as if it were shipping times Bezos were seeking to perfect, rather than the timing of a hero’s escape from a pit of conventional quicksand.
And so the college freshman imagining her life as a show seems doubly sad: she turns to the closest thing we have to new narrative art in order to frame her own life and make it meaningful, but the primary instances our culture yields up to her to help with this framing are in fact only meaningless content being passed off as narrative art. It is no wonder, then, that what she will likely end up doing, after the passing and briefly stimulating thought of life itself as a TV show, is to go back to doomscrolling and vain name-checking until sleep takes over.
Do go read the whole thing; the closing section is eloquent, incisive, and damning in equal parts. Then do your duty and subscribe.