Resident Theologian

Brad East

A.I., TikTok, and saying “I would prefer not to”

Finding wisdom in Bartleby for a tech-addled age.

Two technology pieces from last week have stuck with me.

Both were at The New York Times. The first, titled “How TikTok Changed America,” was a sort of image/video essay about the platform’s popularity and influence in the U.S. The second was an episode of Ezra Klein’s podcast called “How Should I Be Using A.I. Right Now?,” an interview with Ethan Mollick.

To be clear, I skimmed the first and did not listen to the second; I only read Klein’s framing description for the pod (my emphases):

There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.

So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide…

This conversation covers the basics, including which chatbot to choose and techniques for how to get the most useful results. But the conversation goes far beyond that, too — to some of the strange, delightful and slightly unnerving ways that A.I. responds to us, and how you’ll get more out of any chatbot if you think of it as a relationship rather than a tool.

These two pieces brought to mind two things I’ve written recently about social media and digital technology more broadly. The first comes from my New Atlantis essay, published two years ago, reviewing Andy Crouch’s book The Life We’re Looking For (my emphases again):

What we need is a recommitment to public argument about purpose, both ours and that of our tools. What we need, further, is a recoupling of our beliefs about the one to our beliefs about the other. What we need, finally, is the resolve to make hard decisions about our technologies. If an invention does not serve the human good, then we should neither sell it nor use it, and we should make a public case against it. If we can’t do that — if we lack the will or fortitude to say, with Bartleby, We would prefer not to — then it is clear that we are no longer makers or users. We are being used and remade.

The other comes late in my Commonweal review, published last summer, of Tara Isabella Burton’s book Self Made:

It may feel to some of us that “everyone,” for example, is on Instagram. Only about 15 percent of the world is on the platform, however. That’s a lot of people. Yet the truth is that most of the world is not on it. The same goes for other social media. Influencer culture may be ubiquitous in the sense that most people between the ages of fifteen and thirty-five are affected by it in some way. But that’s a far cry from digitally mediated self-creation being a universal mandate.

Even for those of us on these apps, moreover, it’s possible to opt out. You don’t have to sell yourself on the internet. You really don’t. I would have liked Burton to show us why the dismal story she tells isn’t deterministic—why, for example, not every young woman is fated to sell her image on OnlyFans sooner or later.

The two relevant phrases from these review essays: You really don’t and Bartleby’s I would prefer not to. They are quite simply all you need in your toolkit for responding to new technologies like TikTok and generative A.I.

For example, the TikTok piece states that half of Americans are on the app. That’s a lot! Plenty to justify the NYT treatment. I don’t deny it. But do you know what that claim also means? That half of us aren’t on it. Fifty percent. One out of every two souls. Which is the more relevant statistic, then? Can I get a follow-up NYT essay about the half of us who not only aren’t tempted to download TikTok but actively reject it, can’t stand it, renounce it and all its pomp?

The piece goes further: “Even if you’ve never opened the app, you’ve lived in a culture that exists downstream of what happens there.” Again, I don’t deny it or doubt it. It’s true, to my chagrin. And yet the power of such a claim is not quite what it seems at first glance.

The downstream influence of TikTok works primarily if and as one is also (or instead) an active user of other social media platforms (as well as, perhaps, cable news programs focused on politics and entertainment). I’m told you can’t get on YouTube or Instagram or Twitter or Facebook without encountering “imported” content from TikTok, or “local” content that’s just Meta or Google cribbing from TikTok. But what if, like me, you don’t have an account on any of these platforms? What if you abstain completely from all social media? And what if you don’t watch Fox News or MSNBC or CNN or entertainment shows or reality TV?

I was prepared, reading the NYT piece, to discover all the ways TikTok had invaded my life without my even realizing it. It turns out, though, that I don’t get my news from TikTok, or my movie recommendations, or my cooking recipes, or my fashion advice(!), or my politics, or my Swiftie hits, or my mental health self-diagnoses, or my water bottle, or my nightly entertainment before bed—or anything else. Nothing. Nada. Apparently I have been immune to the fifteen “hottest trends” on TikTok, the way it invaded “all of our lives.”

How? Not because I made it a daily goal to avoid TikTok. Not because I’m a digital ascetic living on a compound free of wireless internet, smartphones, streaming TV, and (most important) Gen Z kiddos. No, it’s because, and more or less only because, I’m not on social media. Turns out it isn’t hard to get away from this stuff. You just don’t download it. You just don’t create an account. If you don’t, you can live as if it doesn’t exist, because for all intents and purposes, for your actual life, it doesn’t.

As I said: You really don’t have to, because you can just say I would prefer not to. All told, that’s enough. It’s adequate all on its own. No one is forcing you to do anything.

Which brings us to Ezra Klein.

Sometimes Klein seems like he genuinely “gets” the scale of the threat, the nature of the digital monstrosity, the power of these devices to shape and rewire our brains and habits and hearts. Yet other times he sounds like just another tech bro who wants to maximize his digital efficiencies, to get ahead of the masses, to get a silicon leg up on the competition, to be as early an adopter as possible. I honestly don’t get it. Does he really believe the hype? Or does he not? At least someone like Tyler Cowen picks a lane. Come join the alarmist train, Ezra! There’s plenty of room! All aboard!

Seriously though, I’m trying to understand the mindset of a person who asks aloud with complete sincerity, “How should I incorporate A.I. into my life ‘better’?” It’s the “should” that gets me. Somehow this is simultaneously a social obligation and a moral duty. Whence the ought? Can someone draw a line for me from this particular “is” to Klein’s technological ought?

In any case, the question presumes at least two things. First, that prior to A.I. my life was somehow lacking. Second, that just because A.I. exists, I need to “find a place for it” in my daily habits.

But why? Why would we ever grant either of these premises?

My life wasn’t lacking anything before ChatGPT made its big splash. I wasn’t feeling an absence that Sam Altman could step in to fill. There is no Google-shaped hole in my heart. As a matter of fact, my life is already full enough: both in the happy sense that I have a fulfilling life and in the stressful sense that I have too much going on in my life. As John Mark Comer has rightly pointed out, the only way to have more of the former is through having less of the latter. Have more by having less; increase happiness by jettisoning junk, filler, hurry, hoarding, much-ness.

Am I really supposed to believe that A.I.—not to mention an A.I. duplicate of myself in order (hold gag reflex) to know myself more deeply (I said hold it!) in ways I couldn’t before—is not just one more damn thing to add to my already too-full life? That it holds the secrets of self-knowledge, maximal efficiency, workflow, work–life balance, relational intimacy, personal creativity, and labor productivity? Like, I’m supposed to type these words one after another and not snort-laugh with derision but instead take them seriously, very seriously, pondering how my life was falling short until literally moments ago, when A.I. entered my life?

It goes without saying that, just because the technology exists, I don’t “need” to adopt it or incorporate it into my life. There is no technological imperative, and if there were it wouldn’t be categorical. The mere existence of a technology is neither self-justifying nor self-recommending. And must I add that the endless hours of time, energy, and attention devoted to learning this latest invention (hours stolen from other, infinitely more meaningful pursuits) will undeniably be superseded and almost immediately made redundant, given that the invention is nowhere near its final form? Even if A.I. were going to improve daily individual human flourishing a hundredfold, the best thing to do, right now, would be absolutely nothing. Give it another year or ten or fifty and they’ll iron out the kinks, I’m sure of it.

What this way of approaching A.I. has brought home to me is the unalterably religious dimension of technological innovation, and this in two respects. On one side, tech adepts and true believers approach innovation not only as one more glorious step in the march of progress but also as a kind of transcendent or spiritual moment in human growth. Hence the imperative. How should I incorporate this newfangled thing into my already tech-addled life? becomes not just a meaningful question but an urgent, obvious, and existential one.

On the other side, those of us who are members of actual religious traditions approach new technology with, at a minimum, an essentially skeptical eye. More to the point, we do not approach it expecting it to do anything for our actual well-being, in the sense of deep happiness or lasting satisfaction or final fulfillment or ultimate salvation. Technology can and does contribute to human flourishing but only in its earthly, temporal, or penultimate aspects. It has nothing to do with, cannot touch, never can and never will intersect with eternity, with the soul, with the Source and End of all things. Technology is not, in short, a means of communion with God. And for those of us (not all religious people, but many) who believe that God has himself already reached out to us, extending the promise and perhaps a partial taste of final beatitude, it would never occur to us—it would present as laughably naive, foolish, silly, self-deceived, idolatrous—to suppose that some brand-new man-made tool might fix what ails us; might right our wrongs; might make us happy, once and for all.

It’s this that’s at issue in the technological “ought”: the “religion of technology.” It’s why I can’t make heads or tails of stories or interviews like the ones I cited above. We belong to different religions. It may be that there are critical questions one can ask about mine. But at least I admit to belonging to one. And, if I’m being honest, mine has a defensible morality and metaphysics. If I weren’t a Christian, I’d rather be just about anything than a true-believing techno-optimist. Of all religions on offer today, it is surely the most self-evidently false.


Screentopia

A rant about the concern trolls who think the rest of us are too alarmist about children, screens, social media, and smartphones.

I’m grateful to Alan for writing this post so I didn’t have to. A few additional thoughts, though. (And by “a few thoughts” I mean rant imminent.)

Let me begin by giving a term to describe, not just smartphones or social media, but the entire ecosystem of the internet, ubiquitous screens, smartphones, and social media. We could call it Technopoly or the Matrix or just Digital. I’ll call it Screentopia. A place-that-is-no-place in which just about everything in our lives—friendship, education, finance, sex, news, entertainment, work, communication, worship—is mediated by omnipresent interlinked personal and public devices as well as screens of every size and type, through which we access the “all” of the aforementioned aspects of our common life.

Screentopia is an ecosystem, a habitat, an environment; it’s not one thing, and it didn’t arrive fully formed at a single point in time. It achieved a kind of comprehensive reach and maturity sometime in the last dozen years.

Like Alan, I’m utterly mystified by people who aren’t worried about this new social reality. Or who need the rest of us to calm down. Or who think the kids are all right. Or who think the kids aren’t all right, but nevertheless insist that the kids’ dis-ease has little to nothing to do with being born and raised in Screentopia. Or who must needs concern-troll those of us who are alarmed for being too alarmed; for ascribing monocausal agency to screens and smartphones when what we’re dealing with is complex, multicausal, inscrutable, and therefore impossible to fix. (The speed with which the writer adverts to “can’t roll back the clock” or “the toothpaste ain’t going back in the tube” is inversely proportional to how seriously you have to take him.)

After all, our concern troll asks insouciantly, aren’t we—shouldn’t we be—worried about other things, too? About low birth rates? And low marriage rates? And kids not playing outside? And kids presided over by low-flying helicopter parents? And kids not reading? And kids not dating or driving or experimenting with risky behaviors? And kids so sunk in lethargy that they can’t be bothered to do anything for themselves?

Well—yes! We should be worried about all that; we are worried about it. These aren’t independent phenomena about which we must parcel out percentages of our worry. It’s all interrelated! Nor is anyone—not one person—claiming a totality of causal explanatory power for the invention of the iPhone followed immediately by mass immiseration. Nor still is anyone denying that parents and teachers and schools and churches are the problem here. It’s not a “gotcha” to counter that kids don’t have an issue with phones, parents do. Yes! Duh! Exactly! We all do! Bonnie Kristian is absolutely right: parents want their elementary and middle school–aged kids to have smartphones; it’s them you have to convince, not the kids. We are the problem. We have to change. That’s literally what Haidt et al. are saying. No one’s “blaming the kids.” We’re blaming what should have been the adults in the room—whether the boardroom, the PTA meeting, the faculty lounge, or the household. Having made a mistake in imposing this dystopia of screens on an unsuspecting generation, we would like, kindly and thank you please, to fix the problem we ourselves made (or, at least, woke up to, some of us, having not been given a vote at the time).

Here’s what I want to ask the tech concern trolls.

How many hours per day of private scrolling on a small glowing rectangle would concern you? How many hours per day indoors? How many hours per day on social media? How many hours per day on video games? How many pills to get to sleep? How many hours per night not sleeping? How many books per year not read? How many friends not made, how many driver’s licenses not acquired, how many dates and hangouts not held in person would finally raise a red flag?

Christopher Hitchens once wrote, “The North Korean state was born at about the same time that Nineteen Eighty-Four was published, and one could almost believe that the holy father of the state, Kim Il Sung, was given a copy of the novel and asked if he could make it work in practice.” A friend of mine says the same about our society and Brave New World. I expect people have read their Orwell. Have they read their Huxley, too? (And their Bradbury? And Walter M. Miller Jr.? And…?) Drugs and mindless entertainment to numb the emotions, babies engineered and produced in factories, sex and procreation absolutely severed, male and female locked in perpetual sedated combat, books either censored or an anachronistic bore, screens on every wall of one’s home featuring a kind of continuous interactive reality TV (as if Real Housewives, TikTok, and Zoom were combined into a single VR platform)—it’s all there. Is that the society we want? On purpose? It seems we’re bound for it as if our lives depended on it. Indeed, we’re partway there already. “Alarmists” and “Luddites” are merely the ones who see the cliff’s edge ahead and are frantically pointing at it, trying to catch everyone’s attention.

But apparently everyone else is having too much fun. Who invited these killjoys along anyway?


All together now: social media is bad for reading

A brief screed about what we all know to be true: social media is bad for reading.

We don’t have to mince words. We don’t have to pretend. We don’t have to qualify our claims. We don’t have to worry about insulting the youths. We don’t have to keep mum until the latest data comes in.

Social media, in all its forms, is bad for reading.

It’s bad for reading habits, meaning when you’re on social media you’re not reading a book. It’s bad for reading attention, meaning it shrinks your ability to focus for sustained periods of time while reading. It’s bad for reading desires, meaning it makes the idea of sitting down with a book, away from screens and images and videos and sounds, seem dreadfully boring. It’s bad for reading style, meaning what literacy you retain while living on social media is trained to like all the wrong things and to seek more of the same. It’s bad for reading ends, meaning you’re less likely to read for pleasure and more likely to read for strictly utilitarian reasons (including, for example, promotional deals and influencer prizes and so on). It’s bad for reading reinforcement, meaning like begets like, and inserting social media into the feedback loop of reading means ever more of the former and ever less of the latter. It’s bad for reading learning, meaning your inability to focus on dense, lengthy reading is an educational handicap: you quite literally will know less as a result. It’s bad for reading horizons, meaning the scope of what you do read, if you read at all, will not stretch across continents, cultures, and centuries but will be limited to the here and now, (at most) the latest faux highbrow novel or self-help bilge promoted by the newest hip influencers; social media–inflected “reading” is definitionally myopic: anti-“diverse” on principle. Finally, social media is bad for reading imitation, meaning it is bad for writing, because reading good writing is the only sure path to learning to write well oneself. Every single writing tic learned from social media is bad, and you can spot all of them a mile away.

None of this is new. None of it is groundbreaking. None of it is rocket science. We all know it. Educators do. Academics do. Parents do. As do members of Gen Z. My students don’t defend themselves to me; they don’t stick up for digital nativity and the wisdom and character produced by TikTok or Instagram over reading books. I’ve had students who tell me, approaching graduation, that they have never read a single book for pleasure in their lives. Others have confessed that they found a way to avoid reading a book cover to cover entirely, even as they got B’s in high school and college. They’re not proud of this. Neither are they embarrassed. It just is what it is.

Those of us who see this and are concerned by it do not have to apologize for it. We don’t have to worry about being, or being accused of being, Luddites. We’re not making this up. We’re not shaking our canes at the kids on the lawn. We’re not ageist or classist or generation-ist or any other nonsensical application of actual prejudices.

The problem is real. It’s not the only one, but it’s pressing. Social media is bad in general, it’s certainly bad for young people, and it’s unquestionably, demonstrably, and devastatingly bad for reading.

The question is not whether it’s a problem. The question is what to do about it.


How do you spend your time in the office?

Counting hours and organizing time in the office as a professor.

A couple years back I wrote a long series of posts reflecting on what it’s like to teach a 4/4 load, how to manage one’s time, how to make time for research, and so on. Recently a friend was mentally cataloguing how he spends his hours in the office, both for himself and for higher-ups. So I thought I’d work up a list of ways academics use their time in the office. I came up with twenty-five categories, though I’m sure I’ve overlooked some. The hardest part was thinking about scholars outside of the humanities (you know, people who work with “hard data” in the “lab” and create “spreadsheets” with “numbers”—things I’ve only ever heard of, never encountered in the wild). Here’s the list, with relevant glosses where necessary, followed by a few reflections and a breakdown of my own “office hours”:

  • Teaching (in the classroom or online)

  • Grading

  • Lesson prep

  • LMS management (think Blackboard, Canvas, etc.)

  • Writing

  • Reading

  • Supervising (a G.A. or doctoral student, or providing official hours toward licensure)

  • Experiments/lab work

  • Data collection/collation/analysis

  • Meetings with students

  • Meetings with colleagues

  • Emailing

  • Phone calls

  • Admin work

  • Committee work

  • Para-academic work (blind reviewer, organizing a conference panel, being interviewed on a podcast, etc.)

  • Vocational work (say, seeing clients as a therapist or making rounds at the hospital)

  • Church work (preparing a sermon or curriculum)

  • Family duties (paying bills, ordering gifts, taking the car to the shop, leaving early to pick up kids, staying home with a sick child)

  • Loafing (hallway chats, office drop-ins, breeze-shooting)

  • Eating (whether alone or with others)

  • Devotion (prayer, meditation, silence and solitude)

  • Social media (call this digital loafing)

  • Filler (walking, parking, shuttle, etc.)

  • Other (this is here, lol, so you don’t have to think about questions like “how much time per semester do I spend in the restroom?”)

Now suppose you are (a) full-time faculty with responsibilities in (b) teaching, (c) research, and (d) service, and that a typical work week is (e) Monday through Friday, 8:00am to 5:00pm, spent in an office. That comes to 45 hours. How do you spend it in a given semester? That’s a factual question. It’s paired with the aspirational: How do you wish you spent it?

Answers to both are going to vary widely and be dependent on discipline, institution, temperament, desire, will, gifts, talents, interests, and job description. The chair of a physics department with 2/2 teaching loads will spend her time differently than an English prof with a 4/4 load and no administrative duties. So on and so forth.

Here’s how my time breaks down in an average week this semester (numbers after each category are estimated average weekly hours spent on that activity):

  • Teaching – 9

  • Grading – 1

  • Lesson prep – 2-3

  • Writing – 3-6

  • Reading – 10-15

  • Meetings with students – 1

  • Meetings with colleagues – 0-1

  • Emailing – 3-5

  • Committee work – 0-1

  • Para-academic work – 0-1

  • Church work – 0-1

  • Family duties – 6

  • Eating – 1-2

  • Devotion – 1

  • Other – 1

  • LMS management, supervising, lab work, data collection, phone calls, social media, loafing, vocational work, admin work, filler – 0

Check my work, but I think the numbers add up: at a minimum, these activities claim 38 of my 45 hours, leaving roughly 7 unfixed hours; and since the variable ranges above could stretch by as much as 16 hours in total, how those leftover hours get apportioned varies with the week and what’s going on, what’s urgent, etc.
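
For the spreadsheet-averse, here is the same arithmetic as a few lines of Python, a back-of-the-envelope sketch with the (min, max) weekly ranges copied from the list above:

    # Estimated weekly office hours, as (min, max) ranges, from the list above.
    hours = {
        "Teaching": (9, 9),
        "Grading": (1, 1),
        "Lesson prep": (2, 3),
        "Writing": (3, 6),
        "Reading": (10, 15),
        "Meetings with students": (1, 1),
        "Meetings with colleagues": (0, 1),
        "Emailing": (3, 5),
        "Committee work": (0, 1),
        "Para-academic work": (0, 1),
        "Church work": (0, 1),
        "Family duties": (6, 6),
        "Eating": (1, 2),
        "Devotion": (1, 1),
        "Other": (1, 1),
    }

    week = 45  # Monday through Friday, 8:00am to 5:00pm
    floor = sum(lo for lo, _ in hours.values())       # hours spoken for at a minimum
    give = sum(hi - lo for lo, hi in hours.values())  # total stretch in the variable ranges
    print(floor, week - floor, give)                  # 38 7 16

Run it and the three numbers come out to 38, 7, and 16: the fixed minimum, the unfixed remainder of the 45-hour week, and the variable give, respectively.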

Now for commentary:

  • Note well what’s in the “zero hours” category: admin work, social media, supervising students of any kind (including in a lab), and LMS management. Anybody with duties or habits along these lines will have a seriously different allotment of hours than I do.

  • Notice what’s low in my weekly hours: email (which I wrote about yesterday), grading, meetings of any kind, and in general duties beyond teaching and research. On a good week, 30 of my 45 hours are spent teaching, reading, and writing. That’s not for everyone—nor is it a viable option for many—but it’s the result, among other things, of how I organize and discipline my time in the office. Teaching, reading, and writing are the priority. Everything else is secondary.

  • Well, not quite. Notice what’s (atypically?) high: family duties. I drop off our kids at school every morning, which means I sit down at my desk no earlier than 8:00am, sometimes closer to 8:30am. At least twice (often thrice) per week I leave around 3:00pm to pick them up, too. So if we’re thinking of the standard “eight to five office day,” that’s an average of at least five hours weekly that I’m not in the office. That’s not to mention dentist and doctor appointments, the flu, stomach bugs, Covid, choir performances, second-grade programs, and the rest. Not everyone has kids, and kids don’t remain little forever, but this is worth bearing in mind for many faculty!

  • (Addendum: I am—we are—fortunate to have an employer and a type of job that permit the sort of flexibility that doesn’t require us to pay for a babysitter or childcare after the school bell rings. That’s a whole different matter.)

  • Because I’m piloting a new class this semester, I’m teaching only three courses for a total of nine hours. Typically it’s four courses totaling twelve hours. In that case, in the absence of a new prep I wouldn’t have two to three hours weekly devoted to lesson planning. Once I’ve taught a class a few times, it’s plug and play. In other words, if this were describing my semester a year ago, teaching would have twelve hours next to it and lesson prep would have zero.

  • I should add, while I’m thinking of it: Sometimes my reading goes way down when a writing project is nearing a deadline or in full swing. Neither of those things is true at present, so I’m doing more reading than writing this semester. But sometimes the numbers flip-flop, and I’m writing 15+ hours weekly and reading only 3-5 hours, if that.

  • My institution requires generous office hours for students to come by and talk to their professors. I keep those hours but encourage students to set up meetings in advance, whether in person after class or by email. That way I won’t randomly be gone (at home with a sick kid, say, or in a committee meeting), and we can both be efficient with our time. I’ve mentored students in the past, which involved more of a weekly time commitment. At the moment I probably meet with two or three students per week, usually for 15-30 minutes each. But I know that not only here at ACU but elsewhere there are professors who give five, ten, or more hours each week to meeting with students. So, once again, there’s the question of decisions, priorities, institutional expectations, personal giftedness, and tradeoffs.

  • I’ve already mentioned two of the three great timesucks for professors: email and social media. Both threaten to rob academics of hours of time they could otherwise be using on what they love or, at least, what’s important. I gave my advice yesterday for how to resist the lure of the inbox. The cure for social media is simple: Just delete it. It can’t steal away your time if it quite literally does not exist for you.

  • The third great timesuck is loafing. This isn’t a temptation for academics alone, but for any and all office workers. Who wouldn’t prefer to shoot the breeze with coworkers in lieu of putting one’s nose to the grindstone? To me, this is purely and simply a personal decision. Loafing is not only fun, but life-giving. Many office jobs aren’t endurable without a healthy dose of loafing. You can often tell the relative health of an office by the nature and extent of its occupants’ shared loafing. So don’t hear me knocking loafing. It’s a kind of social nutrient for office work. In its absence, the human beings who make up an office can wither and die. Having said that, when I say it’s a “decision” I mean to say, on one hand, that it’s active, not passive (one has agency in loafing—a sentence I hope I’m the first to have written); and, on the other, that it involves tradeoffs. I’ve loafed less and less over the years for the plain reason that I realized what loafing I did inevitably meant less reading and writing (not, mind you, less busywork or email or grading or meetings: those, like death and taxes, are certain and unavoidable features of the academic life). And if the equation is that direct—less loafing, more research—then given my priorities and the tradeoffs involved, I decided to do my best to cut it out as much as I could.

  • Not much else to add, except that, when I can avoid them, I don’t do “work lunches.” I’m sure the portrait painted here is sounding increasingly, even alarmingly, antisocial; the truth, however, is that every hour (every minute!) counts in a job like this one, and you have to be ruthless to find—by which I mean, make—time for what you value. I like my job. I would read and write theology in my spare time if I weren’t a professor. So I don’t want to waste time in the office if I can help it.


Quit social porn

Samuel James is right: the social internet is a form of pornography. That means Christians, at least, should get off—now.

In the introduction to his new book, Digital Liturgies: Rediscovering Christian Wisdom in an Online Age, Samuel James makes a startling claim: “The internet is a lot like pornography.” He makes sure the reader has read him right: “No, that’s not a typo. I did not mean to say that the internet contains a lot of pornography. I mean to say that the internet itself—i.e., its very nature—is like pornography. There’s something about it that is pornographic in its essence.”

Bingo. This is exactly right. But let’s take it one step further.

A few pages earlier, James distinguishes the internet in general from “the social internet.” That’s a broader term for what we usually refer to as “social media.” Think not only Facebook, Twitter, Instagram, TikTok, et al., but also YouTube, Slack, Pinterest, Snapchat, Tumblr, perhaps even LinkedIn or Reddit and similar sites. In effect, any online platform that (a) “connects” strangers through (b) public or semi-public personal profiles via (c) proprietary algorithms using (d) slot-machine reward mechanisms that reliably alter one’s (e) habits of attention and (f) fame, status, wealth, influence, or “brand.” Almost always such a platform also entails (g) the curation, upkeep, reiteration, and perpetual transformation of one’s visual image.

This is the social internet. James is right to compare it to pornography. But he doesn’t go far enough. It isn’t like pornography. It’s a mode of pornography.

The social internet is social porn.

By the end of the introduction, James pulls his punch. He doesn’t want his readers off the internet. Okay, fine. I’m on the internet too, obviously—though every second I’m not on it is a second of victory I’ve snatched from defeat. But yes, it’s hard to avoid the internet in 2023. We’ll let that stand for now.

There is no good reason, however, to be on the social internet. It’s porn, after all, as we just established. Christians, at least, have no excuse for using porn. So if James and I are right that the social internet isn’t just akin to pornography but is a species of it, then he and I and every other Christian we know who cares about these things should get off the social internet right now.

That means, as we saw above, any app, program, or platform that meets the definition I laid out. It means, at a minimum, deactivating and then deleting one’s accounts with Facebook, Twitter, Instagram, and TikTok—immediately. It then means thinking long and hard about whether one should be on any para-social platforms like YouTube or Pinterest or Slack. Some people use YouTube rarely and passively, to watch the occasional movie trailer or live band performance, say, or how-to videos to help fix things around the house. Granted, we shouldn’t be too worried about that. But what about people who use it the way my students use it—as an app on their phone with an auto-populated feed they scroll just like IG or TT? Or what about active users and influencers with their own channels?

Get off! That’s the answer. It’s porn, remember? And porn is bad.

I confess I have grown tired of all the excuses for staying on the social internet. Let me put that differently: I know plenty of people who do not share my judgment that the social internet is bad, much less a type of porn. In that case, we lack a shared premise. But many people accept the premise; they might even go so far as to affirm with me that the social internet is indeed a kind of porn: just as addictive, just as powerful, just as malformative, just as spiritually depleting, just as attentionally sapping. (Such claims are empirical, by the way; I don’t consider them arguable. But that’s for another day.) And yet most of the people I have in mind, who are some of the most well-read and up-to-date on the dangers and damages of digital media, continue not only to maintain their social internet accounts but use them actively and daily. Why?

I’m at a point where I think there simply are no more good excuses. Alan Jacobs remarked to me a few years back, when I was wavering on my Twitter usage, that the hellsite in question was the new Playboy. “I subscribe for the articles,” you say. I’m sure you do. That might play with folks unconcerned by the surrounding pictures. For Christians, though, the jig is up. You’re wading through waist-high toxic sludge for the occasional possible potential good. Quit it. Quit the social internet. Be done with it. For good.

Unlike Lot’s wife, you won’t look back. The flight from the Sodom of the social internet isn’t littered with pillars of salt. The path is free and clear, because everyone who leaves is so happy, so grateful, the only question they ask themselves is what took them so long to get out.


A decision tree for dealing with digital tech

Is the digital status quo good? If not, our actions (both personal and institutional) should show it.

Start with this question:

Do you believe that our, and especially young people’s, relationship to digital technology (=smartphones, screens, the internet, streaming, social media) is healthy, functional, and therefore good as is? Or unhealthy, dysfunctional, and therefore in need of immediate and drastic help?

If your answer is “healthy, functional, and good as is,” then worry yourself no more; the status quo is A-OK. If you answered otherwise, read on.

Now ask yourself this question:

Do the practices, policies, norms, and official statements of my institution—whether a family, a business, a university, or a church—(a) contribute to the technological problem, (b) maintain the digital status quo, or (c) interrupt, subvert, and cut against the dysfunctional relationship of the members of my institution to their devices and screens?

If your answer is (a) or (b) and yet you answered earlier that you believe our relationship to digital technology is in serious need of help, then you’ve got a problem on your hands. If your answer is (c), then well done.

Finally, ask yourself this:

How does my own life—the whole suite of my daily habits when no one’s looking, or rather, when everyone is looking (my spouse, my roommate, my children, my coworkers, my neighbors, my pastors, and so on)—reflect, model, and/or communicate my most basic beliefs about the digital status quo? Does the way I live show others that (a) I am aware of the problem, (b) chiefly within myself, and (c) am tirelessly laboring to respond to it, to amend my ways and solve the problem? Or does it evince the very opposite, so that my life and my words are unaligned, even contradictory?

At both the institutional and the personal level, it seems to me that answering these questions honestly and following them to their logical conclusions—not just in our minds or with our words but in concrete actions—would clarify much about the nature of our duties, demands, and decisions in this area of life.


Quitting the Big Five

Could you quit all the companies that make up Silicon Valley’s Big Five? How hard would it be to reduce your footprint down to only one of them?

In a course I teach on digital tech and Christian practice, I walk through an exercise with students. I ask them to name the Big Five (or more) Silicon Valley companies that so powerfully define and delimit our digital lives. They can also name additional apps and platforms that take up time and space in their daily habits. I then ask them:

Supposing you continued to use digital technology—supposing, that is, you did not move onto a tech-free country ranch, unplugged from the internet and every kind of screen—how many of these Big Tech companies could you extract yourself from without serious loss? Put another way: what is the smallest number of these companies you need to live your life?

In my own life, I try to implement a modest version of this. I like to daydream, however, about a more radical version. Let me start with the former, then turn to the latter.

In my own life, here’s my current entanglement with the Big Tech firms:

Meta: Almost none (I don’t have a Facebook or Instagram account); the exception is WhatsApp, which is useful for international and other types of communication. Recently, though, I’ve been nudging those I talk to on WhatsApp to move to another app, so that I could quit Zuckerberg altogether.

Microsoft: I use Word (a lot) and PowerPoint (some) and Excel (a bit). Though I’m used to all three, I could live without them—although I’d have two decades’ worth of Word files I’d need to archive and/or convert.

Google: I’ve had the same Gmail account for fifteen years, so it would be a real loss to give it up. I don’t use Google Maps or any of Google’s other smartphone apps. I use Google Docs (etc.) a bit, mostly when others want to collaborate; I avoid it, though, and would not miss it.

Amazon: I’m an Amazon originalist: I use it for books. We pay for Prime. We also use it to buy necessities and gifts for our kids and others. For years I threw my body in front of purchasing an Alexa until my household outvoted me just this summer. Alas.

Apple: Here’s where they get me. I have an iPhone and a MacBook, and I finally gave in and started backing up with a paid account on iCloud. I use iPhoto and Messages and FaceTime and the rest. I’m sure my household will acquire an iPad at some point. In a word, I’m Apple-integrated.

Others: I don’t have TikTok or any other social media accounts. My household has a family Spotify account. I personally use Instapaper, Freedom, and Marco Polo. I got Venmo this summer, but I lived without it for a decade, and could delete it tomorrow. I use Dropbox as well as another online storage business. We have various streaming platforms, but they’ve been dwindling of late; we could live with one or two.

Caveat: I’m aware that digital entanglement takes more than one form, i.e., whether or not I have an Amazon or Gmail or Microsoft (or IBM!) “account,” I’m invariably interacting with, using, and possibly paying for their servers and services in a variety of ways without my even knowing it. Again, that sort of entanglement is unavoidable absent the (Butlerian/Benedictine) move to the wireless ranch compound. But I wanted to acknowledge my awareness of this predicament at least.

Okay. So what would it look like to minimize my formal Big Five “footprint”?

So far as I can see it, the answer is simple: Commit exclusively to one company for as many services as possible.

Now, this may be seriously unwise. Like a portfolio, one’s digital assets and services may be safest and best utilized when highly diversified. Moreover, it’s almost literally putting all one’s eggs in a single basket: what if that basket breaks? What if the one company you trust goes bust, or has its security compromised, or finds itself more loyal to another country’s interests than one’s own, so on and so forth?

All granted. This may be a foolish endeavor. That’s why I’m thinking out loud.

But supposing it’s not foolish, it seems to me that the simplest thing to do, in my case, would be to double down on Apple. Apple does hardware and software. They do online storage. They do TV and movies. They do music and podcasts. They’re interoperable. They have Maps and email and word processors and slideshows and the rest—or, if I preferred, I could always use third-party software for such needs (for example, I already use Firefox, not Safari or Chrome).

So what would it take, in my situation, to reduce my Big Tech footprint from five toes to three or two or even just one?

First, delete WhatsApp. Farewell, Meta!

Second, switch to Keynote and TextEdit (or Pages or Scrivener) and Numbers or some other spreadsheet alternative, or whatever other programs folks prefer. Adios, Microsoft!

Third, download my Gmail archive and create a new, private, encrypted account with a trusted service. Turn to DuckDuckGo with questions. Turn to Apple for directions. Avoid YouTube like the plague. Adieu, Google!

Fourth, cancel Prime, ditch the Alexa, use local outlets for shopping, and order books from Bookshop.org or IndieBound.org or directly from publishers and authors. Get thee behind me, Bezos!

Fifth and finally, pray to the ghost of Steve Jobs for mercy and beneficence as I enter his kingdom, a humble and obedient subject—bound for life…

Whether or not it would be wise, could I seriously do this? I’m sort of amazed at how not implausible it sounds. The hardest thing would be leaving Microsoft Word behind, just because I’ve never used anything else, and I write a lot. The second hardest would be losing the speed, cheapness, and convenience of Amazon Prime for ordering books—but then, that’s the decision that would be best for my soul, and for authors, and for the publishing industry in general. As for life without Gmail, that would be good all around, which is why it’s the step I’m most likely to follow in the next few years.

In any case, it’s a useful exercise. “We” may “need” these corporations, at least if we want to keep living digital lives. But we don’t need all of them. We may not even need more than one.


Tech bubble

From what I read online, I appear to live in a tech bubble: everyone’s addicted to it while knowing it’s bad. Are there really people who aren’t addicted? Are there really others who are addicted, but think it’s good?

Lately it’s occurred to me that I must live in an odd sort of tech bubble. It has two components.

On one hand, no one in my context (a medium-sized city in west Texas) lives in any way “free” from digital technology. Everyone has smartphones, laptops, tablets, and televisions with streaming apps. Most little kids have Kindles or iPads; most 10- to 12-year-olds have phones; nearly every middle schooler has a smartphone. Women are on Instagram and TikTok, men are on Twitter and YouTube. Boys of every age play video games (Switch, Xbox, PS5), including plenty of dads. Adults and kids alike are on their phones during church, during sporting events, during choir performances. Kids watch Disney+ and PBS Kids; parents watch Max and Netflix. Screens and apps, Amazon and Spotify, phones and tablets galore: this is just daily ordinary life. There are no radicals among us. No slices of life carved out. I don’t know anyone without a TV, much less without wireless internet. I don’t know anyone without a smartphone! Life in west Texas—and everywhere else I’m aware of, at least in the Bible Belt—is just like this. No dissenting communes. No screen-free spaces. I’m the campus weirdo for not permitting devices in my classroom, and doubly so for not using a Learning Management System. Nor am I some hard-edged radical. I’m currently typing on a MacBook, and when I leave my office, I’ll listen to an audiobook via my iPhone.

In other words, whenever anyone tells me that the world I’ve just described isn’t normal, isn’t typical, isn’t entrenched and established and nigh unavoidable—I think, “Okay, we simply live in different worlds. I’d like to come tour yours. I’ve not seen it with my own eyes before.” I’m open to being wrong. But I admit to some measure of skepticism. In a nation of 330 million souls, is it meaningful to point to this or that solitary digital experimenter as a site of resistance? And won’t they capitulate eventually anyway?

But maybe not. What do I know?

Here’s the other hand, though. Everyone I know, tech-addled and tech-saturated though they be, everyone agrees that digital technology and social media are a major problem, perhaps the most significant social challenge, facing all of us and especially young people today. No one thinks it’s “no big deal.” No one argues that their kids vegging out on video games all day does nothing to their brains. No one pretends that Instagram and TikTok and Twitter are good for developing adolescents. No one supposes that more screen time is better for anyone. They—we—all know it’s a problem. They—we—just aren’t sure what to do about it. And since it seems such an enormously complex and massive overarching matrix, by definition a systemic problem calling for systemic solutions, mostly everyone just keeps on with life as it is. A few of us try to do a little better: quantifying our kids’ screen time; deleting certain apps; resisting the siren song of smartphones for 12-year-olds. But those are drops in the bucket. No one disputes the nature or extent of the problem. It’s just that no one knows how to fix it; or at least no one has the resolve to be the one person, the one household, in a city of 120,000 to say No! to the whole shebang. And even if there were such a person or household, they’d be a one of one. An extraordinary exception to the normative and unthreatened rule.

And yet. When I read online, I discover that there are people—apparently not insignificant in number?—who do not take for granted that the ubiquity and widespread use of social media, screens, and personal devices (by everyone, but certainly by young people) is a bad thing. In fact, these people rise in defense of Silicon Valley’s holy products, so much so that they accuse those of us worried about them of fostering a moral panic. Any and all evidence of the detrimental effects of teenagers being online four, six, eight hours per day is discounted in advance. It’s either bad data or bad politics. Until very recently I didn’t even realize, naive simpleton that I am, that worrying about these things was politicized. That apparently you out yourself as a reactionary if … checks notes … you aren’t perfectly aligned with the interests of trillion-dollar multinational corporations. That it’s somehow right-wing, rather than common-sense, to want children and young people to move their bodies, to be outdoors, to talk to one another face to face, to go on dates, to get driver’s licenses, to take road trips, to see concerts, to star gaze, to sneak out at night(!), to go to restaurants, to go to parks, to go on walks, to read novels they hold in their hands, to look people in the eye, to play the guitar, to go camping, to visit national parks, to play pick-up basketball, to mow the yard, to join a protest march, to tend a garden, to cook a meal, to paint, to leave the confines of their bedrooms and game rooms, to go to church, to go on a picnic, to have a first kiss—must I go on? No, because everyone knows these are reasonable things to want young people to do, and to learn to do, and even (because there is no other way) to make mistakes and take real risks in trying to learn to do. I know plenty of conservatives and plenty of progressives and all of them, not an exception among them, want their kids off social media, off streaming, off smartphones—on them, at a minimum, much less—and want them instead to do something, anything, out there in the bright blue real world we all share and live in together.

I must allow the possibility, however, that I inhabit a tech bubble. There appear to be other worlds out there. The internet says so. In some of them, I’m told, there are tech-free persons, households, and whole communities enjoying life without the tyrannous glare of the Big Five Big Brother staring back at them through their devices. And in other worlds, running parallel to these perhaps, tech is as omnipresent as it is in my neck of the woods, yet it is utterly benign, liberating, life-giving, and above all enhancing of young people’s mental health. The more screens the better, in that world. To know this is to be right-thinking, which is to say, left-thinking: enlightened and progressive and educated. To deny it is right-thinking in the wrong sense: conservative and benighted and backwards.

Oh, well. Perhaps I’ll visit one of these other worlds someday. For the time being, I’m stuck in mine.


The take temptation


There is an ongoing series of essays being slowly published in successive issues of The New Atlantis I want to commend to you. They’re by Jon Askonas, a friend who teaches politics at Catholic University of America. The title for the series as a whole is “Reality: A Post-Mortem.” The essays are a bit hard to describe, but they make for essential reading. They are an attempt to diagnose the root causes of, and the essential character of, the new state of unreality we find ourselves inhabiting today. The first, brief essay lays out the vision for the series. The second treats the gamified nature of our common life, in particular its analogues in novels, role-playing games, and alternate reality games (ARGs). The latest essay, which just arrived in my mailbox, is called “How Stewart Made Tucker.” Go read them all! (And subscribe to TNA, naturally. I’ve got an essay in the latest issue too.)

For now, I want to make one observation, drawing on something found in essay #2.

Jon writes (in one of a sequence of interludes that interrupt the main flow of the argument):

Several weeks have gone by since you picked your rabbit hole [that is, a specific topic about which there is much chatter but also much nonsense in public discourse and social media]. You have done the research, found a newsletter dedicated to unraveling the story, subscribed to a terrific outlet or podcast, and have learned to recognize widespread falsehoods on the subject. If your uncle happens to mention the subject next Thanksgiving, there is so much you could tell him that he wasn’t aware of.

You check your feed and see that a prominent influencer has posted something that seems revealingly dishonest about your subject of choice. You have, at the tip of your fingers, the hottest and funniest take you have ever taken.

1. What do you do?

a. Post with such fervor that your followers shower you with shares before calling Internet 911 to report an online murder.

b. Draft your post, decide to “check” the “facts,” realize the controversy is more complex than you thought, and lose track of real work while trying to shoehorn your original take into the realm of objectivity.

c. Private-message your take, without checking its veracity, to close friends for the laughs or catharsis.

d. Consign your glorious take to the post trash can.

2. How many seconds did it take you to decide?

3. In however small a way, did your action nudge the world toward or away from a shared reality?

Let’s call this gamified reinforcement mechanism “the take temptation.” It amounts to the meme-ification of our common life and, therefore, of the common good itself. Jon writes earlier in the essay, redescribing the problem behind the problem:

We hear that online life has fragmented our “information ecosystem,” that this breakup has been accelerated by social division, and vice versa. We hear that alienation drives young men to become radicalized on Gab and 4chan. We hear that people who feel that society has left them behind find consolation in QAnon or in anti-vax Facebook groups. We hear about the alone-togetherness of this all.

What we haven’t figured out how to make sense of yet is the fun that many Americans act like they’re having with the national fracture.

Take a moment to reflect on the feeling you get when you see a headline, factoid, or meme that is so perfect, that so neatly addresses some burning controversy or narrative, that you feel compelled to share it. If it seems too good to be true, maybe you’ll pull up Snopes and check it first. But you probably won’t. And even if you do, how much will it really help? Everyone else will spread it anyway. Whether you retweet it or just email it to a friend, the end effect on your network of like-minded contacts — on who believes what — will be the same.

“Confirmation bias” names the idea that people are more likely to believe things that confirm what they already believe. But it does not explain the emotional relish we feel, the sheer delight when something in line with our deepest feelings about the state of the world, something so perfect, comes before us. Those feelings have a lot in common with how we feel when our sports team scores a point or when a dice roll goes our way in a board game.

It’s the relish of the meme, the fun of the hot take—all while the world burns—that Jon wants us to see so that he, in turn, can explain it. I leave the explanation to him. For my part, I’m going to do a bit of moralizing, aimed at myself first but offered here as a bit of stern encouragement to anyone who’s apt to listen.

The moral is simple: The take temptation is to be resisted at all costs, full stop. The take-industrial complex is not a bit of fun at the expense of others. It’s not a victimless joke. It is nothing less than your or my small but willing participation in unraveling the social fabric. It is the false catharsis that comes from treating the goods in common we hope to share as a game, to be won or lost by cheap jokes and glib asides. Nor does it matter if you reserve the take or meme for like-minded friends. In a sense that’s worse. The tribe is thereby reinforced and the Other thereby rendered further, stranger, more alien than before. You’re still perpetuating the habit to which we’re all addicted and from which we all need deliverance. You’re still feeding the beast. You’re still heeding the sly voice of the tempter, whose every word is a lie.

The only alternative to the take temptation is the absolutely uncool, unrewarding, and unremunerative practice of charity for enemies, generosity of spirit, plainness of prose, and perfect earnestness in argument. The lack of irony is painful, I know; the lack of sarcasm, boring; the lack of grievance, pitiful. So be it. Begin to heal the earth by refusing to litter; don’t wish the world rid of litter while tossing a Coke can out the window.

This means not reveling in the losses of your enemies, which is to say, those friends and neighbors for whom Christ died with whom you disagree. It means not joking about that denomination’s woes. It means not exaggerating or misrepresenting the views of another person, no matter what they believe, no matter their character, no matter who they are. It means not pretending that anyone is beyond the pale. It means not ridiculing anyone, ever, for any reason. It means, practically speaking, not posting a single word to Twitter, Instagram, Facebook, or any other instrument of our digital commons’ escalating fracture. It means practicing what you already know to be true, which is that ninety-nine times out of one hundred, the world doesn’t need to know what you think, when you think it, by online means.

The task feels nigh impossible. But resistance isn’t futile in this case. Every minor success counts. Start today. You won’t be sorry. Nor will the world.


Six months without podcasts

Last September I wrote a half-serious, half-tongue-in-cheek post called “Quit Podcasts.” There I followed my friend Matt Anderson’s recommendation to “Quit Netflix” with the even more unpopular suggestion to quit listening to podcasts. As I say in the post, the suggestion was two-thirds troll, one-third sincere. That is, I was doing some public teasing, poking the bear of everyone’s absolutely earnest obsession with listening to The Best Podcasts all day every day. Ten years ago, in a group of twentysomethings, the conversation would eventually turn to what everyone was watching. These days, in a group of thirtysomethings, the conversation inexorably turns to podcasts. So yes, I was having a bit of fun.

But not only fun. After 14 years of listening to podcasts on a more or less daily basis, I was ready for something new. Earlier in the year I’d begun listening to audiobooks in earnest, and in September I decided to give up podcasts for audiobooks for good—or at least, for a while, to see how I liked it. Going back and forth between audiobooks and podcasts had been fine, but when the decision is between a healthy meal and a candy bar, you’re usually going to opt for the candy bar. So I cut out the treats and opted for some real food.

That was six months ago. How’s the experiment gone? As well as I could have hoped for. Better, in fact. I haven’t missed podcasts once, and it’s been nothing but a pleasure making time for more books in my life.

Now, before I say why, I suppose the disclaimer is necessary: Am I pronouncing from on high that no one should listen to podcasts, or that all podcasts are merely candy bars, or some such thing? No. But: If you relate to my experience with podcasts, and you’re wondering whether you might like a change, then I do commend giving them up. To paraphrase Don Draper, it will shock you how much you won’t miss them, almost like you never listened to them in the first place.

So why has it been so lovely, life sans pods? Let me count the ways.

1. More books. In the last 12 months I listened to two dozen works of fiction and nonfiction by C. S. Lewis and G. K. Chesterton alone. Apart from the delight of reading such wonderful classics again, what do you think is more enriching for my ears and mind? Literally any podcast produced today? Or Lewis/Chesterton? The question answers itself.

2. Not just “more” books, but books I wouldn’t otherwise have made the time to read. I listened to Fahrenheit 451, for example. I hadn’t read it since middle school. I find that I can’t do lengthy, complex, new fiction on audio, but if it’s a simple story, or on the shorter side, or one whose basic thrust I already understand, it goes down well. I’ve been in a dystopian mood lately, and felt like revisiting Bradbury, Orwell, Huxley, et al. Yet with a busy semester, sick kids, and long evenings, finding snatches of time in which to get a novel in can be difficult. But I always have to clean the house and do the dishes. Hey presto! Done and done. Many birds with one stone.

3. Though I do subscribe to Audible (for a number of reasons), I also use Libby, which is a nice way to read/listen to new books without buying them. That’s what I did with Oliver Burkeman’s Four Thousand Weeks—another book that works well on audio. I’ve never been much of a local library patron, except for using university libraries for academic books. This is one way to patronize my town’s library system while avoiding spending money I don’t have on books I may not read anytime soon.

4. I relate to Tyler Cowen’s self-description as an “infovore.” Ever since I was young I have wanted to be “in the know.” I want to be up to date. I want to have read and seen and heard all the things. I want to be able to remark intelligently on that op-ed or that Twitter thread or that streaming show or that podcast. Or, as it happens, that unprovoked war in eastern Europe. But it turns out that Rolf Dobelli is right. I don’t need to know any of that. I don’t need to be “in the know” at all. Seven-tenths is evanescent. Two-tenths is immaterial to my life. One-tenth I’ll get around to knowing at some point, though even then I will, like everyone else, overestimate its urgency.

That’s what podcasts represent to me: either junk food entertainment or substantive commentary on current events. To the extent that that is what podcasts are, I am a better person—less anxious, more contemplative, more thoughtful, less showy—for having given them up.

Now, does this description apply to every podcast? No. And yet: Do even the “serious” podcasts function in this way more often than we might want to admit? Yes.

In any case, becoming “news-resilient,” to use Burkeman’s phrase, has been one of the best decisions I’ve made in a long time. My daily life is not determined by headlines—print, digital, or aural. Nor do I know what the editors at The Ringer thought of The Batman, or what Ezra Klein thinks of Ukraine, or what the editors at National Review think of Ukraine. The truth is, I don’t need to know. Justin E. H. Smith and Paul Kingsnorth are right: the number of people who couldn’t locate Ukraine on a map six weeks ago who are now Ukraine-ophiles with strong opinions about no-fly zones and oil sanctions would be funny, if the phenomenon of which they are a part weren’t so dangerous.

I don’t have an opinion about Ukraine, except that Putin was wrong to invade, is unjust for having done so, and should stop immediately. Besides praying for the victims and refugees and for an immediate cessation to hostilities, there is nothing else I can do—and I shouldn’t pretend otherwise. That isn’t a catchall prohibition, as though others should not take the time, slowly, to learn about the people of Ukraine, Soviet and Russian history, etc., etc. Anyone who does that is spending their time wisely.

But podcasts ain’t gonna cut it. Even the most sober ones amount to little more than propaganda. And we should all avoid that like the plague, doubly so in wartime.

The same goes for Twitter. But then, I quit that last week, too. Are you sensing a theme? Podcasts aren’t social media, but they aren’t not social media, either. And the best thing to do with all of it is simple.

Sign off.


The meta mafia

Here’s something Mark Zuckerberg said during his indescribably weird announcement video hawking Facebook’s shift into the so-called metaverse:

There are going to be new ways of interacting with devices that are much more natural. Instead of typing or tapping, you’re going to be able to gesture with your hands, say a few words, or even just make things happen by thinking about them. Your devices won’t be the focal point of your attention anymore. Instead of getting in the way, they’re going to give you a sense of presence in the new experiences that you’re having and the people who you’re with. And these are some of the basic concepts for the metaverse.

There are many things to say about this little snippet—not to mention the rest of his address. (I’m eagerly awaiting the 10,000-word blog-disquisition on the Zuck’s repeated reference to “presence” in contradistinction to real presence.) But here’s the main thing I want to home in on.

In our present practice, per the CEO of Facebook-cum-Meta, devices have become the “focal point” of our “attention.” Whereas in the future being fashioned by Meta, our devices will no longer “get in the way” of our “presence” to one another. Instead, those pesky devices now out of the way, the obstacles will thereby be cleared for the metaverse to facilitate, mediate, and enhance the “sense of presence” we’ve lost in the device-and-platform obsessions of yesteryear.

Now suppose, as a thought experiment, that an anti–Silicon Valley Luddite wanted to craft a statement and put it in the mouth of someone who stands for all that’s wrong with the digital age and our new digital overlords—a statement rife with subtle meanings legible only to the initiated, crying out for subtextual Straussian interpretation. Would one word be different in Zuckerberg’s script?

To wit: a dialogue.

*

Question: Who introduced those ubiquitous devices, studded with social media apps, that now suck up all our attention, robbing us of our sense of presence?

Answer: Steve Jobs, in cahoots with Mark Zuckerberg, circa 15 years ago.

Question: And what is the solution to the problem posed by those devices—a problem ranging from partisan polarization to teen addiction to screens to social anomie to body image issues to political unrest to reduced attention spans to diffuse persistent low-grade anxiety to massive efforts at both disinformation and censorship—a problem, recall, introduced by (among others) Mr. Zuckerberg, which absorbs and deprives us all, especially young people, of the presence and focus necessary to inhabit and navigate the real world?

Answer: The metaverse.

Question: And what is that?

Answer: An all-encompassing digital “world” available via virtual reality headsets, in which people, including young people, may “hang out” and “socialize” (and “teleport” and, to use the Zuck’s go-to weasel word, “connect”) for hours on end with “avatars” of strangers that look like nothing so much as gooey Minecraft simulacra of human beings.

Question: And who, pray tell, is the creator of the metaverse?

Answer: Why, the creator of Facebook: Mark Zuckerberg, CEO of Meta.

Question: So the problem with our devices is not that they absorb our attention, but that they fall short of absolute absorption?

Answer: Exactly!

Question: So the way the metaverse resolves the problem of our devices’ being the focal point of our attention is by pulling us into the devices in order to live inside them? So that they are no longer “between” or “before” us but around us?

Answer: Yes! You’re getting closer.

Question: Which means the only solution to the problem of our devices is more and better devices?

Answer: Yes! Yes! You’re almost there!

Question: Which means the author of the problem is the author of the solution?

Answer: You’ve got it. You’re there.

Question: In sum, although the short-sighted and foolish-minded among us might suppose that the “obvious solution” to the problem of Facebook “is to get rid of Facebook,” the wise masters of the Valley propose the opposite: “Get rid of the world”?

Answer: This is the way. You’ve arrived. Now you love Big Zuckerberg.

*

Lest there be any doubt, there is a name for this hustle. It’s a racket. When a tough guy smashes the windows of your shop, and then one of his friends comes by proposing to protect you from local rowdy types, only (it goes without saying) for a weekly payment under the table, you know what’s happening. You’re not grateful; you don’t welcome the proposal. It’s the mafia. And mob protection is fake protection. It’s not the real thing. The threat and the fix wear the same face. Maybe you aren’t in a position to decline, but either way you’re made to be a fool. Because the humiliation is the point.

Mark Zuckerberg is looking us in the eyes and offering to sell us heroin to wean us off the cocaine he sold us last year.

It’s a sham; it’s a racket; it’s the mob; it’s a dealer.

Just say no, y’all.

Just. Say. No.


Twitter loci communes

One of these days Twitter will be no more. Or at least my Twitter account. Whether that future is distant or near, it will happen. I stopped actively tweeting or even retweeting anything beyond links to my published work a couple months into the pandemic. That was after resuming “normal” Twitter activity following a self-imposed months-long hiatus. During that whole period of time, across the last two years or so, I’ve seriously contemplated deleting my account more than once. I’ve come very close. But I haven’t quite been able to quit having an account, even if I’ve successfully quit replying to mentions, liking tweets, retweeting, “engaging” in “the discourse,” etc. I also don’t scroll the feed—ever. I spend 5-15 minutes per day on Twitter, by which I mean, unless I’m sharing a new publication, I check the same 3-6 writers’ accounts the way I “follow” RSS feeds on Feedly. I’m more or less happy with my Twitter usage, then, though I continue to think the platform the purest of poisons on our common life. If I had a button to destroy it tomorrow, I’d press it in a heartbeat.

So. Since I’m not “on” Twitter in a strong way anymore, and since I’m confident neither the site nor my account is long for this world, and since before I stopped being an active user I had some pleasant conversations and wrote a few fun threads, I’ve been thinking about how to maintain, or transmit, some of that. Here’s my answer.

Twitter loci communes.

The Latin means “common places.” What I’m going to do on this blog, intermittently and with no plan of action, is reproduce topics and threads and lines of thought I developed on Twitter sometime in the years since I created an account in 2013. Not by embedding the tweets but simply by copying and pasting them here, either as normal prose or in block quotes.

In fact, I’ve already done that twice: earlier this year in response to ACU’s upset of UT in March Madness and a couple weeks back on the feast of St. Monica. We’ll count those as TLC #1 and #2. Next will be #3, whatever and whenever that may be. But I’ll link back to this page for future posts so that folks know what it is, and I’ll tag all TLC posts (including those two retroactive ones) as such, so anybody who’s interested—all two dozen of you—can track them down.

Twitter may not be all evil; perhaps it’s only 99%. This little side project is a way of preserving the 1%, if only for myself.


Charity

What if, when a person you are reading or listening to states a conviction or comes to a conclusion with which you disagree, your first thought were not that such a person must, by necessary consequence, be wicked, stupid, cruel, incurious, unserious, or otherwise worthy of public censure and ridicule?

What if, when disagreement obtains between persons or groups, we understood that disagreement to be neither absolute nor permanent nor exclusive of friendship, neighborliness, mutual respect, and generosity?

What would happen if we all acted on what we already know to be true, namely, that social media—Twitter above all—is inhospitable to reasoned discourse and charitable interpretation? that it is not a sounding board for honest reflection but a storehouse of mental waste, emotional disquiet, and psychic poison? that irony and mockery are not bugs accidental to the system, but features endemic to it? that every second spent on it is invariably a malformation of one’s mind, heart, soul, and habits of attention? that the only worthwhile thing to do with Twitter et al is not “be a better user” but blast it into the sun?


Twitter, Twitter, Twitter

I've written at length on this blog about my relationship to digital technology in general and to Twitter in particular. I deleted my Facebook account. I'm not on Instagram, Snapchat, TikTok, Tumblr, or anything else. I'm part of one Slack channel, which isn't work-related and is quite life-giving, but I regulate my time on there nonetheless. I use Freedom to block access to the internet for large stretches of the day. The only remaining social media platform in my life—beyond the evils of YouTube and Google, from which I hope to find a way to extricate myself sooner rather than later—is Twitter.

I gave Twitter up for two months last fall, and it was great. But I was persuaded to return, at least in modest ways, given the relationships and connections I'd developed using it. Then Covid hit, and I ditched the rules and was on it far too much from spring break to Memorial Day. But these last two months I have come round to the same conclusion that instigated my initial dropout: it can't be saved. It can't be redeemed. It's purest poison. It cannot be defanged; it only lies in wait. But dormancy is not safety. Twitter is a cancer and the only path forward is digital chemotherapy.

So I stopped liking, retweeting, or tweeting (with a couple small exceptions). I got off for all but 10-15 minutes per day. That's my happy new normal. Here are the steps I plan to take in the coming weeks.

First, to delete all my previous likes and retweets, and most or all of my tweets.

Second, to use it only as a kind of RSS feed for a handful of writers I follow. So that means no scrolling on the Home page, only going to specific profiles and reading their tweets or following their links. But still, no more than a dozen minutes a day, tops.

Third, in terms of my own profile and usage, I will cease liking or replying to others. I will use Twitter henceforth exclusively as a "public facing" repository of links to things I've written, or plainly worded professional information. I'm going to leave my account up—for now—as a one-stop-shop that makes it easy to find me, my work, my blog, my contact info. In other words, I want my Twitter presence to be uniformly boring. I want to be a bad follow.

Fourth, however, I am going to think hard about deleting my account once and for all. I may deactivate for the month of August or September to see how it feels. I've yet to make a final decision about that. If it is true, as I say, that Twitter is a poison and a cancer, then even a boring links-heavy profile like mine keeps the poison in circulation. I don't want to contribute to that. I want to keep contracting my digital footprint until it comprises nothing but (a) what I've written in formal venues and (b) online space I own.

So we'll see how it goes. Part of this footprint-contraction plan entails more blogging: instead of emailing, texting, or Slack-commenting my ideas and observations, I want to write them out here. That's the goal, at least. I'll check in with reports and reflections as the plan is executed or, as the case may be, aborted or audibled.

In any case, if this short post was too long and you didn't read: Get off Twitter. It's from the evil one.

On blissful ignorance of Twitter trends, controversies, beefs, and general goings-on

Being off Twitter continues to be good for my soul as well as my mind, and one of the benefits I'm realizing is the ignorance that comes as a byproduct. By which I mean, ignorance not in general or of good things but of that which it is not beneficial to know.

When you're on Twitter, you notice what is "trending." This micro-targeted algorithmic function shapes your experience of the website, the news, the culture, and the world. Even if it were simply a reflection of what people were tweeting about the most, it would still be random, passing, and mass-generated. Who cares what is trending at any one moment?

More important, based on the accounts one follows, there is always some tempest in a teacup brewing somewhere or other. A controversy, an argument, a flame war, a personal beef: whatever its nature, the brouhaha exerts a kind of gravitational pull, sucking us poor online plebs into its orbit. And because Twitter is the id unvarnished, the kerfuffle in question is usually nasty, brutish, and unedifying. Worst of all, this tiny little momentary conflict warps one's mind, as if anyone cares except the smallest of online sub-sub-sub-sub-sub-sub-sub-"communities." For writers (journalists and academics above all), these Twitter battles start to take up residence in the skull, as if they were not only real but vital and important. Articles and essays are written about them; sometimes they are deployed (with earnest soberness) as a synecdoche for cultural skirmishes to which they bear only the most tangential, and certainly no causal, relationship.

As it turns out, when you are ignorant of such things, they cease to weigh down your mind in any way, because they might as well not have happened. (If a tweet is dunked on but no one sees it, did the dunking really occur?) And this is all to the good, because 99.9% of the time, what happens on Twitter (a) stays on Twitter and (b) has no consequences—at least for us ordinary folks—in the real world. Naturally, I'm excluding, say, tweets by the President or tweets that will get one fired. (Though those examples are just more reasons not to be on Twitter: I suppose if all such reasons were written down even the whole world would not have room for the books that would be written.) What I mean is: the kinds of seemingly intellectually interesting tweet-threads and Twitter-arguments are almost never (possibly never-never) worth attending to in the moment.

Why? First, because they're usually stillborn: best not to have read them in the first place; there is always a better use of one's time. Second, because, although they feel like they are setting the terms of this or that debate, they are typically divorced from said debate, or merely symptoms of it, or just reflections of it: but in most cases, not where the real action is happening. Third, because if they're interesting enough—possibly even debate-setting enough—their author will publish them in an article or suchlike that will render redundant the original source of the haphazard thoughts that are now well organized and digestible in an orderly sequence of thought. Fourth and finally, because if a tweet or thread is significant enough (this is the 0.1% left over from above), someone will write about it and make known to the rest of us why it is (or, as the case may be, is actually not) important. In this last case, there is a minor justification for journalists not to delete their Twitter accounts; though the reasons for deletion are still strong, they can justify their use of the evil website (or at least spending time on it: one can read tweets without an account). For the rest of us, we can find out what happened on the hellscape that is Twitter in the same way we get the rest of our news: from reputable, established outlets. And not by what's trending at any one moment.

For writers and academics, the resulting rewards are incomparable. The time-honored and irrefutable wisdom not to read one's mentions—corrupting the mind, as it does, and sabotaging good writing—turns out to have broader application. Don't just avoid reading your mentions. Don't have mentions to read in the first place.

Are there good reasons to stay on Twitter?

Earlier this week Nolan Lawson wrote a brief post exhorting people to get off Twitter. He opens it by saying, "Stop complaining about Twitter on Twitter. Deny them your attention, your time, and your data. Get off of Twitter. The more time you spend on Twitter, the more money you make for Twitter. Get off of Twitter."

Alan Jacobs picked up on this post and wrote in support: "The decision to be on Twitter (or Facebook, etc.) is not simply a personal choice. It has run-on effects for you but also for others. When you use the big social media platforms you contribute to their power and influence, and you deplete the energy and value of the open web. You make things worse for everyone. I truly believe that. Which is why I’m so obnoxiously repetitive on this point."

I've written extensively about my own habits of technology and internet discipline. I deleted my Facebook account. I don't have any social media apps on my iPhone; nor do I even have access to email on there. I use it for calls, texts, podcasts, pictures of my kids (no iCloud!), directions, the weather, and Instapaper. I use Freedom to eliminate my access to the internet, on either my phone or my laptop, for 3-4 hours at a time, two to three times a day. I don't read articles or reply to emails until lunch time, then hold off until end of (work) day or end of (actual) day—i.e., after the kids go to bed. I'm not on Instagram or Snapchat or any of the new social media start-ups.

So why am I still on Twitter? I'm primed to agree with Lawson and Jacobs, after all. And I certainly do agree, to a large extent: Twitter is a fetid swamp of nightmarish human interaction; a digital slot machine with little upside and all downside. I have no doubt that 90% of people on Twitter need to get off entirely, and 100% of people on Twitter should use it 90% less than they do. Twitter warps the mind (journalism's degradation owes a great deal to @Jack); it is unhealthy for the brain and damaging for the soul. No one who deleted their Twitter account would become a less well-rounded, mentally and emotionally and spiritually fulfilled person.

So, again: Why am I still on Twitter? Are there any good reasons to stay?

For me, the answer is yes. The truth is that for the last 3 years (the main years of my really using it) my time on Twitter has been almost uniformly positive, and there have been numerous concrete benefits. At least for now, it's still worth it to me.

How has that happened? Partly I'm sure by dumb luck. Partly by already having instituted fairly rigorous habits of discipline (it's hard to fall into the infinite scroll if the scroll is inaccessible from your handheld device! And the same goes for instant posting, or posting pictures directly from my phone, which I can't do, or for getting into flame wars, or for getting notifications on my home screen, which I don't—since, again, it's not on my phone, and my phone is always (always!) on Do Not Disturb and Silent and, if I'm in the office, on Airplane Mode; you get it now: the goal is to be uninterrupted and generally unreachable).

Partly it's my intended mode of presence on Twitter: Be myself; don't argue about serious things with strangers; only argue at all if the other person is game, the topic is interesting, and the conversation is pleasant or edifying or fun; always think, "Would my wife or dad or best friend or pastor or dean or the Lord Jesus himself approve of this tweet?" (that does away with a lot of stupidity, meanness, and self-aggrandizement fast). As a rule, I would like for people who "meet" me on Twitter to meet me in person and find the two wholly consonant. Further, I try hard never to "dunk" on anyone. Twitter wants us to be cruel to one another: why give in?

I limit my follows fairly severely: only people I know personally, or read often, or admire, or learn something from, or take joy in following. For as long as I'm on Twitter I would like to keep my follows between 400 and 500 (kept low through annual culling). The moment someone who follows me acts cruelly or becomes a distraction, to myself or others, I immediately mute them (blocks are reserved, for now, for obvious bots). I don't feel compelled to respond to every reply. And I tend to "interface" with Twitter not through THE SCROLL but through about a dozen bookmarked profiles of people, usually writers or fellow academics, who always have interesting things to say or post links worth saving for later. All in all, I try to limit my daily time on Twitter to 10-30 minutes, less on Saturdays and (ordinarily, or aspirationally) zero on Sundays—at least so long as the kids are awake.

So much for my rules. What benefits have resulted from being on Twitter?

First, it appears that I have what can only be called a readership. Even if said readership comprises "only" a few hundred folks (I have just over a thousand followers), that number is greater than zero, which until very recently was the number of my readers not related to me by blood. And until such time (which will be no time) that I have thousands upon tens of thousands of readers—nay, in the millions!—it is rewarding and meaningful to interact with people who take the time to read, support, share, and comment on my work.

(That raises the question: What happens should the time actually come, as I'm sure it will, when I am bombarded by trolls and the rank wickedness that erupts from the bowels of Twitter Hell for so many people? I will take one of two courses of action. First, I will adopt the policy of not reading my replies, as wise Public People do. But if that's not good enough, that will be the day, the very day, that I quit Twitter for good. And perhaps Lawson and Jacobs both arrived at that point long ago, which launched them off the platform. If so, good for them.)

Additionally, I have made contacts with a host of people across the country (and the world) with whom I share some common interest, not least within the theological academy. Some of these have become, or are fast becoming, genuine friendships. And because we theologians find reasons to gather together each year (AAR/SBL, SCE, CSC, etc.), budding online friendships actually generate in-person meetings and hangouts. Real life facilitated by the internet! Who would've thought?

I have also received multiple writing opportunities simply in virtue of being on Twitter. Those opportunities came directly or indirectly from embedding myself, even if (to my mind) invisibly, in networks of writers, editors, publishers, and the like. (I literally signed a book contract last week thanks to an email from an editor who found me on Twitter through some writing and tweeting I'd done.) As I've always said, academic epistemology is grounded in gossip, and gossip (of the non-pejorative kind) depends entirely on who you know. The same goes for the world of publishing. And since writers and editors love Twitter—doubtless to their detriment—Twitter's the place to be to "hang around" and "hear" stuff, and eventually be noticed by one or two fine folks, and be welcomed into the conversation. That's happened to me already, in mostly small ways; but they add up.

So that's it, give or take. On a given week, I average 60-90 minutes on Twitter spread across 5-6 days, mostly during lunch or early evening hours, on my laptop, never on my phone, typically checking just a handful of folks' profiles, sending off a tweet or two myself, never battling, never feeding the trolls, saving my time and energy for real life (home, kids, church, friends) and for periods of sustained, undistracted attention at work, whether reading or writing.

Having said that, if I were a betting man, I would hazard a guess that I'll be off Twitter within five years, or that the site will no longer exist in anything like its current form. My time on Twitter is unrepresentative, and probably can't last. But so long as it does, and the benefits remain, I'll "be" there, and I think the reasons I've offered are sufficient to justify the decision.

The value of keeping up with the news

If you've been paying attention the last two weeks, you know that an ongoing controversy erupted Saturday, January 19, and is still unfolding in one of the many seemingly endless iterations such controversies generate today through social media, op-eds, and the like. Last week, in his newsletter, Alan Jacobs wrote this:

"On Tuesday morning, January 22, I read a David Brooks column about a confrontation that happened on the National Mall during the March for Life. Until I read that column I had heard nothing about this incident because I do not have a Facebook account, have deleted my Twitter account, don’t watch TV news, and read the news about once a week. If all goes well, I won’t hear anything more about the story. I recommend this set of practices to you all."

This got me thinking about a post Paul Griffiths wrote on his blog years ago, perhaps even a decade ago (would that he had kept that blog up longer!). He reflected on the ideal way of keeping up with the news—and, note well, this was before the rise of Twitter et al. as the driver of minute-by-minute "news" content. He suggested that there is no real good served in knowing what is going on day-to-day, whether that comes through the newspaper or the television. Instead, what one ought to do is slow the arrival of news to oneself so far as possible. His off-the-cuff proposal: subscribe to a handful of monthly or bimonthly publications ranging across the ideological spectrum and, preferably, with a more global focus, so as to avoid the parochialism not just of time but of space. Whenever the magazines or journals arrive, you devote a few hours to reading patient, time-cushioned reflection and reporting on the goings-on of the world—99% of which bears on your life not one iota—and then you continue on with your life (since, as should be self-evident to all of us, no one but a few family and friends needs to know what we think about it).

Consider how much saner your life, indeed all of our lives, would be if we did something like Griffiths' proposal. And think about how not doing it, and instead "engaging in the discourse," posting on Facebook, tweeting opinions, arguing online: how none of it does anything at all except raise blood pressure, foment discord, engender discontent, etc. Activists and advocates of local participatory democracy are fools if they think anything remotely like what we have now serves their goals. If we slowed our news intake, resisted the urge to pontificate, and paid more attention to the persons and needs and tasks before us, the world—as a whole and each of its parts—would be a much better place than it is at this moment.

Writers who read their mentions

There may be nothing more poisonous for the quality of a writer's work than "reading your mentions." You can tell immediately, when reading or listening to someone (say, in an interview or podcast), whether they do. Everything is couched, defensive, anticipating the inevitable "ur the wurst" tweet-reply or comment at the end of the article. Neither the style of writing nor the subject matters. It's present in politics as much as in sports journalism. I suppose in certain sub-cultures of theology it might actually be muted, because while the rabies theologorum is a vast, multi-headed beast, it feasts on numbers and passion. So you find it in evangelical arguments and intra-Catholic skirmishes—both of which communities are large enough to have sizeable Extremely Online contingents.

But academic theologians? Now that's a small group of folks. And surprisingly friendly online, at least in the corners I frequent.

Regardless, though, we're all susceptible to it. And by far the best writers, whatever their expertise, whatever their genre, whatever their politics or ideas, are those who write simultaneously for an imagined audience—you can't write for no one—yet an audience in no way represented by the writer's experience on social media. An audience of readers who aren't likely to tweet or comment, but who are there nonetheless, reading and thinking with the author.

They exist. Write for them. Don't read your mentions.