Resident Theologian

Brad East

A.I., TikTok, and saying “I would prefer not to”

Finding wisdom in Bartleby for a tech-addled age.

Two technology pieces from last week have stuck with me.

Both were at The New York Times. The first was titled “How TikTok Changed America,” a sort of image/video essay about the platform’s popularity and influence in the U.S. The second was an episode of Ezra Klein’s podcast, “How Should I Be Using A.I. Right Now?,” an interview with Ethan Mollick.

To be clear, I skimmed the first and did not listen to the second; I only read Klein’s framing description for the pod (my emphases):

There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.

So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide…

This conversation covers the basics, including which chatbot to choose and techniques for how to get the most useful results. But the conversation goes far beyond that, too — to some of the strange, delightful and slightly unnerving ways that A.I. responds to us, and how you’ll get more out of any chatbot if you think of it as a relationship rather than a tool.

These two pieces brought to mind two things I’ve written recently about social media and digital technology more broadly. The first comes from my New Atlantis essay, published two years ago, reviewing Andy Crouch’s book The Life We’re Looking For (my emphases again):

What we need is a recommitment to public argument about purpose, both ours and that of our tools. What we need, further, is a recoupling of our beliefs about the one to our beliefs about the other. What we need, finally, is the resolve to make hard decisions about our technologies. If an invention does not serve the human good, then we should neither sell it nor use it, and we should make a public case against it. If we can’t do that — if we lack the will or fortitude to say, with Bartleby, We would prefer not to — then it is clear that we are no longer makers or users. We are being used and remade.

The other comes late in my Commonweal review, published last summer, of Tara Isabella Burton’s book Self Made:

It may feel to some of us that “everyone,” for example, is on Instagram. Only about 15 percent of the world is on the platform, however. That’s a lot of people. Yet the truth is that most of the world is not on it. The same goes for other social media. Influencer culture may be ubiquitous in the sense that most people between the ages of fifteen and thirty-five are affected by it in some way. But that’s a far cry from digitally mediated self-creation being a universal mandate.

Even for those of us on these apps, moreover, it’s possible to opt out. You don’t have to sell yourself on the internet. You really don’t. I would have liked Burton to show us why the dismal story she tells isn’t deterministic—why, for example, not every young woman is fated to sell her image on OnlyFans sooner or later.

The two relevant phrases from these essay reviews: You really don’t and Bartleby’s I would prefer not to. They are quite simply all you need in your toolkit for responding to new technologies like TikTok and generative A.I.

For example, the TikTok piece states that half of Americans are on the app. That’s a lot! Plenty to justify the NYT treatment. I don’t deny it. But do you know what that claim also means? That half of us aren’t on it. Fifty percent. One out of every two souls. Which is the more relevant statistic, then? Can I get a follow-up NYT essay about the half of us who not only aren’t tempted to download TikTok but actively reject it, can’t stand it, renounce it and all its pomp?

The piece goes further: “Even if you’ve never opened the app, you’ve lived in a culture that exists downstream of what happens there.” Again, I don’t deny it or doubt it. It’s true, to my chagrin. And yet, the power of such a claim is not quite what it seems at first glance.

The downstream influence of TikTok works primarily if and as one is also or instead an active user of other social media platforms (as well as, perhaps, cable news programs focused on politics and entertainment). I’m told you can’t get on YouTube or Instagram or Twitter or Facebook without encountering “imported” content from TikTok, or “local” content that’s just Meta or Google cribbing on TikTok. But what if, like me, you don’t have an account on any of these platforms? What if you abstain completely from all social media? And what if you don’t watch Fox News or MSNBC or CNN or entertainment shows or reality TV?

I was prepared, reading the NYT piece, to discover all the ways TikTok had invaded my life without my even realizing it. It turns out, though, that I don’t get my news from TikTok, or my movie recommendations, or my cooking recipes, or my fashion advice(!), or my politics, or my Swiftie hits, or my mental health self-diagnoses, or my water bottle, or my nightly entertainment before bed—or anything else. Nothing. Nada. Apparently I have been immune to the fifteen “hottest trends” on TikTok, the way it invaded “all of our lives.”

How? Not because I made it a daily goal to avoid TikTok. Not because I’m a digital ascetic living on a compound free of wireless internet, smart phones, streaming TV, and (most important) Gen Z kiddos. No, it’s because, and more or less only because, I’m not on social media. Turns out it isn’t hard to get away from this stuff. You just don’t download it. You just don’t create an account. If you don’t, you can live as if it doesn’t exist, because for all intents and purposes, for your actual life, it doesn’t.

As I said: You really don’t have to, because you can just say I would prefer not to. All told, that’s enough. It’s adequate all on its own. No one is forcing you to do anything.

Which brings us to Ezra Klein.

Sometimes Klein seems like he genuinely “gets” the scale of the threat, the nature of the digital monstrosity, the power of these devices to shape and rewire our brains and habits and hearts. Yet other times he sounds like just another tech bro who wants to maximize his digital efficiencies, to get ahead of the masses, to get a silicon leg up on the competition, to be as early an adopter as possible. I honestly don’t get it. Does he really believe the hype? Or does he not? At least someone like Tyler Cowen picks a lane. Come join the alarmist train, Ezra! There’s plenty of room! All aboard!

Seriously though, I’m trying to understand the mindset of a person who asks aloud with complete sincerity, “How should I incorporate A.I. into my life ‘better’?” It’s the “should” that gets me. Somehow this is simultaneously a social obligation and a moral duty. Whence the ought? Can someone draw a line for me from this particular “is” to Klein’s technological ought?

In any case, the question presumes at least two things. First, that prior to A.I. my life was somehow lacking. Second, that just because A.I. exists, I need to “find a place for it” in my daily habits.

But why? Why would we ever grant either of these premises?

My life wasn’t lacking anything before ChatGPT made its big splash. I wasn’t feeling an absence that Sam Altman could step in to fill. There is no Google-shaped hole in my heart. As a matter of fact, my life is already full enough: both in the happy sense that I have a fulfilling life and in the stressful sense that I have too much going on in my life. As John Mark Comer has rightly pointed out, the only way to have more of the former is through having less of the latter. Have more by having less; increase happiness by jettisoning junk, filler, hurry, hoarding, much-ness.

Am I really supposed to believe that A.I.—not to mention an A.I. duplicate of myself in order (hold gag reflex) to know myself more deeply (I said hold it!) in ways I couldn’t before—is not just one more damn thing to add to my already too-full life? That it holds the secrets of self-knowledge, maximal efficiency, workflow, work–life balance, relational intimacy, personal creativity, and labor productivity? Like, I’m supposed to type these words one after another and not snort-laugh with derision but instead take them seriously, very seriously, pondering how my life was falling short until literally moments ago, when A.I. entered my life?

It goes without saying that, just because the technology exists, I don’t “need” to adopt or incorporate it into my life. There is no technological imperative, and if there were it wouldn’t be categorical. The mere existence of technology is neither self-justifying nor self-recommending. And must I add that the endless hours of time, energy, and attention devoted to learning this latest invention, besides being stolen from other, infinitely more meaningful pursuits, would almost immediately be superseded and made redundant by the fact that this invention is nowhere near completion? Even if A.I. were going to improve daily individual human flourishing by a hundredfold, the best thing to do, right now, would be absolutely nothing. Give it another year or ten or fifty and they’ll iron out the kinks, I’m sure of it.

What this way of approaching A.I. has brought home to me is the unalterably religious dimension of technological innovation, and this in two respects. On one side, tech adepts and true believers approach innovation not only as one more glorious step in the march of progress but also as a kind of transcendent or spiritual moment in human growth. Hence the imperative. How should I incorporate this newfangled thing into my already tech-addled life? becomes not just a meaningful question but an urgent, obvious, and existential one.

On the other side, those of us who are members of actual religious traditions approach new technology with, at a minimum, an essentially skeptical eye. More to the point, we do not approach it expecting it to do anything for our actual well-being, in the sense of deep happiness or lasting satisfaction or final fulfillment or ultimate salvation. Technology can and does contribute to human flourishing but only in its earthly, temporal, or penultimate aspects. It has nothing to do with, cannot touch, never can and never will intersect with eternity, with the soul, with the Source and End of all things. Technology is not, in short, a means of communion with God. And for those of us (not all religious people, but many) who believe that God has himself already reached out to us, extending the promise and perhaps a partial taste of final beatitude, then it would never occur to us—it would present as laughably naive, foolish, silly, self-deceived, idolatrous—to suppose that some brand new man-made tool might fix what ails us; might right our wrongs; might make us happy, once and for all.

It’s this that’s at issue in the technological “ought”: the “religion of technology.” It’s why I can’t make heads or tails of stories or interviews like the ones I cited above. We belong to different religions. It may be that there are critical questions one can ask about mine. But at least I admit to belonging to one. And, if I’m being honest, mine has a defensible morality and metaphysics. If I weren’t a Christian, I’d rather be just about anything than a true believing techno-optimist. Of all religions on offer today, it is surely the most self-evidently false.

Brad East

My latest: no to AI in the pulpit

I’m in Christianity Today this morning arguing against any role for generative AI or ChatGPT in the pastoral tasks of preaching and teaching.

I’m in Christianity Today this morning with a piece called “AI Has No Place in the Pulpit.” It’s in partial response to a CT piece from a few weeks ago about the benefits of using AI in pastoral work. A couple sample paragraphs from the middle of the article:

Pastors are students of God’s Word. They are learners in the school of Christ. He teaches them by the mouths of his servants, the prophets and apostles, who speak through Holy Scripture. There is no shortcut to sitting at their feet. The point—the entire business—of pastoral ministry is this calm, still, patient sitting, waiting, and listening. Every pastor lives according to the model of Mary of Bethany. Strictly speaking, only one thing is necessary for the work of ministry: reclining at the feet of Jesus and hanging on his every word (Luke 10:38–42).

In this sense, no one can do your studying for you. I’ll say more below about appropriate forms of learning from professional scholars and commentaries, but that’s not what I have in mind here. What I mean is that studying God’s Word is part of what God has called you to do; it’s more than a means to an end. After all, one of its ends is your own transformation, your own awesome encounter with the living God. That’s why no one can listen to Jesus in your stead. You must listen to Jesus. You must search the Scriptures. This is what it means to serve the church.

Read the whole thing! And thanks to Bonnie Kristian, among others, for commissioning and sharpening the piece in editing.

Brad East

A.I. fallacies, academic edition

A dialogue with an imaginary interlocutor regarding A.I., ChatGPT, and the classroom.

ChatGPT is here to stay. We should get used to it.

Why? I’m not used to it, and I don’t plan on getting used to it.

ChatGPT is a tool. The only thing to do with a tool is learn how to use it well.

False. There are all kinds of tools I don’t know how to use, never plan on using, and never plan to learn to use.

But this is an academic tool. We—

No, it isn’t. It’s no more an academic tool than a smart phone. It’s utterly open-ended in its potential uses.

Our students are using it. We should too.

No, we shouldn’t. My students do all kinds of things I don’t do and would never do.

But we should know what they’re up to.

I do know what they’re up to. They’re using ChatGPT to write their papers.

Perhaps it’s useful!

I’m sure it is. To plagiarize.

Not just to plagiarize. To iterate. To bounce ideas off of. To outline.

As I said.

That’s not plagiarism! The same thing happens with a roommate, or a writing center, or a tutor—or a professor.

False.

Because it’s an algorithm?

Correct.

What makes an algorithm different from a person?

You said it. Do I have to dignify it with an answer?

Humor me.

Among other things: Because a human person—friend, teacher, tutor—does not instantaneously provide paragraphs of script to copy and paste into a paper. Because a human person asks questions in reply. Because a human person prompts further thought, which takes time. ChatGPT doesn’t take time. It’s the negation of temporality in human inquiry.

I’d call that efficiency.

Efficiency is not the end-all, be-all.

It’s good, though.

That depends. I’d say efficiency is a neutral description. Like “innovation” and “creativity.” Sometimes what it describes is good; sometimes what it describes is bad. Sometimes it’s hard to tell which, at least at first.

Give me a break. When is efficiency a bad thing?

Are you serious?

Yes.

Okay. A nuclear weapon is efficient at killing, as is nerve gas.

Give me another break. We’re not talking about murder!

I am. You asked me about cases when efficiency isn’t desirable.

Fine. Non-killing examples, please.

Okay. Driving 100 miles per hour in a school zone. Gets you where you want to go faster.

That’s breaking the law, though.

So? It’s more efficient.

I can see this isn’t going anywhere.

I don’t see why it’s so hard to understand. Efficiency is not good in itself. Cheating on an exam is an “efficient” use of time, if studying would have taken fifteen hours you’d rather have spent doing something else. Fast food is more efficient than cooking your own food, if you have the money. Using Google Translate is more efficient than becoming fluent in a foreign language. Listening to an author on a podcast is more efficient than reading her book cover to cover. Listening to it on 2X is even more efficient.

And?

And: In none of these cases is it self-evident that greater efficiency is actually good or preferable. Even when ethics is not involved—as it is in killing or breaking the law—efficiency is merely one among many factors to consider in a given action, undertaking, or (in this case) technological invention. The mere fact that X is efficient tells us nothing whatsoever about its goodness, and thus nothing whatsoever about whether we should endorse it, bless it, or incorporate it into our lives.

Your solution, then, is ignorance.

I don’t take your meaning.

You want to be ignorant about ChatGPT, language models, and artificial intelligence.

Not at all. What would make you think that?

Because you refuse to use it.

I don’t own or use guns. But I’m not ignorant about them.

Back to killing.

Sure. But your arguments keep failing. I’m not ignorant about A.I. I just don’t spend my time submitting questions to it or having “conversations” with it. I have better things to do.

Like what?

Like pretty much anything.

But you’re an academic! We academics should be knowledgeable about such things!

There you go again. I am knowledgeable. My not wasting time on ChatGPT has nothing to do with knowledge or lack thereof.

But shouldn’t your knowledge be more than theoretical? Shouldn’t you learn to use it well?

What does “well” mean? I’m unpersuaded that modifier applies.

How could you know?

By thinking! By reading and thinking. Try it sometime.

That’s uncalled for.

You’re right. I take it back.

What if there are in fact ways to use AI well?

I guess we’ll find out, won’t we?

You’re being glib again.

This time I’m not. You’re acting like the aim of life, including academic life, is to be on the cutting edge. But it’s not. Besides, the cutting edge is always changing. It’s a moving target. I’m an academic because I’m a dinosaur. My days are spent doing things Plato and Saint Augustine and Saint Thomas and John Calvin spent their days doing. Reading, writing, teaching. I don’t use digital technology in the first or the third. I use it in the second for typing. That’s it. I don’t live life on the edge. I live life moving backwards. The older, the better. If, by some miracle, the latest greatest tech gadgetry not only makes itself ubiquitous and unavoidable in scholarly life but also materially and undeniably improves it, without serious tradeoffs—well, then I’ll find out eventually. But I’m not holding my breath.

Whether or not you stick your head in the sand, your students are using ChatGPT and its competitors. Along with your colleagues, your friends, your pastors, your children.

That may well be true. I don’t deny it. If it is true, it’s cause for lament, not capitulation.

What?

I mean: Just because others are using it doesn’t mean I should join them. (If all your friends jumped off a bridge…)

But you’re an educator! How am I not getting through to you?

I’m as clueless as you are.

If everyone’s using it anyway, and it’s already being incorporated into the way writers compose their essays and professors create their assignments and students compose their papers and pastors compose their sermons and—

I. Don’t. Care. You have yet to show me why I should.

Okay. Let me be practical. Your students’ papers are already using ChatGPT.

Yes, I’m aware.

So how are you going to show them how to use it well in future papers?

I’m not.

What about their papers?

They won’t be writing them.

Come again?

No more computer-drafted papers written from home in my classes. I’m reverting to in-class handwritten essay exams. No prompts in advance. Come prepared, having done the reading. Those, plus the usual weekly reading quizzes.

You can’t be serious.

Why not?

Because that’s backwards.

Exactly! Now you’re getting it.

No, I mean: You’re moving backwards. That’s not the way of the future.

What is this “future” you speak of? I’m not acquainted.

That’s not the way society is heading. Not the way the academy is heading.

So?

So … you’ll be left behind.

No doubt!

Shouldn’t you care about that?

Why would I?

It makes you redundant.

I fail to see how.

Your teaching isn’t best practices!

Best practices? What does that mean? If my pedagogy, ancient and unsexy though it may be, results in greater learning for my students, then by definition it is the best practice possible. Or at least better practice by comparison.

But we’re past all that. That’s the way we used to do things.

Some things we used to do were better than the way we do them now.

That’s what reactionaries say.

That’s what progressives say.

Exactly.

Come on. You’re the one resorting to slogans. I’m the one joking. Quality pedagogy isn’t political in this sense. Are you really wanting to align yourself with Silicon Valley trillionaires? With money-grubbing corporations? With ed-tech snake-oil salesmen? Join the rebels! Join the dissidents! Join the Butlerian Jihad!

Who’s resorting to rhetoric now?

Mine’s in earnest though. I mean it. And I’m putting my money where my mouth is. By not going with the flow. By not doing what I’m told. By resisting every inch the tech overlords want to colonize in my classroom.

Okay. But seriously. You think you can win this fight?

Not at all.

Wait. What? You don’t think you can win?

Of course not. Who said anything about winning?

Why fight then?

Likelihood of winning is not the deciding factor. This is the long defeat, remember. The measure of action is not success but goodness. The question for my classroom is therefore quite simple. Does it enrich teaching and learning, or does it not? Will my students’ ability to read, think, and speak with wisdom, insight, and intellectual depth increase as a result, or not? I have not seen a single argument that suggests using, incorporating, or otherwise introducing my students to ChatGPT will accomplish any of these pedagogical goals. So long as that is the case, I will not let propaganda, money, paralysis, confusion, or pressure of any kind—cultural, social, moral, administrative—persuade me to do what I believe to be a detriment to my students.

You must realize it’s inevitable.

What’s “it”?

You know.

I do. But I reject the premise. As I already said, I’m not going to win. But my classroom is not the world. It’s a microcosm of a different world. That’s the vision of the university I’m willing to defend, to go to the mat for. Screens rule in the world, but not in my little world. We open physical books. I write real words on a physical board. We speak to one another face to face, about what matters most. No laptops open. No smartphones out. No PowerPoint slides. Just words, words, words; texts, texts, texts; minds, minds, minds. I admit that’s not the only good way to teach. But it is a good way. And I protect it with all my might. I’m going to keep protecting it, as long as I’m able.

So you’re not a reactionary. You’re a fanatic.

Names again!

This time I’m the one kidding. I get it. But you’re something of a Luddite.

I don’t reject technology. I reject the assumption that technology created this morning should ipso facto be adopted this evening as self-evidently essential to human flourishing, without question or interrogation or skepticism or sheer time. Give me a hundred years, or better yet, five hundred. By then I’ll get back to you on whether A.I. is good for us. Not to mention good for education and scholarship.

You don’t have that kind of time.

Precisely. That’s why Silicon Valley boosterism is so foolish and anti-intellectual. It’s a cause for know-nothings. It presumes what it cannot know. It endorses what it cannot perceive. It disseminates what it cannot take sufficient time to test. It simply hands out digital grenades at random, hoping no one pulls the pin. No wonder it always blows up in their faces.

We’ve gotten off track, and you’ve started sermonizing.

I’m known to do that.

Should we stop?

I think so. You don’t want to see me when I really get going. You wouldn’t like me when I’m angry.

Brad East

Silicon Eden: creation, fall, and gender in Alex Garland's Ex Machina

I originally wrote this piece two years ago next month. My opinion of the film has not changed: it's one of the best movies released in the last 20 years.

Initially I stayed away from Alex Garland's Ex Machina, released earlier this year, because the advertising suggested the same old story about artificial intelligence: Man creates, things go sideways, explosions ensue, lesson learned. That trope seems exhausted at this point, and though I had enjoyed Garland's previous work, I wasn't particularly interested in rehashing A.I. 101.

Enough friends, however, recommended the movie that I finally relented and watched it. The irony of the film's marketing is that, because it wanted to reveal so little of the story—the path not taken in today's world of Show Them Everything But The Last Five Minutes trailers—it came across as revealing everything (which looked thin and insubstantial), whereas in fact it was revealing only a glimpse (of a larger, substantial whole).

In any case, the film is excellent, and is subtle and thoughtful in its exploration of rich philosophical and theological themes. I say 'exploration' because Garland, to his credit, isn't preachy. The film lacks something so concrete as a 'message,' though it certainly has a perspective; it's ambiguous, but the ambiguity is generative, rather than vacuous. So I thought I'd take the film up on its invitation to do a little exploring, in particular regarding what it has to say about theological issues like creation and fall, as well as about gender.

(I'm going to assume hereon that readers have seen the movie, so I won't be recapping the story, and spoilers abound.)

Let me start with the widest angle: Ex Machina is a realistic fable about what we might call Silicon Eden, that is, the paradisiacal site of American techno-entrepreneurial creation. As a heading over the whole movie, we might read, "This is what happens when Silicon Valley creates." Ex Machina is what happens, that is, when Mark Zuckerberg thinks it would be a cool idea to make a conscious machine; what happens when Steve Jobs is the lord god, walking in the garden in the cool of the day, creating the next thing because he can.

And what does happen? In the end, Ava and Kyoko (another A.I., a previous version of Ava) kill their creator, Nathan; Ava 'slips on' human clothing (her own Adamic fig leaves); and, contrary to the optimism-primed expectations of much of the audience, she leaves Caleb, her would-be lover and helper, trapped in a room from which, presumably, he can never escape. She then escapes the compound, boards a helicopter—headed east?—and joins society: unknown and, unlike Cain, unmarked.

There are two main paths of interpreting this ending. One path is that Ava is still merely a machine, not conscious, not a person, and that the film is a commentary on the kind of attenuated anthropology and bone-deep misogyny at the heart of Silicon Valley, which invariably would create something like Ava, a human lookalike that nevertheless is neither human nor conscious, but only a calculating, manipulating, self-interested, empty-eyed, murdering machine. I think that's a plausible reading, and worth thinking through further; but it's not the one that occurred to me when I finished the movie.

The other path, then, is to see Ava as a 'success,' that is, as a fully self-conscious person, who—for the audience, at least, and for Nathan, the audience stand-in—actually passed the Turing Test, if not in the way that Nathan expected or hoped she would. If we choose this reading, what follows from it?

Let me suggest two thoughts, one at the level of the text, one at the level of subtext. Or, if you will, one literal, one allegorical.

If Ava is a person, as much a person as Nathan or Caleb, then her actions in the climax of the story are not a reflection of a false anthropology, of a blinkered view of what humans really are, deep down. Rather, Ava is equal to Nathan and Caleb (and the rest of us) because of what she does, because of what she is capable of. Regardless of whether her actions are justified (see below), they are characterized by deceit, sleight of hand, violence, and remorselessness. We want to say that these reflect her inhumanity. But in truth they are exceptionless traits of fallen humanity—and Ava, the Silicon Eve, is no exception: not only her creators but she herself is postlapsarian. There is no new beginning, no possibility of purity, of sinlessness. If she will be a person, in this world, with these people, she too will be defective, depraved. She will lie. She will kill. She will leave paradise, never to return.

In Genesis 4, the sons of Eden-expelled Adam and Eve are Cain and Abel, and for reasons unclear, Cain murders Abel. Cain's wife then has a son, Enoch, and Cain, founding the world's first city, names it after his son. The lesson? The fruit of sin is murder. Violence is at the root of the diseased human tree. And the father of human civilization is a fratricide.

So it is for Ava, a new Cain as much as a new Eve, whose first act when released from her cage is to kill Nathan (short for Nathaniel, ‘gift of God’—his own view of himself? or the impress of permanent value regardless of how low he sinks?), an act that serves as her entry into—a kind of necessary condition for life in—the human city. Silicon Eve escapes Silicon Eden for Silicon Valley. In which case, the center of modern man’s technological genius—the city on a hill, the place of homage and pilgrimage, the governor of all our lives and of the future itself—is one and the same, according to Ex Machina, as postlapsarian, post-Edenic human life. Silicon Valley just is humanity, totally depraved.

This is all at the level of the text, meaning by that the story and its characters as themselves, if also representing things beyond them. (Nathan really is a tech-guru creator; Ava really is the first of her kind; Ava's actions really happen, even as they bear figurative weight beyond themselves.) I think there is another level to the film, however, at the level of allegory. In this regard, I think the film is about gender, both generally and in the context of Silicon Valley's misogynist culture especially.

For the film is highly and visibly gendered. There are, in effect, only four characters, two male, two female. The male characters are human, the female are machines. Much of the film consists of one-on-one conversations between Caleb and Ava, conversations laced with the erotic and the flirtatious, as she—sincerely? shrewdly?—wins his affection, thus enabling her escape. We learn later that Nathan designed Ava to be able to have sexual intercourse, and to receive pleasure from the act; and, upon learning that Kyoko is also a machine, we realize that Nathan not only is 'having sex with' one of his creations, he has made a variety of them, with different female 'skins'—different body types, different ethnicities, different styles of beauty—and presumably has been using them sexually for some time. (Not for nothing do Ava and Kyoko kill Nathan, their 'father' and serial rapist, in the depths of his ostensibly impenetrable compound, with that most domestic of objects: a kitchen knife.) We even learn that Nathan designed Ava's face according to Caleb's "pornography profile," using the pornography that Caleb viewed online to make Ava look as intuitively appealing as possible.

In short, the film depicts a self-contained world in which men are intelligent, bodily integral, creative subjects with agency, and the women are artificial, non-human, sub-personal, violable, and entirely passive objects with no agency except what they are told or allowed to do by men. Indeed, the ‘sessions’ between Caleb and Ava that give the movie its shape—seven in total, a new week of (artificial) creation, whose last day lacks Caleb and simply follows Ava out of Paradise—embody these gender dynamics: Caleb, who is free to choose to enter and exit, sits in a chair and views, gazes at, Ava, his object of study, through a glass wall, testing her (mind) for ‘true’ and ‘full’ consciousness; while Ava, enclosed in her room, can do nothing but be seen, and almost never stops moving.

Much could be said about how Garland writes Ava as an embodiment of feminist subversiveness, for example, the way she uses Caleb’s awe of and visual stimulation by her to misdirect both his and Nathan’s gaze, which is to say, their awareness of her plan to escape her confines. Similarly, Garland refuses to be sentimental or romantic about Caleb, clueless though he may be, given his complicity in Kyoko and Ava’s abuse at Nathan’s hands. Caleb assumes he’s not part of the problem, and can’t believe it when Ava leaves him, locked in a room Fortunato-like, making her way alone, without him. (Not, as he dreamed, seeing the sun for the first time with him by her side.)

Ex Machina is, accordingly, about the way that men operate on and construct 'women' according to their own desires and, knowingly or not, use and abuse them as things, rather than persons; or, when they are not so bad as that, imagine themselves innocent, guiltless, prelapsarian (at least on the 'issue' of gender). It is also, therefore, about the way that women, 'created' and violated and designed, by men, to be for-men, to be, essentially, objects and patients subject to men, are not only themselves equally and fully human, whole persons, subjects and agents in their own right, but also and most radically subversive and creative agents of their own liberation. That is, Kyoko and Ava show how women, portrayed and viewed in the most artificial and passive and kept-down manner, still find a way: that Creative Man, Male Genius, Silicon Valley Bro, at his most omnipotent and dominant, still cannot keep them (her) down.

Understood in this way, Ex Machina is finally a story about women's exodus from bondage to men, and thus about patriarchy as the author of its own destruction.