Resident Theologian
My latest: a plea to teach college students about God, in The Raised Hand
A link to my essay answering the question: “What does every university and college student need to learn?”
The Raised Hand is a Substack run by the Consortium of Christian Study Centers and edited by Daniel G. Hummel of the Lumen Center (Madison, WI) and Upper House (serving the University of Wisconsin-Madison). This school year they’ve been running monthly essays written by Christian academics asked to respond to the following prompt: “What does every university and college student need to learn?”
Yesterday they published my entry, titled “The Knowledge of God.” Here’s how it opens:
I am tempted to begin by saying that the first thing every university and college student needs to learn is how to read. But I’ve written about that plenty elsewhere, and you can’t throw a stone on the internet without hitting someone writing about the crisis of literacy on campus and in the public schools. Since I’m a theologian, moreover, there’s some low-hanging fruit (no pun intended) just waiting for me to reach up and take it.
Here’s my real answer: If learning is about knowing, then every college student needs, through teaching, to come to know God. Another way to say this is that every student needs to learn how to pray.
Click here to read the whole thing. Thanks to Daniel for the invitation. And thanks to Sara Hendren, who already read and kindly boosted the piece. It was a fun one to write. Watch for a follow-up podcast conversation (sometime in the next week or two) that discusses the essay, also hosted by The Raised Hand.
My latest: on the late Albert Borgmann, in HHR
A link to my essay on the life and writings of the philosopher Albert Borgmann.
This morning The Hedgehog Review published an essay of mine called “The Gift of Reality.” It’s an extended introduction to and exposition of the life and writings of the late Albert Borgmann, including a review of his last book, published posthumously last January. Here’s a sample paragraph from the middle of the piece:
At the same time, while Borgmann may have been a critic of liberalism, he argued that “it should be corrected and completed rather than abandoned.” In this he reads as a less polemical Christopher Lasch or Wendell Berry, fellow democrats whose political vision—consisting among other things of family, fidelity, fortitude, piety, honor, honest work, local community, neighborliness, and thrift—is likewise invested in preserving and respecting reality. Such a vision is simultaneously homeless on the national stage and the richest fruit of the American political tradition.
“Name five things they got wrong”
Epistemic systems, master theories, principled fallibilism, and politics.
When I first hear or meet scholars committed to a comprehensive intellectual system, usually named after a person, I like to ask them a simple question (sometimes I do actually ask it, sometimes it’s only in my head): “Name five things they got wrong.” In other words, if you’re a Thomist or an Augustinian, a Rawlsian or a Schmittian, before I hear anything else about what you think, much less how your system solves my problems and answers all my questions, I want to hear how its namesake was fallible. If no answer is forthcoming, then I’m going to have a hard time taking anything else you say seriously.
I once heard John Hare, a committed Kantian and distinguished professor of philosophy at Yale (and, full disclosure, one of my own teachers), give a talk at a conference in which he began by listing four serious ways in which Kant erred in his philosophy. The rhetorical effect was powerful. It put the audience in a particular place: no longer were we preparing, perhaps reluctantly or suspiciously, to hear a zealous prophet of the Master deliver a timeless Word from the Inerrant One; we were now eager to learn from a fellow seeker of the truth, who happened to have found in Kant a useful aid for the journey.
The practice isn’t only rhetorically useful; it helps devotees of systems more generally, I think. “Calvin didn’t get everything right” is a kind of spiritual and mental hygiene for Reformed theologians. Call it an epistemic cleanse. All thinkers and writers should agree to this basic practice, or principle, but certainly Christians who believe in original sin should be the first to sign up for it.
The principle applies elsewhere, too. A Christian who says “My politics are exactly those of the DNC/GOP, plus Jesus” should immediately arouse suspicion, all the more so if the party advocate argues either that such politics are coterminous with Christian ethics or that this fact is obvious (or both). Whenever I meet Christians who are committed members of an American political party, the first thing I want to hear out of their mouths is, “Granted, I disagree with the platform on X, Y, and Z,” followed by, “but on the whole, I think the common good would be served best with them in power rather than the other guys.” That’s a perfectly reasonable position and can be defended variously and coherently. What can’t be defended is, “The ever-evolving platform of my party lacks any flaws,” much less, “My party’s platform is God’s political will written—a fact that should be plain as day to any open-eyed believer.”
Now that’s nonsense on stilts, and anybody who’s not already a political hack can admit it. Epistemic humility is a tonic in politics as it is everywhere else. And in terms of honesty with oneself as well as the capacity to persuade others, there’s no substitute for principled and explicit fallibilism.
Expertise
Six principles about expertise and credentials, pushing back against some of the alarm today that they are under attack.
That expertise is under attack is a common theme in journalism and academic writing today. I don’t doubt it, and I don’t doubt its importance. Expertise is real, and the loss of public confidence in persons whose office, education, training, or experience has historically granted them some measure of authority is an all too real problem. Implicit distrust of the very notion of authority, the very suggestion of expertise, makes a common life impossible, in more ways than one.
But there is a fundamental misunderstanding in defenses of expertise, and not only in high-minded venues. Even at the ordinary level of daily life or—where I spend my days—in the academy and the classroom, there is a basic confusion regarding what expertise is, what credentials are, and how either ought to function in social relations.
I wrote about this at length in my essay for The New Atlantis a year ago, titled “Statistics as Storytelling.” I won’t rehash that argument here. Let me do my best to boil it down into its basic components. I’ll spell it out in six principles.
The first principle of expertise is a defined scope of competence. An “expert”—if I’m honest, I hate that term; it’s a weasel word, invariably used to enhance status or dismiss objections; but I’ll keep using it here, since it’s the one in circulation—possesses some relevant knowledge about a particular domain: embryology, archeology, Greek sculpture, Moby-Dick. If the Melville scholar comments on anything outside her expertise, therefore, she is by definition no longer an expert, and thus bears no authority worthy of deference or respect. This is the Richard Dawkins phenomenon: He is welcome to speak and to write about philosophy and theology, but he does not do so as a philosopher or theologian, but as an evolutionary biologist addressing questions and subjects outside the scope of his formal training.
The second principle of expertise is that, without exception, all members of the same field, whether delimited by discipline or study or practice or training, disagree with one another about matters crucial to the field to which they belong. Expertise, in other words, is not about unanimity or agreement; it is about membership in a group defined by disagreement and disputation. It is about being party to the contest that is the field; being part of the argument that constitutes the guild. Expertise is not consensus: it’s the very opposite. It’s the entry point into a world of bitter, sometimes rancorous, conflict.
That doesn’t mean that everything is thereby in question. One must agree about certain things to disagree about others. Intelligible disagreement presupposes prior agreement. 2 + 2 = 4 is a premise for mathematicians’ arguments; certain claims build on others. That’s true of every realm of knowledge. But the interesting thing is always what’s not agreed upon. And outsiders are always surprised by just how little is agreed upon, even by like-minded experts in the same field.
The third principle of expertise is that, whenever and wherever what is called for in a given moment or in response to a certain question is not a set of empirical facts but a judgment, then the presumptive force of expertise is immediately qualified. There is no such thing as expertise in judgment. Or rather, there is, but one cannot be credentialed in it, for its name is wisdom. Wisdom is not and cannot be the result of formal education. It does not come with a degree or diploma. There are no letters to append to your name that signify wisdom. The least learned or educated person in the world may be wise, and the smartest or most educated person in the world may be foolish. (Indeed, Christians say that’s the normal run of things.) Good sense comes from living. Prudence is a virtue. Neither is the domain of an expert. There are no experts in good judgment, in wisdom, in prudence. As often as not, expertise functions as an obstacle to it, or a shield from it.
The fourth principle of expertise, then, is that typically what expertise provides is a set of facts or conditions, sometimes necessary but never sufficient, for the possibility of exercising wise judgment. It is true that I know more about Christian theology than most believers in the pews. That does not, in any way, mean that I am more likely to be right than they are about this or that Christian doctrine. A monk of Mount Athos is far wiser to submit to Orthodox tradition than to listen to me, even if I’ve read more Orthodox theologians than he has. A lifelong elderly believer who has never read theology may have keener insight into the mystery of the Eucharist than I do. True, I know the date of the second Ecumenical Council, and she may not. That’s not at issue though. What’s at issue is whether my expertise, such as it is, is either necessary or sufficient for knowing sound doctrine. And it is not. (If you’d like to meet a passel of heretical PhDs in theology, I can arrange an introduction.)
The same goes for biblical scholars. Knowledge of Greek gives you a leg up on having some plausible sense of what St. Paul might have had in mind in the mid-50s, writing to Corinth. But it doesn’t ensure that your exegesis of any New Testament text will be right, or even more likely to be right than the exegesis of an ordinary believer in the pew, ignorant of Greek as well as first-century Greco-Roman culture. Why? First, because New Testament scholars themselves don’t agree about how to read the text. The Pauline guild is that group of experts than which there is no more cantankerous or quarrelsome. Second, because the New Testament is Holy Scripture, and what God has kept from the learned he has revealed to the simple. That is, what God has to say in and through the canon may just as well bypass the intricacies of academic method as be accessed by them. In my experience, that is often the case.
The fifth principle of expertise is that all fields or domains that presuppose or assert normative (rather than empirical) claims logically may and necessarily will come into conflict. This is usually most quickly revealed in anthropology. An economist supposes homo sapiens to be a utility-maximizer, say, while the therapist sees a self-actualizer, and the theologian a sinner in need of Christ. To be sure, some aspects of these visions might be harmonious. But not all. Each, for example, takes a different and mutually opposed view of desire. Are all desires good? Are all to be affirmed or fulfilled? Is desire as such self-validating? And so on. The theologian is not departing from her realm in contesting the claims of the economist or the therapist, for the ground being contested is common to the three of them. It concerns the nature and purpose of the human person. Hence, when areas of expertise overlap, it is wholly proper for argument to ensue. No one’s view is invalidated in advance by dint of lacking the relevant credentials.
The sixth principle of expertise is that sometimes experts are wrong. It may be some group of experts, or all of them. The error may be partial or complete. But experts are wrong, and in fact, regularly so. That is to be expected, since there are no angelic experts, only human ones. The practice of knowledge is just that: a practice, and so subject to all the ordinary human foibles: vanity, greed, oversight, shortsightedness, limitations of every kind, fallibility, haste, contempt, and the rest. Sometimes we want something to be true when it isn’t. Sometimes we wish something were good when it isn’t. Sometimes we can’t stand the thought that our enemy isn’t wrong, and we work overtime to show that he is, or might be. Sometimes our blinders—the products of inheritance, culture, genetics, generation, education, prejudice, peers, parents, friends, what have you—keep us from seeing what is right in front of our noses. Whatever the reason, experts are far from infallible. The one thing you can take to the bank is that every expert in every field at this present moment believes something profoundly wrong or untrue in relation to his or her field, not to mention other fields. That includes me. The problem is just that none of us knows which one of our beliefs is the wrong one, amid all the right ones.
For experts of all kinds, the upshot should be a severe and sincere humility about the range and competence of our knowledge. For normal folks, such humility should be the expectation of experts, not the exception; and when it isn’t present, they are not wrong to be skeptical.
“Arbitrary”
A lexical nitpick with some contemporary public-facing writing by journalists and academics.
A lexical nitpick.
In contemporary journalistic as well as postmodern academic writing, and various intersections thereof, there is a habit of using the word “arbitrary” to describe what is anything but. In its usage, it is intended to denote or connote something both random and irrational. But invariably the referent is neither. There are always “reasons why,” and almost never are the reasons self-evidently wrong or bad. The author simply doesn’t accept them as persuasive. That doesn’t render the phenomenon arbitrary, though. It just makes it conventional, or arguable. But what isn’t?
I’ve seen this trend applied to weeding treatments in one’s lawn; to the distinction between men’s and women’s sports; to clothing style; to rules in a game; to questions of propriety or decorum in public spaces and in online images; to methods in biblical interpretation; and to much more besides. It’s a weird trick and usually a cheap one. It’s often unclear whether the author knows how inapt and unfair the usage is, and both options are bad: either dishonesty or superficiality.
There’s something else going on, too. Typically the author makes clear how magnificently aware he is of the social constructedness of everything in our common life. This, that, and the other thing is a social construction; ergo, it’s turtles all the way down; ergo, it’s all arbitrary anyway—nothing but a choice of arbitraries: ecce homo, behold the human condition.
But that is either the wrong conclusion to draw, or the author hasn’t followed it to its logical conclusion. If the former, then what he means is that social and cultural and political life is unavoidably and essentially contingent—and that is true. But contingency does not mean arbitrariness. Whereas if it is the latter, then absolute and irreducible arbitrariness as a feature of every aspect of reality entails that the author’s preference for (non-arbitrary) X over (arbitrary) Y is a nonstarter. He’s sawed off the branch he was sitting on. There’s no longer any argument to be had. In which case, he should drop the rhetorical gamesmanship and accept that his opponent’s position is no more or less arbitrary than his is.
Unless he already knows that, and is using words as a mere means to the end of getting what he, arbitrarily, wants. No inconsistency there, albeit at the price of reducing language to power and exalting the ego’s desires as final. The price, in other words, is nihilism as a social, political, and rhetorical philosophy. Which, if we’re honest, is sometimes what lies behind the public writing of academics and journalists today.
Ethics primer
There are two sets of fundamental distinctions in ethics. The first concerns the kind of ethics in view. The second concerns the difference between morality and other terms or concepts we are prone to confuse with morality.
There are two sets of fundamental distinctions in ethics. The first concerns the kind of ethics in view. By my count, there are four such:
First is descriptive ethics. This is, as the name suggests, ethics in a descriptive mode: it does not propose what is good or evil, what actions to pursue or avoid, but rather offers an account, meant to be accurate but not evaluative, of what individuals, groups, religions, or philosophies believe to be good or evil, etc.
Second is metaethics. This is a philosophical approach to ethics that takes a bird’s-eye view of the very task and concept of ethics, asking what is going on when we “do” ethics. If first-order ethics is the exercise of practical reason in real time on a daily basis by ordinary people, and if second-order ethics is critical rational reflection on the reasoning processes and resulting behaviors embodied in those daily habits of moral living, then metaethics is third-order ethics: critical rational reflection on what we’re up to when we engage in second-order reasoning about first-order living. Metaethics asks questions like, “What does the word ‘good’ have in common as between its use in, e.g., Thomist and Kantian discourses?” Or: “Is all second-order ethics ineluctably teleological?” So on and so forth.
Third is normative ethics. This is the second-order ethics mentioned above: critical rational reflection on what the good life consists in and what behaviors conduce to it. Put differently, normative ethics is prescriptive; it wants, at the end of its labors, to arrive at how you and I should live if we would be good persons. The mood or mode of normative ethics is the imperative (though not only the imperative): Thou shalt not murder, steal, lie, covet, and what not. Only rarely does anyone but academics do metaethics or descriptive ethics. More or less everyone does normative ethics, at least in terms of making appeals to concrete traditions of normative ethics on appropriate occasions: faced with a hard decision; helping a friend work through a problem; teaching a child how to behave; etc.
Fourth is professional ethics. This is the code of conduct or statute of behaviors proper to a particular profession, institution, job, business, or guild. It is a contingent set of recommendations for what makes a fitting or excellent member of said sphere: If you would practice law/medicine/whatever, then you may (not) do X, Y, Z … It is important to see that professional ethics is a derivative, secondary, and belated species of ethics. It is derivative because its principles stem from but are not synonymous with normative ethics. It is secondary because, when and where it requires actions that are (normatively) wrong or forbids actions that are (normatively) right, a person “bound” by professional ethics not only may but must transgress the lines drawn by his or her professional ethics, in service to the higher good required by normative ethics. By the same token, much of professional ethics consists of “best practices” that are neither moral nor immoral, but amoral. They aren’t, that is, about right or wrong in themselves, only about what it means to belong to this or that career or organization. Finally, professional ethics is belated in the sense that late modern capitalism generates byzantine bureaucracies beholden to professional ethics not as a useful, if loosely held, revolving definition of membership in a guild, but instead as hidebound labyrinths by which to protect said members from legal liability. In this way professional ethics partakes of a certain mystification, insofar as it suggests, by its language, that persons formed by its rules and principles will be good or virtuous in character, whereas in truth such persons are submitting to a form of ideological discipline that bears little, if any, relationship to the good in itself or what makes for virtuous character.
*
Having made these distinctions, we are in a position to move to a second set. The following distinctions concern the difference between morality (which is what ethics proper, or normative ethics, is about) and other terms or concepts we are prone to confuse with morality. By my count there are five such:
1. Morality and legality. This is the difference between what one ought to do and what one is permitted by law to do. So, e.g., it is morally wrong to cheat on your spouse, but in this country, at this time, adultery is not illegal. Or consider Jim Crow: “separate but equal” was legal for a time, but it was never moral. If a black person jumped into a public swimming pool full of white people, he did nothing wrong, even if the police had a legal pretext by which to apprehend or punish him.
2. Morality and freedom. This is the difference between what one ought to do and what one is capable of doing. E.g., when I ask student X whether it is morally permissible (or “ethical”) for student Y to cheat on an exam, eight times out of ten the answer is: “He can do what he wants.” But that’s not the question. No one disputes that he, student Y, “can do what he wants.” I’m asking whether, if what he wants is to cheat on an exam, that action is a moral one, i.e., whether it is right or wrong.
3. Morality and convention. This is the difference between what one ought to do and what one’s community (family, culture, religion) presupposes one ought to do. If I ask, “Is it right for person A to perform action B?” and someone answers, “Well, that’s the sort of thing that’s done in the community to which person A belongs,” the question has not yet been answered. Cultural assumptions are just that: assumptions. They may or may not be right. Ancient Rome permitted the paterfamilias of a household to expose a newborn infant who was unwanted or somehow deemed to be defective. But infanticide is morally wrong, regardless of whether or not a particular culture has permitted, encouraged, and/or legalized it. That is why we are justified in judging the ancient Roman practice of exposure to be morally wrong, even though they could well have responded, “But that’s the sort of thing that’s done here by and among us.”
4. Morality and beliefs-about-morality. This is the difference between what one ought to do and what people think one ought to do. In other words, no one is morally infallible; each of us, at any one time, has and has had erroneous ethical beliefs. This is why, from childhood through adulthood and onward, to be human is to undergo a lifelong moral education. It is likewise why it is intelligible for someone, even in midlife or older, to say, “You know, I used to believe that [moral claim] too; but recently my mind was changed.” This distinction also makes clear that relativism is false. It is not morally right for a serial killer to murder, even if he genuinely believes it is good for him, the serial killer, to do so. It is wrong whatever he believes, because murder is objectively wrong. The truth of murder’s wrongness is independent of his, your, or my beliefs about murder. If it is wrong, it is wrong prior to and apart from your and my agreement with its wrongness—though it is certainly desirable for you and me to come to see that murder is objectively wrong, and not merely wrong if/because we believe it to be wrong.
5. Morality and behavior. This is the difference between what one ought to do and what people actually do. No one believes human beings to be morally perfect; further, no one believes human beings to be perfectly consistent in the application of their moral convictions. E.g., whether or not you would lie in such-and-such a situation does not (yet) answer whether or not it would be right to do so. My students regularly trip up on this distinction. I ask: “Would it be morally justified for you knowingly to kill an innocent person in order to save five innocent persons?” They say: “I guess I would, if I were in that situation.” But as we have seen, that isn’t an answer to my question. The question is not whether you or I would do anything at all, only whether the behavior in question is morally right/wrong. Jews, Christians, and Muslims are doubly committed to the importance of this distinction, because we believe that all human beings are sinners. Our moral compass is broken, and although we may do good deeds, our proclivity runs in the other direction: to vanity, pride, selfishness, sloth, self-loathing, lust, envy, deceit, self-justification. If that belief about human sinfulness is true (and it is), then on principle we should never suppose that what anyone would do in a given situation, real or hypothetical, reveals the truth of what one ought to do. The latter question must be answered on other grounds entirely.
*
In my experience, these two sets of distinctions, if imbibed thoroughly or taught consistently, make a world of difference for students, Christians, and other persons of good will who are interested in understanding, pursuing, and deliberating on what makes for good, ethical, or moral human living. If we agreed on them in advance, we might even be able to have a meaningful conversation about contested ethical matters! Imagine that.
“X is not in the Bible”
In an annual course I teach on moral philosophy I assign a textbook that contains a chapter on X. The author of the textbook is an ethicist, and the ethics he seeks to present to his readers (imagined as college students) is general or universal ethics; though he doesn’t out himself as a Kantian, those with ears to hear spy it from the opening pages. In the chapter on X the author has a sidebar dedicated to religious, by which he means Christian, arguments about X. He observes blithely that the Bible doesn’t mention X, though he allows that one or two passages have sometimes been trotted out as containing implicit commentary on X. Accordingly, he deploys a few perfunctory historical-critical tropes (without citation, naturally) to show how and why the original canonical authors in their original cultural context could never have meant what contemporary readers of the text sometimes take them to mean with respect to X.
I always dedicate time in class to discuss this sidebar with students. It is a perfect encapsulation of the naive inanity of non-theological scholars commenting on Christian thought. So far as I can tell the author is utterly sincere. He really seems to think that Christian thought, whether moral or doctrinal, is reducible to explicit assertions in the Bible, double-checked and confirmed by historical critics to have been what the putative author(s) could have or likely would have meant by the words found in a given pericope.
I used to think this sort of stupidity was willful and malicious; I’ve come to see, however, that it is honest ignorance, albeit culpable in the extreme.
A few days ago I was reminded of this annual classroom discussion because I read an essay by a scholar I otherwise enjoy and regularly profit from, who used the exact same argument, almost identically formulated. And he really seems to have meant what he wrote. That is, he really seems to believe that if he—neither a Christian nor a theologian nor a scholar of religion nor a religious person at all—cannot find mention of X in the Bible, then it follows as a matter of course that:
Christians have no convictions about X;
Christians are permitted no convictions about X, that is, convictions with a plausible claim to be Christian;
no Christian teaching about X exists, past or present; and
Christianity as such neither has, nor has ever had, nor is it possible in principle that Christianity might have (or have had), authoritative doctrinal teaching on X.
All this, because he, the erudite rando, finds zero results when he does a word search for “X” on Biblegateway.com.
So far as I can tell, this ignorance-cum-stupidity—wedded to an eager willingness to write in public on such matters with casual authority—is widespread among folks of his ilk. They are true believers, and what they truly believe in is their own uninformed ineptitude.
The answer to the riddle of what’s going on here is not complicated. Anti- or post-Christian scholars, writers, and intellectuals in this country who spurn theological (not to mention historical) learning—after all, we don’t offer college courses in alchemy or astrology either—are sincerely unaware that American evangelicalism in its populist form is not representative of historic Christianity. They don’t realize that the modernist–fundamentalist debate is itself a uniquely modern phenomenon, and thus bears little relationship either to what Christianity is or to what one would find in Christian writings from any period from the second century to the seventeenth. They don’t know what they don’t know, and they’re too incurious to find out.
Were they to look, they would discover that Christianity has a living body of teaching on any range of topics. They would discover that over the centuries Christianity has had a teaching office, whose ordained leaders speak with varying degrees of authority on matters of pressing interest, including moral questions. They would discover that, in its acute American form, radical biblicism—the notion that Christians have beliefs only about things the Bible addresses directly and clearly—is one or two centuries old at most. They would discover that, even then, said biblicism describes a vanishingly small minority of global Christianity today. They would discover that the modernism on offer in Protestant liberalism is but the mirror image of fundamentalism, and therefore that to ape claims like “X isn’t even in the Bible—QED,” even intended as secular critique of conservative Christians, is merely an own goal: all it reveals is one’s own historical and cultural parochialism and basic theological incomprehension. They would discover that the church has never read the Bible the way either fundamentalists or historical critics do, in which case the word-search proof-text slam-dunk operation is not only irrelevant; in light of exegetical and theological tradition, it is liable to induce little more than a suppressed snort laugh.
They would discover, in a word, that the Bible does contain teaching about X, because the Bible contains teaching about all things (you just have to know where to look, that is, how to read); that the church’s tradition likewise contains considerable and consistent teaching about X, as any afternoon in a library or quick Google search would reveal; that Christianity is a living, not a dead thing; that Christian moral doctrine did not fossilize with the final breath of the last apostle; that postwar American evangelicalism is not the center of any universe, much less the Christian church’s.
They would discover—rather than learning the hard way—that asking someone in a position to know before writing about something of which one is wholly ignorant is a wise and generally admirable habit. But then, owning the fundies is a lot harder to do if you treat them as adults worthy of respect. This way is much more fun.
It’s all just a game anyway, right?
Ens and eggs and common sense
. . . not only the practical politics, but the abstract philosophies of the modern world have had this queer twist [of being contrary to common sense]. Since the modern world began in the sixteenth century, nobody's system of philosophy has really corresponded to everybody's sense of reality: to what, if left to themselves, common men would call common sense. Each started with a paradox: a peculiar point of view demanding the sacrifice of what they would call a sane point of view. That is the one thing common to Hobbes and Hegel, to Kant and Bergson, to Berkeley and William James. A man had to believe something that no normal man would believe, if it were suddenly propounded to his simplicity; as that law is above right, or right is outside reason, or things are only as we think them, or everything is relative to a reality that is not there. The modern philosopher claims, like a sort of confidence man, that if once we will grant him this, the rest will be easy; he will straighten out the world, if once he is allowed to give this one twist to the mind.
It will be understood that in these matters I speak as a fool; or, as our democratic cousins would say, a moron; anyhow as a man in the street; and the only object of this chapter is to show that the Thomist philosophy is nearer than most philosophies to the mind of the man in the street. I am not, like Father D'Arcy, whose admirable book on St. Thomas has illuminated many problems for me, a trained philosopher, acquainted with the technique of the trade. But I hope Father D'Arcy will forgive me if I take one example from his book, which exactly illustrates what I mean. He, being a trained philosopher, is naturally trained to put up with philosophers. Also, being a trained priest, he is naturally accustomed, not only to suffer fools gladly, but (what is sometimes even harder) to suffer clever people gladly. Above all, his wide reading in metaphysics has made him patient with clever people when they indulge in folly. The consequence is that he can write calmly and even blandly sentences like these. "A certain likeness can be detected between the aim and method of St. Thomas and those of Hegel. There are, however, also remarkable differences. For St. Thomas it is impossible that contradictories should exist together, and again reality and intelligibility correspond, but a thing must first be, to be intelligible."
Let the man in the street be forgiven, if he adds that the "remarkable difference" seems to him to be that St. Thomas was sane and Hegel was mad. The moron refuses to admit that Hegel can both exist and not exist; or that it can be possible to understand Hegel, if there is no Hegel to understand. Yet Father D'Arcy mentions this Hegelian paradox as if it were all in the day's work; and of course it is, if the work is reading all the modern philosophers as searchingly and sympathetically as he has done. And this is what I mean by saying that all modern philosophy starts with a stumbling-block. It is surely not too much to say that there seems to be a twist, in saying that contraries are not incompatible; or that a thing can "be" intelligible and not as yet "be" at all.
Against all this the philosophy of St. Thomas stands founded on the universal common conviction that eggs are eggs. The Hegelian may say that an egg is really a hen, because it is a part of an endless process of Becoming; the Berkeleian may hold that poached eggs only exist as a dream exists; since it is quite as easy to call the dream the cause of the eggs as the eggs the cause of the dream; the Pragmatist may believe that we get the best out of scrambled eggs by forgetting that they ever were eggs, and only remembering the scramble. But no pupil of St. Thomas needs to addle his brains in order adequately to addle his eggs; to put his head at any peculiar angle in looking at eggs, or squinting at eggs, or winking the other eye in order to see a new simplification of eggs. The Thomist stands in the broad daylight of the brotherhood of men, in their common consciousness that eggs are not hens or dreams or mere practical assumptions; but things attested by the Authority of the Senses, which is from God.
Thus, even those who appreciate the metaphysical depth of Thomism in other matters have expressed surprise that he does not deal at all with what many now think the main metaphysical question; whether we can prove that the primary act of recognition of any reality is real. The answer is that St. Thomas recognised instantly, what so many modern sceptics have begun to suspect rather laboriously; that a man must either answer that question in the affirmative, or else never answer any question, never ask any question, never even exist intellectually, to answer or to ask. I suppose it is true in a sense that a man can be a fundamental sceptic, but he cannot be anything else: certainly not even a defender of fundamental scepticism. If a man feels that all the movements of his own mind are meaningless, then his mind is meaningless, and he is meaningless; and it does not mean anything to attempt to discover his meaning. Most fundamental sceptics appear to survive, because they are not consistently sceptical and not at all fundamental. They will first deny everything and then admit something, if for the sake of argument—or often rather of attack without argument. I saw an almost startling example of this essential frivolity in a professor of final scepticism, in a paper the other day. A man wrote to say that he accepted nothing but Solipsism, and added that he had often wondered it was not a more common philosophy. Now Solipsism simply means that a man believes in his own existence, but not in anybody or anything else. And it never struck this simple sophist, that if his philosophy was true, there obviously were no other philosophers to profess it.
To this question "Is there anything?" St. Thomas begins by answering "Yes"; if he began by answering "No", it would not be the beginning, but the end. That is what some of us call common sense. Either there is no philosophy, no philosophers, no thinkers, no thought, no anything; or else there is a real bridge between the mind and reality. But he is actually less exacting than many thinkers, much less so than most rationalist and materialist thinkers, as to what that first step involves; he is content, as we shall see, to say that it involves the recognition of Ens or Being as something definitely beyond ourselves. Ens is Ens: Eggs are eggs, and it is not tenable that all eggs were found in a mare's nest.
Needless to say, I am not so silly as to suggest that all the writings of St. Thomas are simple and straightforward; in the sense of being easy to understand. There are passages I do not in the least understand myself; there are passages that puzzle much more learned and logical philosophers than I am; there are passages about which the greatest Thomists still differ and dispute. But that is a question of a thing being hard to read or understand: not hard to accept when understood. That is a mere matter of "The Cat sat on the Mat" being written in Chinese characters: or "Mary had a Little Lamb" in Egyptian hieroglyphics. The only point I am stressing here is that Aquinas is almost always on the side of simplicity, and supports the ordinary man's acceptance of ordinary truisms. For instance, one of the most obscure passages, in my very inadequate judgment, is that in which he explains how the mind is certain of an external object and not merely of an impression of that object; and yet apparently reaches it through a concept, though not merely through an impression. But the only point here is that he does explain that the mind is certain of an external object. It is enough for this purpose that his conclusion is what is called the conclusion of common sense; that it is his purpose to justify common sense; even though he justifies it in a passage which happens to be one of rather uncommon subtlety. The problem of later philosophers is that their conclusion is as dark as their demonstration; or that they bring out a result of which the result is chaos.
—G. K. Chesterton, Saint Thomas Aquinas (1933), 119–123. Last week I was walking home from work, listening to this book on audio, and when the narrator read the bolded portion above about Hegel, I yelped aloud, then had to stop in the middle of the street because I was laughing so hard.
Listening to Lewis
One of my goals for 2021 was to listen to fewer podcasts and more audiobooks—a double good, that. One of my strategies was to find novels and nonfiction on the shorter side, to gain some momentum and feel like I was making it through actual books rather than slogging through interminable chapters. One successful tack I happened upon was listening to classic shorter works of Christian thought I’d first read in my teens, specifically authors like Chesterton and Lewis. Of the latter’s books, I’ve “reread,” i.e. listened to, The Great Divorce, The Abolition of Man, The Problem of Pain, Reflections on the Psalms, Letters to Malcolm, The Weight of Glory, and Miracles.
I read most of these books in early high school. That means it’s been 20 years since I first opened their pages (though I’ve reread some of them since then, like The Great Divorce and The Screwtape Letters). During that interim I spent 13 years earning multiple degrees in biblical, religious, and theological studies, and am now in my fifth year teaching theology to undergraduates, in between publishing my first and second books. In other words, though one never “arrives” in the realm of theology, unlike my 16-year-old self, I do know one or two things about the topic now. I have, as the kids say, done the reading.
What have I made of Lewis on this side of that span, then? Before answering that question I have to address another matter. That matter is Lewis’s own stature, within the theological academy and without. There is nothing—and I do mean nothing—more plebeian, rustic, and déclassé in American scholarly theological writing, at least writing that aspires to be taken seriously, than quoting C. S. Lewis (in general, much less as an authority). The reasons for this scorn are numerous. Chief among them is Lewis’s ubiquity in American evangelicalism. It’s guilt by association. One doesn’t want to give aid and comfort to them, much less cite one of their treasured masters. But not only that. Often as not, the scholar in question was himself influenced by Lewis at some crucial point in his spiritual and intellectual journey. But now he has put away such childish things; this scholar is a man. And real men don’t quote C. S. Lewis.
You might think I’m exaggerating, but I’m not, at least in many cases. There is an element of spiritual patricide, of self-conscious graduation or expulsion or liberation from the sort of class—economic, cultural, or religious—that would think Lewis was a Serious Thinker on a par with the true leading lights of the twentieth century. And since Lewis was not and never claimed to be a formal, or formally trained, theologian or philosopher, he can justly be ignored or looked down upon as, at best, the ladder one kicks away after climbing up it; or, at worst, a second-rate apologist of the unwashed evangelical masses.
To which I say: what a bunch of bunk. Listening to Lewis these last six months has brought home to me just how silly all those patronizing caricatures of him are. His reputation among the masses is more than earned. His role as an intellectual “friend,” in Stanley Hauerwas’s words, to many a searching teenager and undergraduate, is wholly justified. I’m not going to sing all his praises—for his prose, his economy of thought, his vast erudition, his wit, his bracing moral gaze—nor overlook his shortcomings—on gender, for example, though sometimes he is prescient and insightful, other times he is a man of his time or just plain weird. No, what I want to point out is that Lewis was a “real,” that is to say a bona fide or unqualified, Christian thinker, an asterisk-free theologian in the classical mold.
Listening to stray paragraphs and casual asides in Lewis’s writing from the 1940s, one realizes that his lack of formal training protected him from every manner of silly fad then dominant in “up to date” theological scholarship. That doesn’t mean he would have had nothing to learn from, say, the late Barth. (I often wonder what Barth would have made of The Screwtape Letters, and what Lewis would have made of CD IV/1.) It just means that sometimes expertise cramps the mind instead of opening it up. Reading broadly in patristic, medieval, reformation, and early modern divines is not all one needs to do to become a theologian, or to think theologically; but it’s not far from the kingdom, either.
One finds in Lewis, for example, a systematically clear and precise presentation of the intrinsic importance and interrelatedness of an extensive array of doctrines: creation ex nihilo, the transcendence and sovereignty of God, the non-being of evil, the non-competitiveness of divine and human agency, the truth of human freedom and moral responsibility, the moral and noetic effects of sin, the status of creation as good but fallen and redeemed in Christ, and the attendant consequences for human knowledge and relation to the divine. One of the things I never realized I gleaned from Lewis before I ever read so-called “real” theology was his devastating critique of every form of scientism. He inoculates his readers against it. So often ruinous to young faith, scientism is seen, with Lewis’s help, for the philosophical sham it is. He is able to do this because he has intuited the scope and rationale of basic Christian doctrine at a deep level and, with the aid of his powers of imaginative but lucid description, reproduced it in prose that hides the enormous learning behind it and is therefore accessible to the average reader. But the latter operation does not attenuate the former fact. Indeed, combining the two is a far more demanding and impressive task than mastering a field but, as a result, being capable of speaking only in one dialect: namely, the dialect of the technical scholar.
I’m well aware that Lewis needs no defense from me. For half a century there has been a veritable publishing industry devoted to extolling his virtues, including his philosophical and theological skills. And there is a laudable freedom from anxiety in true devotees of Lewis: Why should they care whether he receives his due in the halls of power and influence? All true. And a good lesson for this status-anxious holder of an Ivy League doctorate. All the same, it was a happy realization for this lifelong student of Lewis’s that no shine has come off his works. They’re radiant as ever.
Secular Scruton
After Roger Scruton died last year, I resolved to read back through some of his most important writing on culture, philosophy, and politics. Two things in particular—beyond the usual, and correct, comments on his erudition, intelligence, and lucid prose style—struck me in doing so. The first is his temperament, or rather his temper. At times Scruton is excruciatingly just in both his tone and his treatment of those with whom he disagrees. This restraint approaches a kind of intellectual chastity: one senses this deep disgust with what I can only call a prurience of the mind, a prurience he resents in thinkers he despises and repudiates in the nations and cultures he loves. This reticence is of a piece with the sort of conservatism he represents and recommends to others.
At the same time, Scruton can also give vent to his hatreds and engage in passionate, even bitter, polemic. Polemic is a venerable rhetorical and argumentative mode, so I don’t mean this observation as a critique per se. Often the ideas and writers he aims his words at very much deserve it. But polemic is not a stable vehicle for fine-grained analysis and charitable understanding, and in Scruton’s work one sees where the polemic has worn down the patience and generosity and sheer mental calm that characterize so much of his other writing.
The second thing that struck me in reading back through Scruton—and this one surprised me—is how profoundly secular a thinker he is. I was surprised not because I thought Scruton an orthodox Christian but because, given his identity as a conservative and as a happy inheritor of Christian civilization, I anticipated an overall positive posture toward religious faith, practice, and thought. And to be sure, when Scruton is meditating on religious questions, he is eager to take seriously the claims of Jewish, Christian, and Muslim revelation as well as their traditions of reflection. But in his ordinary cultural and political writing, Scruton can be rather harsh toward both faith and theology. In fact, “theology” for him functions as an epithet with which to tarnish his enemies: twentieth-century leftist thinkers (like those in the Frankfurt School) embody an inscrutable and irrefutable “theology” by contrast to rational proposals subject to Enlightenment norms of disputation and argument. Elsewhere he heaps scorn on the concept of original sin, whether in its traditional form or in updated political mutations. Like a Rorty or a Scialabba or any other reputable philosopher from the last two centuries, he can refer offhandedly, presuming the reader’s nodding head, to how the great lights of the eighteenth and nineteenth centuries rendered faith in the supernatural moribund, or at least problematic, for reasonable and educated people. And he follows Kant et al in both rationalizing religion and reducing it to ethics, thereby explicitly making it a matter of private piety rather than public politics. At times there is—to this believer’s eyes—a vaguely sinister noble lie lingering on the edges of Scruton’s account of politics and religion: a Straussian (or Haidtian!) appreciation of religion for the masses while cordoning off its ostensibly inadjudicable and therefore strictly private implications from the rational public deliberations of the liberal nation-state. This streak of (Platonist? Hobbesian? Burkean? Oakeshottian?) toleration or even encouragement (by the few) of widespread false consciousness (in the many) is unbecoming, in my view, though it is native to a certain slice of secular or post-religious intellectual conservatism. Instead of keeping the kernel and tossing the shell, its adherents reverse the operation: keep the forms, they suggest, preserve the outward forms and traditions; but forget the faith at the center. Surely we have seen by now that that move does not work in actual practice. Form and content belong together. Remove one and the other withers and dies.
In any case, reading Scruton was a reminder of this crucial divide within the theory and among the philosophers of conservatism. Scruton has much to teach us on a range of matters, but for Christians, at least, his instruction comes with a certain proviso attached. Irreligiosity is usually associated with the left, but it is all too present on the right, too, only usually less openly hostile and thus more difficult to discern. Finding friends and forming alliances is harder than it seems.
Religious theism or irreligious atheism
Timothy Jackson teaches Christian ethics at Emory University. I was fortunate enough to take a class with him when I earned my MDiv at Candler School of Theology, the Methodist seminary on campus. I’m currently reading his latest book for a review I’ll write later this month; the book is about the Shoah, anti-Semitism, and Christian supersessionism.
Jackson is a prolific academic, and has written about, and in response to, all manner of thinkers and ideas. In 2014 he wrote a response to Ronald Dworkin’s posthumous book Religion Without God in the pages of the Journal of Law and Religion. It’s a perceptive, accessible introduction to Jackson’s generous mind and capacious approach to positions with which he disagrees. His writing is crystal clear, philosophically speaking, and it’s a pleasure to read such forthright Christian claims in a venue like JLR, in consideration of a figure like Dworkin. Here’s a sample:
For my part, I am far less confident that non-subjectivist aesthetics, ethics, and religion can survive without God. Where Dworkin perceives a third alternative, I suspect an either/or: I see no credible via media between irreligious atheism and religious theism. Biblical faith may be false, but, if so, we are left with some form of emotivism, existentialism, or pragmatism. We are consigned, that is, to constructing or inventing or just asserting our own values. Merely willed or fabricated ideals take us far from most Western normative disciplines, as Nietzsche realized. The notion that the beautiful, the good, and the true are objective was, for him, the last implausible vestige of Jewish and Christian theism. (Sometimes Nietzsche indicted Socratic and Platonic philosophy as well.) If the biblical God is dead, or missing, better to be frankly irreligious and to talk in terms of “power” and “fitness.” On this one point, it is hard to argue with the Antichrist.
I suspect that Nietzsche is correct: Christ—religious theism—and the Antichrist—irreligious atheism—exhaust our options. To side with the former as the truth of our condition is not to say that all artistic, virtuous, or faithful people must be self-consciously Christian or even professing theists. That is manifestly false. But it is to contend that atheism, whether it calls itself “religious” or “irreligious,” is mistaken because “every good and perfect gift is from above, coming down from the Father of lights, with whom there is no variation or shadow due to change” (James 1:17). We may fail to recognize the “Father of lights” and thus may not give Him credit, but without that Father, there would be no lamp even to hide under a bushel. God is omni-relevant, axiologically, even if He is obscure, epistemically.
Go read the rest. There’s a lot more where that came from.
A philosophical introduction
At the close of Jonathan Lear’s introduction to the second edition of his book Freud, he explains what he is and is not aiming to do in the book. Lear is always as sober, clear, and direct as he is here, but it is the confident lucidity of his stated approach to interpreting Freud’s ideas from a particular angle, with particular interests—ignoring any and all matters that would distract from those interests—that is noteworthy here. It should be a model for similar approaches in historical and systematic theology, not least when dealing with events, ideas, and persons as controversial as Freud (both the man and his legacy). Here’s Lear:
It is time to get clear on what I mean by a philosophical introduction. There are already many books that will introduce you to Freud the man, introduce you to the central ideas of psychoanalysis, locate Freud in the history of ideas or offer trenchant criticisms of his views. A philosophical introduction is different. A biographer will want to know what Freud’s life was like and, perhaps, how his ideas arose out of that life. An historian of ideas will want to know the historical context in which these ideas arose, and what influence they had on subsequent thought. A psychoanalytic introduction will aim to explain what the central concepts are, and how they work within psychoanalytic theory and practice. A philosophical introduction, by contrast, will want to show why these ideas matter for addressing philosophical problems that still concern us. Given this aim, there are bound to be aspects of such a book that, from any other perspective, appear strange. The book will pay scant attention to the details of Sigmund Freud’s life. Obviously, one has to be historically sensitive simply to read a book from another time and culture. But the emphasis will always be on why Freud’s ideas continue to have significance, not on how they arose. And Freud may not be the best arbiter of this. Nor is he the final arbiter of what counts as psychoanalysis. There may then be interpretations in this book to which Freud, the man, would have objected. His views are always significant, but psychoanalysis stays alive via a vibrant engagement with them.
That being said, I shall everywhere try to make the best possible case for Freud’s ideas and arguments. This is not because I have a desire to defend Freud, but because if we are going to see how these ideas might continue to matter, we need to see them in their best possible form. Obviously, there are important criticisms to be made of Freud and, more generally, of psychoanalysis. But we have to beware of a certain kind of argument from decadence. So, to give a notorious example, psychoanalysts are sometimes criticized for pulling rank on their patients. If their patients disagree with their interpretation, so the objection goes, then they are ‘resisting.’ No doubt this happens and, humanly speaking, it is awful when it does. But, philosophically speaking, the question is not whether some analysts are bullies. Rather, the question is, ‘When psychoanalysis is practiced well, is there even so a tendency towards bullying?’ Similarly with Freud: there is no doubt that he did not treat the patient he called Dora as well as he should have. Still, one fitting tribute to Dora is to learn from her case as much as we can about the possibilities for human freedom. The aim, then, is not to achieve a balanced historical view of who did what to whom, or who thought what when. Nor is it to make all the criticisms that might legitimately be made. It is to show why these ideas continue to matter insofar as a philosophical understanding of the human soul still matters. And so, when I do offer a criticism, it is because I think that the best possible construal of Freud’s position is still open to criticism and that this criticism is of philosophical significance.
Finally, this is a philosophical introduction. I do not pretend to be able to uncover the hidden philosophical meaning of psychoanalysis; I do mean to engage in a conversation with Freud. My hope is that the book will stimulate others to pursue these thoughts, for I am convinced they are crucial to our self-understanding.
The great cataract of nonsense
Good philosophy must exist, if for no other reason, because bad philosophy needs to be answered. The cool intellect must work not only against cool intellect on the other side, but against the muddy heathen mysticisms which deny intellect altogether. Most of all, perhaps we need intimate knowledge of the past. Not that the past has any magic about it, but because we cannot study the future, and yet need something to set against the present, to remind us that the basic assumptions have been quite different in different periods and that much which seems certain to the uneducated is merely temporary fashion. A man who has lived in many places is not likely to be deceived by the local errors of his native village: the scholar has lived in many times and is therefore in some degree immune from the great cataract of nonsense that pours from the press and the microphone of his own age.
—C. S. Lewis, “Learning in War-Time”
Anthropomorphism and analogy
Andrew Wilson has a lovely little post up using Herman Bavinck's work to show the "unlimited" scope of the Bible's use of anthropomorphism to talk about God. It's a helpful catalogue of the sheer volume and range of scriptural language to describe God and God's action. It's a useful resource, too, for helping students to grasp the notion that most of our speech about God is metaphorical, all of it is analogical, and none of it is less true for that.
In my experience, not only students but philosophers and theologians as well often imagine, argue, or take for granted that doctrine is a kind of improvement on the language of Scripture. The canon then functions as a rough draft, however authoritative: metaphysically precise discourse improves on it, or at least approximates the truth better than it does. Sometimes those parts of the canon that are literal or less anthropomorphic are permitted some lexical or semantic control. But in any case the idea is that arriving at non-metaphorical, and certainly non-anthropomorphic, language is the ideal.
But this is a mistake. Anthropomorphism is not an error or an accommodation to avoid. It's the vehicle of truth, the sanctified means of truthful talk about God. It may in principle speak more truly about God than its contrary. And Scripture's saturation in it would suggest that in fact it is God's chosen manner of communicating with us, and thus a privileged discursive mode for talk about God.
The upshot: theological accounts of analogy and language about God are meant not to sit in judgment on Scripture but rather to show how Scripture's language about God works. They are meant to serve the canon and to ground trust in canonical idiom, not to qualify it. "Given divine transcendence and the character of human language, how is what the Bible says about God true?" is the question to which the doctrine of analogy is an answer. Analogy does not mitigate the truth of Scripture's witness. It is a way of establishing it philosophically.
So that when the Bible says God has a face or arms or nostrils, or has wrath or grief or regret or love, or knows or forgets or begets or weds, the Christian is right to hear it as what it is: the word of God, trustworthy and true.
Louis Dupré on symbolism and ontology in religious language
Religious language must, by its very nature, be symbolic: its referent surpasses the objective universe. Objectivist language is fit only to signify things in a one-dimensional universe. It is incapable of referring to another level of reality, as art, poetry, and religion do. Rather than properly symbolizing, it establishes external analogies between objectively conceived realities. Their relation is allegorical rather than symbolic. A truly symbolic relation must be grounded in Being itself. Nothing exposes our religious impoverishment more directly than the loss of the ontological dimension of language. To overcome this, poets and mystics have removed their language as far as possible from everyday speech.
In premodern traditions, language remained closer to the ontological core which all things share and which intrinsically links them to one another. Symbols thereby participated in the very Being of what they symbolized, as they still do in great poetry. Religious symbols re-presented the divine reality: they actually made the divine present in images and metaphors. The ontological richness of the participatory presence of a truly symbolic system of signification appeared in the original conception of sacraments, rituals, icons, and ecclesiastical hierarchies.
The nominalism of the late Middle Ages resulted in a very different representation of the creature's relation with God. The world no longer appears as a divine expression except in the restricted sense of expressing the divine will. Finite reality becomes separated from its Creator. As a result, creatures have lost not only their intrinsic participation in God's Being but also their ontological communion with one another. Their relation becomes defined by divine decree. Nominalism not only has survived the secularization of modern thought, but has become radicalized in our own cybernetic culture, where symbols are reduced to arbitrary signs in an intramundane web of references, of which each point can be linked to any other point. The advantages of such a system need no proof: the entire scientific and technical functioning of contemporary society depends on it. At the same time, the modern mind's capacity for creating and understanding religious symbols has been severely weakened. Symbols have become man-made, objective signs, serviceable for making any reality part of a system without having to be part of that reality.
Recent theologians have attempted to stem the secular tide. Two of them did so by basically rethinking the relation between nature and grace, the main causes of today's secularism. Henri de Lubac undertook a historical critique of the modern separation of nature and supernatural. Not coincidentally, he also wrote a masterly literary study on religious symbolism before the nominalist revolution. In a number of works Hans Urs von Balthasar developed a theology in which grace, rather than being added to nature as a supernatural accident, constitutes the very depth of the mystery of Being. Being is both immanent and transcendent. Grace consists in its transcendent dimension. Whenever a poet, artist, or philosopher penetrates into the mystery of existence, he or she reveals an aspect of divine grace. Not only theology but also art and poetry, even philosophy, thereby regain a mystical quality, and religion resumes its place at the heart of human reality.
No program of theological renewal can by itself achieve a religious restoration. To be effective a theological vision requires a recognition of the sacred. Is the modern mind still capable of such a recognition? Its fundamental attitude directly conflicts with the conditions necessary for it. First, some kind of moral conversion has become indispensable. The immediate question is not whether we confess a religious faith, or whether we live in conformity with certain religious norms, but whether we are of a disposition to accept any kind of theoretical or practical direction coming from a source other than the mind itself. Such a disposition demands that we be prepared to abandon the conquering, self-sufficient state of mind characteristic of late modernity. I still believe in the necessity of what I wrote at an earlier occasion: "What is needed is a conversion to an attitude in which existing is more than taking, acting more than making, meaning more than function—an attitude in which there is enough leisure for wonder and enough detachment for transcendence. What is needed most of all is an attitude in which transcendence can be recognized again."
—Louis Dupré, Religion and the Rise of Modern Culture (2008), 115-117
A confusing error by John Gray
"The[] Jewish and Greek views of the world are not just divergent but irreconcilably opposed. Yet from its beginnings Christianity has been an attempt to join Athens with Jerusalem. Augustine's Christian Platonism was only the first of many such attempts. Without knowing what they are doing, secular thinkers have continued this vain effort" (29).
Coming from an otherwise admirably lucid and fair-minded thinker, this strikes me as a bizarre claim, in a number of ways.
First, Augustine was far from the first to "join" Platonist philosophy with Christian faith. His most prominent predecessor was (I can barely resist saying of course in all caps) Origen of Alexandria, whose influence spread far and wide, east and west.
Second, Gray's presentation suggests that Hellenization and Platonization commenced after Christianity's advent, after its creation as a post-Jewish phenomenon—indeed, apparently only after Constantine. But Ben Sirach and the Wisdom of Solomon and Philo and the apostle Paul and the book of Hebrews all predate both Augustine and Origen; most of them predate the establishment of mainstream Christianity by the end of the first century. Judaism and therefore messianic Nazarene Judaism were thoroughly Hellenized and, at the very least, exposed to Platonist thinking for centuries prior to Augustine, indeed were such at the very source, in the age of Tiberius and Claudius and Nero.
Third, there is no such thing as "the" Jewish or "the" Greek "view of the world." Nor, even if there were, would either be a hermetically sealed whole, in relation to which ideas and practices extrinsic to itself must necessarily be alien intrusions. True, Israel's scriptures are not Platonist. So what? Who is to say what is and what is not complementary between them? Who is to say what modifications or amendments or additions would or would not count as corruption?—as if there ever were a stable essence to one or the other in the first place. It is not as if Origen or Augustine took on Platonism wholesale; they clearly and directly and explicitly rejected certain philosophical ideas as inimical and contrary to the catholic faith. That's not syncretism or vain eclecticism. It's Christian theology, well and faithfully done. It might be untrue or imperfectly practiced, but it's not invalid or impossible on principle. How Gray could have come to such a conclusion I haven't the faintest clue.
Scruton, Eagleton, Scialabba, et al.—why don't they convert?
Yet it is never entirely clear to me why they themselves are not Christians, or at least theists of one sort or another. In The Meaning of Conservatism Scruton refers vaguely to "those for whom the passing of God from the world is felt as a reality." In his review of Marilynne Robinson's The Givenness of Things, Scialabba remarks that, for neuroscientists, "the metaphysical sense" of the soul is a "blank," and asks further, "wouldn't it be a bit perverse of God to have made His existence seem so implausible from Laplace to Bohr?" (Surely an affirmative answer to this spare hypothetical depends wholly on a shared premise that already presumes against the claims of revelation?) My sense is that Eagleton is perhaps something of a principled agnostic, though I've by no means read his work or the others' exhaustively. It wouldn't surprise me to learn that Scruton, as a philosopher, has addressed this question head-on. And Scialabba belongs explicitly to a tradition of thought that believes "metaphysics" to have been discredited once and for all.
But why? I mean: What are the concrete reasons why these specific individuals reject the claims of either historic Christianity or classical theism or some other particular religious tradition? Is it theodicy? Is it "science" (but that seems unlikely)? Is it something about the Bible, the exposures of historical criticism perhaps? Is it something about belief in the spiritual or transcendent as such?
I'm genuinely interested. Nothing would be more conducive to mutual learning between believers and nonbelievers, or to theological reflection on the part of Christians, than understanding the actual reasons why such learned and influential thinkers reject the claims of faith, or at least hold them at arm's length.
I suppose the hunch I harbor—which I don't intend pejoratively, but which animates my asking—is that there do not exist robust, articulable moral or philosophical reasons "why not," but only something like Scruton's phrase above: they, and others like them, are "those for whom the passing of God from the world is felt as a reality." But is that enough? If so, why? Given the world's continued recourse to and reliance on faith, and a sufficient number of thoughtful, educated, and scholarly believers (not to mention theologians!) in the secularized West, it seems to me that an account of the "why not" is called for and would be richly productive.
But then, maybe all of them have done just this, and I speak from ignorance of their answers. If so, I readily welcome being put in my place.
Update: A kind reader on Twitter pointed me to this essay by Scialabba: "An Honest Believer," Agni (No. 26, 1988). It's lovely, and gives you a good deal of Scialabba's intellectual and existential wrestling with his loss of Catholic faith in his 20s. I confess I remain, and perhaps forever will be, perplexed by the ubiquitous, apparently self-evident reference to "modern/ity" as a coherent and self-evidently true and good thing to be/embrace; but that is neither here nor there at the moment.
Webster on Barth's engagement with philosophy
—John Webster, Barth, 2nd ed. (New York: T&T Clark, 2000, 2004), p. 174