A Conversation with Meghan O’Gieblyn
NO BOOK THAT I’m aware of is quite like Meghan O’Gieblyn’s God, Human, Animal, Machine. Her omnivorous interests range over philosophy of mind, historical accounts of religious disenchantment, and the theological basis of transhumanist ideology, all in the service of analyzing how cultural metaphors for individuality have evolved over the centuries. Human beings have been described as both clocks and computers, and O’Gieblyn examines the perils of such thinking. Readers never lose sight of O’Gieblyn herself as a personality, even as she brings to bear subjects as diverse as quantum mechanics, Calvinism, and Dostoyevsky’s existentialism. Throughout the book, she is a brilliant interlocutor who presents complex theories, disciplines, arguments, and ideas with seeming ease.
Every review, interview, or profile of O’Gieblyn mentions that she was raised as an evangelical and attended the Moody Bible Institute, and I’m not going to break that pattern. Part of this interest is that for many secular readers, whose most extreme encounters with religion are mainline Protestantism, cafeteria Catholicism, or Reform Judaism, there’s something positively exotic about somebody who spent time enmeshed in fundamentalism. A familiarity with scripture and theology isn’t the source of O’Gieblyn’s talent, but it has provided a useful perspective for recognizing the enduring cultural patterns that others might gloss over. “To leave a religious tradition in the twenty-first century is to experience the trauma of secularization,” O’Gieblyn writes, “a process that spanned several centuries and that most of humanity endured with all the attentiveness of slow boiling toads — in an instant.”
Such a position makes O’Gieblyn a well-suited practitioner of what philosopher Costica Bradatan and I call the “New Religion Journalism” (indeed she is included in our coedited anthology The God Beat: What Journalism Says About Faith and Why It Matters), which reports on issues related to faith beyond the binary of belief and disbelief. Such an orientation was clear in O’Gieblyn’s previous collection Interior States, which explored topics as wide-ranging as Midwestern identity, addiction treatment, and evangelical kitsch culture, as well as the topic of technology, which she takes up more fully in her latest book. God, Human, Animal, Machine isn’t a maturation of Interior States — that collection was already fully grown. But it does provide an opportunity to see a singular mind work through some of the most complex problems in metaphysics, and to leave some room for agnosticism.
In its depth, breadth, and facility for complicated philosophical conjecture, O’Gieblyn’s writing calls to mind those countercultural doorstoppers of a generation ago, like Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid or Gary Zukav’s The Dancing Wu Li Masters: An Overview of the New Physics, though stripped of all their cringe-y affectations and inflected with a skeptical perspective that displays deep humility. “I am a personal writer, though it’s an identity I’ve always felt conflicted about,” O’Gieblyn writes in God, Human, Animal, Machine, but it’s to her readers’ benefit that she’s grappled with this discomfort, because the resulting book is nothing less than an account of not just how the mind interacts with the world, but how we can begin to ask that question in the first place. I was fortunate enough to have a conversation with O’Gieblyn through email over several days in early autumn of 2021.
¤
ED SIMON: Much of God, Human, Animal, Machine deals with the contested historical hypothesis that modernity is a long account of disenchantment. How useful do you still find that concept to be? Is there a model to move beyond contemporary secularity? Is re-enchantment or neo-enchantment even possible? Would we want those things even if they were?
MEGHAN O’GIEBLYN: I’m interested in how disenchantment narratives function as modern mythology — a kind of origin story of how we came to occupy the present, or as an explanation (maybe even a theodicy) of what went wrong with modernity, which usually has something to do with the dominance of science and technology. We often think about the longing for re-enchantment as an eschewal of science and reason, which is to say a form of nostalgia or regression. What I end up exploring in the book, though, is how science and technology are often drawn into the project of re-enchantment. In the philosophy of mind, there’s been a lot of enthusiasm lately for panpsychism — the idea that all matter is conscious — which was for a long time on the fringe of consciousness studies. Or you could look to the rise of social AI, like Alexa or Siri, and the pervasiveness of smart technologies. The fact that we’re increasingly interacting socially with inanimate objects recalls an animist cosmology where ordinary tools are inhabited by spirits and humans maintain reciprocal relationships with them.
I think all of us are exhausted by anthropocentrism. It’s nice to envision a world where we aren’t the only conscious beings lording over dead matter. And there’s a very simplistic critique of the disenchantment thesis that argues that science and technology are just as awe-inspiring as the spiritual doctrines they’ve displaced. Bruno Latour said something along these lines in the early ’90s: “Is Boyle’s air pump any less strange than the Arapesh spirit houses?” But the trauma of disenchantment isn’t just the lack of magic or wonder in the world. What’s so destabilizing about disenchantment — and I say this as someone who experienced it very acutely in my own deconversion — is the fact that the world, without a religious framework, is devoid of intrinsic purpose and meaning. And that’s something that can’t (or rather shouldn’t) be addressed by technical and scientific disciplines.
One of the things that struck me, especially with your analysis of metaphor and the way that it’s shifted over time, is how images used by the innovators of cybernetics two generations ago have become literalized in the thinking of many tech enthusiasts. The language whereby the brain is “hardware” and the mind is “software” has moved from lyrical shorthand into an almost de facto assumption, and that literalization has fed Silicon Valley’s utopian enthusiasm. You talk a lot in the book about how transhumanism reinscribes theological thinking into a secular framework. Could you talk a bit about what exactly transhumanism is, and its approach to disenchantment? And does it proffer any meaning or purpose, or is it basically offering a more sophisticated version of Siri and Alexa?
When the brain-computer metaphor first emerged in the 1940s, its purpose was to move past the haunted legacy of Cartesian dualism, which conflated the mind with the immaterial soul, and to describe the brain as a machine that operates according to the laws of classical physics — something that could be studied in a lab. Over time, this metaphor became more elaborate, with the mind being “software” to the brain’s “hardware,” and it also became more literal. The mind wasn’t “like” software, it really was software. The irony is that the metaphor reinscribed dualism. Software is information, which is just immaterial patterns, abstraction — like the soul. Around the ’80s and ’90s, some Silicon Valley types started speculating that, well, if we can transfer software from one machine to another, can’t we also transport our minds? This has become a preoccupation within transhumanism, which is a utopian ideology centered around the belief that we can use technology to further our evolution into another species of technological beings — what they call “posthumans.”
One of the scenarios transhumanists often discuss is digital resurrection — the idea that we can upload our minds to supercomputers, or to the cloud, so that we can live forever. An ongoing question is whether the pattern of consciousness can persist apart from a specific body (whether it’s “substrate independent”) or whether the body is crucial to identity. This is precisely the debate that the church fathers were having in the third and fourth centuries. There was the Greek view that the afterlife could be purely spiritual and disembodied, and then there were those, like Tertullian of Carthage, who argued that resurrection had to reconstruct the entire, original body. So the brain-computer metaphor, which started as a disenchantment effort, ended up reviving all these very old, essentially theological, questions about immortality and eternal life. And these projects are now being actively pursued — Elon Musk’s Neuralink is one recent example. I don’t know, though, whether this is offering people the sense of meaning or narrative purpose that can be found in the Christian prophecies. It seems to be speaking to a more elemental fear of death.
In God, Human, Animal, Machine, you make a convincing argument that so much of this kind of transhumanist thought is a kind of secularized eschatology, sublimated religious yearning dressed up in materialist garb that isn’t quite as materialist as it thinks it is. Were these sorts of connections immediately obvious to you as somebody who’d been an evangelical Christian and gone through your own process of disenchantment? And how do people you’ve talked to who hold some sort of transhumanist belief react to the observation that theirs is a kind of faith in a different form?
No, the religious dimension of transhumanism was not apparent to me when I first encountered it, which was a few years after I left Bible school and started identifying as an atheist. It’s clear to me now that I was attracted to these narratives because they were offering the same promises I’d recently abandoned — not merely immortality, but a belief in a future, in the culmination of history. Maybe I was still too close to Christianity to sense the resonance. But it’s also true that people who subscribe to these techno-utopian ideologies tend to be militant atheists — or they were, at least, in the early 2000s. Nick Bostrom, in his history of transhumanism, acknowledges that the movement shares some superficial similarities with religious traditions, but he emphasizes that transhumanism is based on reason and science. It’s not appealing to divine authority. The technologies needed for digital immortality and resurrection are speculative, but theoretically plausible — they don’t require anything supernatural. Bostrom attributes the movement’s origins primarily to the Renaissance humanism of Francis Bacon and Giovanni Pico della Mirandola.
It wasn’t until years after my initial exposure to transhumanism that I became interested in the possible religious origins of these ideas. The first use of the word “transhuman” in English appears in an early translation of Dante’s Paradiso, in a passage describing the transformation of the resurrected body. Once I started doing research, it became clear that the intellectual lineage of transhumanism could be traced back to Christian thinkers who believed technology would fulfill biblical prophecies. This includes Nikolai Fyodorov, a 19th-century Russian Orthodox philosopher, who taught that the resurrection would be enacted through scientific advances. It includes Pierre Teilhard de Chardin, a French Jesuit who predicted in the 1950s that global communication networks would eventually succeed in merging human consciousness with the divine mind, fulfilling the parousia, or the Second Coming. He called it “the Omega Point,” a precursor to what’s now known as the singularity.
It’s fascinating that Bostrom identified Pico della Mirandola with transhumanism, because though the Florentine philosopher wasn’t an orthodox Christian, he was profoundly indebted to all kinds of hermetic, occult, kabbalistic, and Platonist ideas that seem so clearly theological in nature, even if they’d have been considered heretical. The critic James Simpson describes something he calls “cultural etymology,” and God, Human, Animal, Machine seemed very much in that tradition to me. How much of your cultural criticism do you see as an act of excavating these deep histories, the ways in which we’re still influenced by what we’ve been told are inert ideologies? And is there any privileged position from which we can actually move beyond them — would we even want to?
I’ve never come across the term “cultural etymology,” but that’s a great way to describe what the book is doing. I have a somewhat obsessive curiosity about the lineage of ideas, which might be the result of being taught, for most of my early life, that truth came from God, ex nihilo. When I dropped out of Bible school, I ended up reading a lot of secular scholarship about Christianity, especially Christian fundamentalism. It was fascinating to see how these ideas that had once seemed unquestionable were the product of human thought, and how they had emerged, in some cases, from unexpected places. The book is an attempt to do the same thing with technological narratives; I’m trying to uncover where these problems and assumptions came from, in the first place, and how they intersect with older traditions of thought.
As for whether we can move beyond these old ideologies, I don’t know whether we can or should. I wasn’t really thinking about that question during the writing process. But, I suppose, what worries me most is the possibility of these quasi-religious impulses being exploited by multinational tech corporations. These companies are already fond of idealistic mission statements and expansive rhetoric about making the world better. A few years ago, a former Google engineer, Anthony Levandowski, announced that he was starting a church devoted to “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence.” It was widely regarded as a publicity stunt, but it’s not impossible to imagine a future in which these corporations develop explicitly spiritual aspirations.
One of the things that struck me when reading your book was how much of the early days of computer technology had this countercultural impulse, a very California-in-the-’60s, Whole Earth Catalog kind of vibe, where quasi-occult thinking didn’t necessarily have the same dystopian feel that it does in Silicon Valley right now. A benefit of the cultural etymology you practice is that it lets us see what’s insidious about something like Levandowski’s stunt, where the theological impulse is married to Mammon, as it were. In God, Human, Animal, Machine, you recount how you’re inevitably queried about what metaphors might be more sustaining than these technological ones. Is there a better, or more hopeful, or maybe just more helpful set of metaphors that we could embrace? And if so, how do we even get there?
That’s a great question. It’s funny you ask it, in fact, because just yesterday I was listening to a podcast interview with a Buddhist scholar who was talking about consciousness. He argued that the mind is not actually in the brain; it belongs to a primal flow of consciousness. Then he went on to compare this primal flow to a hard drive. The brain was like the computer keyboard, he said, and the mind, this flow of consciousness, was the underlying substrate, or hard drive. I mention this just to point out how pervasive these technological metaphors are. Even people who are critiquing the reductive framework such metaphors support, or offering a wildly different explanation of consciousness, still draw on computational imagery.
We’re inevitably going to move beyond the computer metaphor at some point, when new technologies emerge, or when there is some larger paradigm shift in theories of mind. People have already proposed new metaphors, though most are just based on other technologies (the brain functions like blockchain, or a quantum computer, etc.). I don’t think we’re ever going to slough off, entirely, our need for metaphors, particularly when it comes to those mysteries that seem to push up against the parameters of the scientific method — which is to say, consciousness and some of the problems in quantum physics, like the observer effect. Niels Bohr once said that spiritual truths are often conveyed in parables and koans because they are fundamentally beyond human understanding, and these metaphors are a way to translate them into a framework that our minds can comprehend. I think the same is true of scientific mysteries; we need these images and analogies, which can be helpful so long as they are recognized as mental tools. When religious metaphors are taken literally, it becomes fundamentalism. And fundamentalism can creep into science and philosophy as well.
As a writer, I wonder if you could speak to the utility of literary or compositional metaphors for consciousness. Or maybe, even more broadly, about the ways in which writing is a mirror of the mind. I’ve always imbibed the sentiment that something like the novel or the essay is a way of replicating consciousness in a manner that nothing else is able to do. I am thinking about Joyce in Ulysses, or Woolf in Mrs. Dalloway, or Flaubert, or James, and so on. When you consider Nagel’s famous philosophy of mind essay asking what it would be like to be a bat, I sometimes joke that if the bat could write a novel, we’d know. Especially as a writer who very firmly embraces the power of the first person, what does literature have to tell us about theories of mind that philosophy maybe can’t?
I’d love to read that novel authored by a bat! It was difficult to avoid thinking about these kinds of self-referential questions during the writing process. At some point, the irony struck me that I was writing a book in first person that was about the slippery and potentially illusory nature of the self. Being a writer certainly gives you a strange glimpse into the functions of consciousness. On one hand, as a personal writer, I’m acutely aware that the self is a construct, that I’m deliberately crafting a voice and a persona on the page. So I’m intuitively sympathetic to those philosophers who point out that the continuous self is a kind of mirage. On the other hand, I’ve always felt that writing is the objectification of my mind, or maybe even proof of its existence. When you are repeatedly transmuting your thoughts into material substance (essays, books), it’s very difficult to believe that the mind is an illusion. Maybe all of us writers are just trying to externalize our minds, or to make concrete our ineffable interior experience — in which case, the writerly impulse might not be so different from the desire to upload one’s consciousness to a computer, which is another way to export the self or solidify the elusive pattern of the mind.
I like what you said about how novels and essays are able to replicate consciousness unlike other mediums. Part of the reason I was initially drawn to writing is because I was eager to live in someone else’s head, to see how other people see the world. What’s interesting is that there’s now natural language processing software that can write short stories, poems, and personal essays. It’s only a matter of time before we have novels written by algorithms. I wonder how that will change the way we think about our own minds and the intimate transaction between writer and reader. If a machine can convincingly simulate consciousness in writing (and they are getting very close), what does that say about the origins of the words we ourselves are putting down? What does it say about the reality of our own minds?
I’ve always been fascinated by the idea of AI-written literature. A few years ago, I wrote an essay for Berfrois in which I close-read a poem that was generated by a bot. I always go back and forth on this; I personally see writing as an embodiment of our minds, and I absolutely agree with you that, for writers at least, the process is almost equivalent to thinking. But then I’ve got this enthusiasm for a very hard, formalist, old-fashioned New Critical pose where it’s all about the words on the page, and the author is almost incidental. I guess it’s almost like literary panpsychism — the text, or the algorithm that generated it, is more conscious than me! If I can ask you to play the role of prognosticator, do you think sophisticated AI-generated novels are on the horizon? How will writers, critics, and readers respond? What would that look like?
Yes, I’m very much familiar with that feeling that the text is more conscious than I am — or that it has its own agenda, its own goals. One of the chapters of the book is about emergent phenomena — the fact that, in complex systems, new features can emerge autonomously that were not explicitly designed. An algorithm built to write poems spontaneously learns how to do math, for example. Books are complex systems, too, so the person who’s building them, the writer, can’t always foresee the ripple effects of their choices. There’s no way you can anticipate everything you’re going to say, word for word. What you write often surprises you. It’s a very concrete, technical phenomenon when you think about it that way. But in the moment, it feels almost mystical, like the writing is evolving its own intelligence, or that something else is speaking through you.
As far as the prospect of AI novels — it’s hard to say. The short fiction I’ve read by GPT-3, one of the most sophisticated programs of this sort, approaches the threshold of bad MFA fiction, which is to say it’s proficient but very formulaic. I’m instinctively skeptical that a machine will ever produce a literary masterpiece that simulates the on-the-page consciousness of someone like Joyce or Woolf. But then again, few human novelists these days are trying to do that, either. Many contemporary writers are content to describe characters externally, forgoing interiority entirely. If an algorithm does eventually manage to write a novel that passes for human-authored, it might say more about our impoverished expectations for literary consciousness than it does about the creative capacities of these machines.
One of the things that I’m curious about with emergent systems, especially when we think about artificial intelligence, is the unexpected ways in which the internet and social media have altered the way we perceive the world. When you write that “[a]ll of us are anxious and overworked. We are alienated from one another, and the days are long and filled with many lonely hours,” I read it in the context of your other observation about how, on a platform like Twitter, it’s easy to begin to feel less like an individual and more like a node in some sort of omni-mind, and all that’s alienating about that. You address this dystopian aspect of technology so well, analyzing not just human-machine interaction but surveillance capitalism as well. You write that the techno-utopians of Silicon Valley treat this system almost as a God — but unfortunately it’s Calvin’s God in algorithmic form. In what sense do you think that something like the internet is conscious? Where do we go from here?
Some philosophers have argued that the internet might be conscious, or that we can, for practical purposes, treat it that way, regarding political movements and viral sensations as emergent features of this larger intelligence. This obviously bears a lot of similarities to the idea, found in many mystical traditions, that the individual ego can meld into some larger substance — God, Brahman, the universe as a whole. But for the user, it doesn’t feel especially transcendent. In those passages about alienation, I was trying to describe how people I know and love become unrecognizable when they start speaking the language of social platforms. When I’m heavily immersed in those spaces, I sometimes become unrecognizable to myself.
This is the way that algorithms see us — collectively, as a single organism. And some people in the tech world have argued that this algorithmic perspective, which can better perceive the world at scale, is truer than anything we can see on the ground. One of the more unsettling trends I discuss in the book is this very illiberal strain of rhetoric that emerged alongside the deep learning revolution, one that argued that we should just submit to these algorithmic predictions without trying to understand why or how they draw their conclusions. Because many of them are black box technologies, they demand that we take their output blindly, on faith. Many tech critics have drawn on theological metaphors, arguing that the algorithms are “godlike” or “omniscient.”
The algorithms, of course, are not yet truly omniscient. They are deeply flawed. But I’m interested in looking down the road a bit. If we do someday build technological superintelligence, or AGI, then what? Once knowledge becomes detached from theory and scientific inquiry, we’re going to have to decide what to keep from the humanist tradition. Is knowledge the ultimate goal, such that we’re willing to obtain it at any cost, even if it’s handed down, as revelation, from unconscious technologies we cannot understand? Or are we going to insist on the value and autonomy of human thought, with all its limitations? For me, the latter is far more valuable. Knowledge cannot mean anything to us unless it’s forged in a human context and understood from a human vantage point.
¤
Ed Simon is a staff writer at The Millions.