What humans are

The claim of science and technology to expand the capacity of the human person for life and happiness is basically fraudulent, because technological society is not the least interested in values, still less in persons: it is concerned purely and simply with the functioning of its own processes. Human beings are used merely as means to this end, and the one significant question it asks in their regard is not who they are but how they can be most efficiently used.

~Thomas Merton, OCSO, “The Other Side of Despair”


When I ask a seminar roomful of undergraduate students a question, sometimes they give me sparse and meandering answers. Sometimes they set off like fabulous rockets that lead me to new insights into texts I’ve read thirty or forty times before. And sometimes they don’t say anything.

They sit, reposition themselves, avoid eye contact.

Wait me out.

Never have I been more thrilled with this last option than this spring term. Why? Because when they refuse my attempts at dialogue, they show me, their fellow students, and themselves that they have a fundamental power that hides in plain sight and makes them more powerful (and more interesting) than any venture capitalist-backed chatbot: they can discern their own desire to answer me and simply refuse to engage. With that simple act, they show themselves to be of a wholly different order from the latest installment of machines and applications that Silicon Valley has decided to throw at the world.

Over the past year the “artificial intelligence” hype has been pretty relentless. (I am reluctant to use the terms “artificial intelligence” or “AI” because they don’t refer to any particular thing; they are hype terms coined by early computer engineers and now forced on the world by Big Tech promoters. And though “artificial intelligence” covers all manner of algorithmic computer applications, I focus here on the “generative” large language model-based chatbots like OpenAI’s ChatGPT and Google’s Gemini.) Massively powerful companies and spokespeople with outsized influence over what is left of the public sphere spread the hype so well that for the last year it was hard not to hear the letters “ai” in an advertisement, however vague those ads were about what on earth the “ai” on offer was or how the company concerned used it to further its services.

Even as the hype has dissipated somewhat recently, the absurdity of Big Tech in assuming the inevitability of whatever it holds to be important persists. For example, Sam Altman, OpenAI’s CEO, acknowledged back in January at the World Economic Forum that ChatGPT is “extremely limited” and that “ai” had “been somewhat demystified” over the last year. Yet those conclusions didn’t prevent him from saying at another event that “ai” will be used in all kinds of ways and make everyone more productive. (No details were forthcoming on which problems it would help with, or how.) He also noted in this later interview how massive the energy demands will be once everyone starts integrating “ai” into their lives. No question that this is simply what has to be.

But the arrogance and self-righteousness (and ecological and social nonchalance) of Big Tech are an old song and dance. And just as the old problems of any new technology proliferating broadly are with us, there are new ones too. Joseph Vukov incisively discussed new problems in the political sphere in the pages of Living City Magazine. His focus there was the threats the new llm-based chatbots pose to “many crucial institutions,” and rightly so. But here I want to draw back a little further from public spaces and institutions to our inner lives and the lives we share in creation prior to our active participation in institutions. (I realize we can look at human life as always-already shaped by institutions, and I agree with that broadly, but hopefully, without succumbing to an idealized “state of nature” stance, we can agree that there are life processes and consciousness that are the organism’s own even as they are socially implicated.) In that vein, I want to draw our attention to the new generative llm-based chatbots as technology that for some people really does trouble the boundary between the human and the machine. The “ai” moment has raised not just urgent questions about our institutional lives and the way we consume media but existential questions about what it is to be alive and to live a human life well.

If a chatbot can provide answers to prompts that many humans couldn’t produce, using language we humans recognize as an intrinsic aspect of what it is to be human, has a boundary been crossed? Are the machines as smart as us now? Are they, in fact, smarter? When Altman recently reflected (13:04-13:25) with characteristic wistfulness on how OpenAI was discovering “—I don’t believe this literally, but it’s like a spiritual point—” that “intelligence is just this emergent property of matter. It’s like a rule of physics or something,” was he right or naively inviting Tower of Babel and Dr. Faustus comparisons? While some of us may scoff at the notion that a chatbot could possibly be or become a conscious agent, the stated goal of OpenAI is to produce what they call “AGI” (“Artificial General Intelligence”), which they tout as just that. Also, you don’t have to look around the internet (or an undergraduate classroom) too hard to find folks fervent in their belief that chatbots will get there, if they’re not already arriving.

[Poster image from Ex Machina]

These questions are indicators not so much of the future of evolution or of a robot overlord dystopia but rather of the impoverished view many of us now have of those mammals we call humans, and, further, of the dynamic, irreducible process we call life. The French philosopher Gabriel Marcel described this view presciently decades ago when he observed that in our technique-obsessed society, “There is a danger of the technical environment becoming for us the pattern of the universe, that is to say, the categories of its particular structure being claimed to be valid for an objective conception of the world” (The Decline of Wisdom 13). Resistance to this impoverished view of life and of the human is a crucial place in which the Catholic intellectual tradition can intervene and provide support for navigating the social, economic, and now virtual worlds in which many of us spend lots of our time. This intervention and support can help those within the Church, but it may also be helpful to those outside the traditional boundaries of the faith. In particular, the Catholic intellectual tradition can articulate what the differences are for those of us who refuse to see chatbots’ chatter and digital images as the “same thing” as what humans do.

I hear students and colleagues say that the text and images produced by these applications don’t have “that human touch” or “just aren’t convincing.” And I agree, but these are also pretty vague ways of describing our reservations. They also leave us open to a “gotcha” moment down the road: if we rest on the sentimental or “gut” response to these technologies, and if the chatbots do indeed get so much better at producing responses that they are genuinely indistinguishable from what a human would write or make visually (and that’s a real if), we might then have to admit that the machine has reached human-level intelligence. But we don’t have to accept those amorphous intuitions as the only defense against the demotion of the human in favor of the computer. (Yet the intuition raises a point to which I’ll return.)

But first, how did we get to a place where serious people could even question whether humans and computers are actually doing the same thing at all? As cognitive science developed through the latter half of the twentieth century, moving away from behaviorism, it had to find ways, as a scientific endeavor, to quantify behavior and all that the brain does. Those working on “machine learning” also needed ways of thinking about intelligence that were conducive to reproducibility and, frankly, to practical and measurable outcomes. And so, over the twentieth century, discourse moved away from “the intellect” and toward “intelligence,” and not just “intelligence” but a definition of it that reduced it more or less to “problem solving.” There is also the long tale of thinking of animals and our own bodies as “machines,” beginning with Descartes, and the long march of “technique”—the rationalization of processes toward ever greater efficiency—that began in the Enlightenment and got a massive shot in the practical arm from the Industrial Revolution. In a world like ours, which valorizes efficiency, productivity, and profit and routinely reduces all processes, conceptually, to mechanisms, it is difficult to see organisms as vital and spontaneous creatures and the non-utilitarian aspects of our lives as the most important.

These critiques of our collective and individual lives are in some ways generations old at this point. In our present moment what seems most urgent is a coherent way of understanding the human mind and “intelligence” that does not reduce these to problem solving. If “intelligence” simply means the capacity to solve problems through activity (however initiated), then sure, humans, animals, machines, even plants, I suppose, are “intelligent.” But the Catholic view of the human mind (which is really the collective cognitive powers of the soul) holds the intellectus (the intellect or understanding) as a primary aspect of who we are that distinguishes us from other organisms and, a fortiori, machines. There is a part of the tradition that valorizes the ratio, the reason, above all our other capacities. But even here, the medieval ratio is a more capacious power than the Enlightenment’s instrumental rationality or the modern world’s problem-solving power. There it is a shorthand for the power to discern and to judge in addition to simply “figuring things out” with logic.

The clearest view of the intellectus, perhaps, is that it is the cognitive faculty that understands (when we apprehend reality simply), reasons (when we think discursively about what we understand through apprehension), and judges (when we discern what is best to do given what we have apprehended and reasoned about). While human reasoning and judging are of course nearly miraculous processes seen even from an evolutionary standpoint, there is such richness and thickness in human cognitive experience before we even get to these activities.

One way to think about this: when you sit under a tree and perceive the world but also understand what it is to sit under that tree and feel the breeze and the sun, to hear the insects around you and the cars driving by across the park, to see your legs crossed before you, to smell the grass, and to rest in that moment of present attention—when you understand that, what problem are you solving? And yet this is a cognitive act of an intelligent organism. And when you decide to lie down for the sheer enjoyment of it, the power to decide that does not have to concern a problem that was to be solved but rather the discerning of an opportunity for enjoyment that did not have to be: one that did not make up for a lack of some sort or overcome an obstacle.

This is a simple example of the gratuity in human life, but it is as much a cognitive gratuity as it is a physical or emotional gratuity. It is akin to my students simply not answering my question—a cognitive encounter with reality and a free decision. These are human acts, and they are acts of the human intellectus that then affect the whole organism as well as the world beyond the organism. (And this is not even to get onto the track of how “ai” is the next stage in the great deskilling of modern humans—one that strikes closer to the quick given its proximity to thought and verbal and artistic expression). These acts are not mechanisms. When I prompt ChatGPT, it cannot but respond, even if it produces a string of text that explains that it can’t respond to my prompt. Because it’s a machine; it actually works through mechanisms. We can think of ourselves as machines and as behaving mechanically all we like; we aren’t actually machines and don’t behave mechanically.

I’m not trying to say there can be no benefits to the medium we’re currently calling “ai.” (Though I will also not invoke the platitude that of course there will be great benefits of this technology; I’d be fine if all the “ai” businesses closed up shop and let us all get on with our lives too.) The point is that many of us have not had to think explicitly and deliberately about what it means to be a human, what the salient features of that reality are. But now that we have industry hype-folks able to show off their wares and say that the machine is doing basically what you do in your mind, or doing it better, we’re all in need of greater clarity about what indeed renders that account impoverished and even pitiful. There are folks inside the fields of machine learning and computer science doing very good critical work—Emily Bender, Timnit Gebru, Dan McQuillan, Erik Larson, and others—who attempt to make clear from within the world of computing how what chatbots do is not in fact what humans do. The most comprehensive works I’ve seen on all this are Nick Couldry and Ulises A. Mejias’s The Costs of Connection and Data Grab, written from a communication studies perspective that frames our current situation in terms of “data colonialism”; I can’t recommend these enough. I’m grateful for all their work.

And the Catholic intellectual tradition can help in this clarifying project too, bringing things to the table that many computer scientists simply won’t lean on, even simply in the area of cognitive philosophy (not crossing over into revealed truths like the human as imago dei etc.). I’m also seeing secular essays striving to articulate foundational understandings of the human as a bulwark against the tide of human-reducing “ai” talk—recently Tyler Austin Harper’s piece in The Atlantic, Shannon Vallor’s essay in Noēma, and Marc Watkins’s recent post on “ai” in education all approach the latest wave from Silicon Valley from this angle. As folks who aren’t drawing on the long tradition of western anthropology and western humanism continue to articulate some grounding for the difference between humans and machines, those of us who do not have scruples about drawing on ontologically rich thinking can offer something that I hope will be helpful to the conversation.

I realize many of us will not go read St. Thomas or Boethius—though might I suggest you do?—but there are resources within the reach of educated adults who are non-specialists. The Catholic Encyclopedia has entries on the different cognitive powers, and the new Vatican document Encountering AI (free PDF available online) devotes enlightening chapters to these questions. The Catechism of the Catholic Church explores the unique nature of human beings particularly in sections 355-68 and 1730-89. The works of Josef Pieper are also somewhat more accessible treatments that address our cognitive lives in broader context, especially Leisure: The Basis of Culture (re-issued by Ignatius Press) and Happiness and Contemplation (re-issued by St. Augustine’s Press).

But we could also use more resources that are geared toward our moment specifically. And so, I’ll draw toward a close with a call to Catholic theologians, philosophers, and spiritual teachers and directors to help those “in the pews” and the general public to grasp these realities in clearer ways. While we have a working “Theology of the Body,” we now need a working “Theology of the Living Human Organism, Body+Soul.” Marcel gave us an indication of this remedy for our ailments too. He says, “What I think we need today is to react with our whole strength against that dissociation of life from spirit which a bloodless rationalism has brought about…. Perhaps the most important task on the plane of speculation is to deepen once again the notion of life itself in the light of the highest and most genuine religious thought” (19).

Vukov concluded his recent essay for Living City by calling for a revolutionary refusal. I can’t echo that enough. And I will suggest a similar refusal, one that pushes beyond the ways we engage with the content of media toward the ways we walk on the earth more generally. For the growing capitulation to the “ai” hypemen’s claims does not come out of nowhere. We have habituated ourselves to enclosure: enclosure of our bodies in straight walls, climate control, straight flat roads, clothing, all manner of things; and enclosure of our minds in visual representations of reality that, through their movement and color, capture our attention in ways that visual art before electric mass media tended not to, and in the digital glow that many of us now carry around with us, from which we tend not to restrain our glances and deep gazes. I’m a fan of my house’s dry, warm interior, and I use a computer too. But if we take any comfort and convenience as equally good and fail to discriminate when the ease they provide may do us and others ill, we begin to live a life of less and less autonomy, dependent upon companies and their wares to feel okay for a moment, a day, a lifetime.

We train ourselves to live particular ways whether we intend to or not. You’re doing it right now, and so am I. When we resist, when we refuse, when we deliberately train ourselves to live particular ways, it is called discipline, or ascesis. Our culture is in deep need of discipline around our digital infrastructure and devices, and the strange capitulation to the supposed importance of “ai” and its inevitable saturation of our institutions and our lives is to me a dire sign of this fact. While some (many?) are simply out to profit, some (many?) seem genuinely to think these technologies are unproblematic and, more, presage some kind of blurring between life and machine. It’s time, or past time, to imagine creative ways not only of resisting or refusing participation in massively powerful corporations’ designs for our lives but also of fostering and embracing life wherever, however we can. It’s not enough to “put the ‘phone’ down.” What wonders await us when we do, and how will we recognize them? We must discern in prayer, in meditation, in contemplation of creation, of ourselves, and of the blessed Trinity to whom we will render an account of our lives. (Folks don’t invoke the Last Day enough anymore.)

If we are to resist the worst—and sometimes the most banal are indeed the worst—tendencies and assumptions of the “ai” moment, we need to educate ourselves on what a human being is, on the ways in which we are indeed far more “intelligent” than machines of whatever stripe, on all that living creatures are that machines can never be, and then educate our children and reach out to our neighbors to help them articulate what it is that makes them unique, powerful, beautiful—why it is that we can say that we are “wonderfully made” (Ps 139:14). And all this not in sentimental terms but in the most rigorous terms, which can resist the assault of a reductive and materialist techno-science backed by those looking to continue lining their pockets on the backs of people who are very often at the mercy of totalizing systems that promise freedom at the expense of their full humanity.

Jacob Riyeff

Jacob Riyeff is a translator, poet, and scholar whose work focuses on the western contemplative tradition. His most recent book is an edition of The Poems and Counsels on Prayer and Contemplation of the seventeenth-century Benedictine Dame Gertrude More. Jacob teaches in the English department at Marquette University and lives in Milwaukee, WI.
