Around Thanksgiving 2022, an article in the New York Times about the release of a prototype of a new AI program called ChatGPT caught my attention. The write-up in the Times left little doubt about the transformative potential of this AI technology; and so, during the early weeks when anyone could try out this latest technological marvel (and in so doing provide its engineers with valuable data), I decided to feed ChatGPT some garden-variety essay prompts such as might conceivably be handed to undergraduates in the humanities: for example, “What were the main consequences of the Russian Revolution?” and “Offer an account of Aristotle’s theory of tragedy” and “What are the principal features of conversion narratives?” With one exception, the results were delivered in less than a second, and the writing always had a boilerplate, “safe” feel to it that, at least in my experience, has become increasingly the norm for our current, risk-averse student generation. Only one essay prompt (“Write a detailed character analysis of Alyosha in The Brothers Karamazov”) appeared to challenge ChatGPT’s algorithmic capacities at the time. Yet after about 15 seconds (an eternity in the nanosecond world of AI), there it was again: another perfect specimen of the B+ essay: its prose grammatically correct, if unimaginative; its organization unobjectionable, if pedestrian; and its claims generally on target, albeit verging on the anodyne and (as yet) lacking substantiation with quotations, a shortcoming that, I understand, has since been remedied.
Like countless teachers in secondary and higher education, I found myself pondering ChatGPT’s ominous implications for the intimately entwined processes of thinking and writing, and AI’s likely corrosive impact on learning motivation and students’ powers of attention. Some six months on, with higher education now toiling under the more intense and near-ubiquitous glare of ChatGPT version 4, I decided it was time for a robust discussion with my undergraduates about AI’s impact on their academic careers and their evolution as learners. The night before, in a transient fit of inspiration, I sat down and drafted a memo intended to frame the conversation in specific ways. In particular, I wanted to avoid limiting ourselves to the all-too-obvious topics that routinely dominate public debate of AI: its “time-saving” utility, on the one hand, and its evident threat to academic integrity, on the other. To be sure, half-hearted attempts at specifying formal-ethical guidelines within which future student use of AI ought to abide are a noble undertaking, even if by now there’s a whiff of futility about them. Thus, with the AI genie definitely out of the bottle, it seemed more worthwhile for our classroom discussion to attend to the insidious ways in which AI threatens to erode a student’s very sense of personhood. Here, in slightly revised form, is the ten-point memo I shared with students:
Of late, and with good reason, there’s been much talk about how AI threatens to render human beings increasingly obsolete. Yet the concerns, even fears, often expressed about this technology seem to be framed mainly in terms of efficiencies, as if AI were primarily a computational or economic challenge. No doubt, it is that, too, and already there are many areas where the efficiency of AI algorithmic systems far exceeds that of human beings (supply-chain logistics, commodities trading, chess, etc.). Yet what does the growing reliance on AI across a vast swathe of daily activities portend for us, considered not as economic agents but as human persons?
Take a simple example, no doubt quite familiar: an assignment is handed to you; the deadline specified is 4 weeks away; and as always you have a lot of things on your plate.
- Now, since this assignment requires reading for comprehension, concise plot summary, and critical evaluation (e.g., in a humanities course), you feel comfortable postponing work on it, knowing full well that ChatGPT is always available to bail you out should it come to that. Already, then, AI has altered your behavior by establishing an implicit dependency, if not on its actual use then at least insofar as its mere availability allows you to look upon the assignment less as a challenge to you as a learner, and more as a task to be managed.
- Now that the deadline is just 2 weeks away, you realize that the text(s) you had planned to read and study in depth are too long and/or complex for that to be done adequately. Earning an “A” is imperative (or so you think), and with the available time rapidly shrinking and new tasks interposing themselves, you start relying on AI at least to offer plot summaries of texts you had meant to read. At this point, you are actively ceding agency to AI, which means that you are not learning the materials but, at most, digesting what the software generates for you.
- With the deadline just 5 days away, you rely on AI to locate summaries/abstracts of some required, secondary literature on the primary text. Finally, in what may seem an act of cunning or desperation (or one masking the other?), you feed the assignment prompts into ChatGPT and stitch together the results in ways that—to your caffeine-addled, sleep-deprived brain—seem to amount to a “solid” paper. You click submit at 11:58 pm. Finis operis!
- Now, if we ask what has been lost by proceeding in this way, some answers readily suggest themselves; others are less apparent but, I’d argue, of great consequence. It goes without saying that academic integrity has fallen by the wayside, but that’s arguably the least of it. Far more significant is the fact that you have not actually learned anything, have not internalized any of the primary, let alone secondary, materials.
- Less obvious but, I’d argue, far more consequential is the fact that, in so proceeding, you have relinquished personal agency and the experience of intellectual achievement and growth, which can only ever be the fruit of sustained personal effort. After all, true learning is always a process in time. We remember what we have learned in no small measure because of the process, indeed the struggle, involved in mastering a certain task. It is precisely that effort which allows its fruits to become lastingly embedded within our consciousness. In the absence of any process, eclipsed as it is by the instantaneous results of AI, we no longer build memories; and as a result, what used to be called “knowledge” reduces to the perpetual, algorithmically managed retrieval of mere information.
- Large-scale reliance on AI not only attenuates the contents of what we know (or think we know); it also forecloses the ways in which the traditional process of learning reveals to you who you are as a person. That is, in the absence of a sustained wrestling with the materials on which the assignment had asked you to focus, you have not learned anything new about your aptitudes or weaknesses, let alone your intellectual passions, which only an active struggle with the assignment can ever throw into proper relief. Worse yet, you have not experienced any of the joy that uniquely arises from the very process of learning, from discovering one’s gifts, and from beholding the finished product of one’s personal effort.
- Now, looking ahead to the next generation of students (present-day 11th and 12th graders and younger cohorts) that will enter college in the coming years, it’s virtually guaranteed that the vast majority of them will arrive on campuses thoroughly habituated to AI shortcuts. Thus, there’s ample reason to expect that students graduating college after, say, 2025 will have a greatly diminished sense of who they are at the end of what should have been their most formative years. They will not have experienced their time in college as a development but, instead, as a relentless series of logistical challenges. What used to be a process of learning and intellectual and personal growth will have mutated into a barrage of disjointed problems in whose “solution” students, now routinely drawing on AI, will no longer recognize their own personal achievement and growth. Instead, future undergraduates are bound to experience their education primarily as a matter of compliance, of completing assigned tasks and projects that, precisely because they are now largely being discharged by AI, will seem for the most part meaningless or altogether incomprehensible.
- Extrapolating further, it’s not hard to imagine a point at which AI will cease to be a mere tool shaped and controlled by its human users. Instead, AI may find it increasingly pointless to have its tasks defined for it by computationally inefficient and generally unreliable bipeds. Already, the incalculable impact of this technology has prompted calls for a worldwide moratorium by some of its pioneering designers. Yet precisely because such coordinated action seems altogether unlikely, it is incumbent on each of us as an individual to ask: is it ever truly defensible to delegate to AI tasks that, if we were to undertake them, would allow us to grow, to become more aware of our potential, our passions, and aptitudes, and thus to expand the scope of meaningful and significant experience?
- Put differently, if we are really worried about a future dominated by AI to the point that it threatens to render us economically obsolete, shouldn’t we be far more worried still about AI atrophying our sense of who we are as human persons? That is, every time we decide, be it for tactical or purely opportunistic reasons (e.g., time constraints, high grades, extra credentials), to entrust AI with tasks that were specifically assigned to us, we relinquish another opportunity to deepen our understanding of who we are. Persisting on this path for any length of time is bound to result in a pervasive feeling of anomie, a dissociative state in which the meaning of our very existence recedes from view, mainly because we are no longer actively shaping or experiencing meanings. Instead, we’ve actively conspired in being reduced to mere relay stations for marketable information whose acquisition and circulation is dominated by considerations of efficiency and social competition. Henceforth, the very idea of education as a progressive formation of the human person will have been supplanted by one in which the individual student is but a variable in a competitive, impersonal process of professional credentialing.
- It seems only apt, here, to recall Dante’s characterization of hell as a place of total and eternal “self-entrapment,” a place or, rather, a state of mind that leaves each individual wholly isolated. More than anything, what defines those caught up in Dante’s Inferno is their total lack of self-recognition and their consequent inability to transcend their current, fallen condition. They no longer have any narrative to offer, for that would require a telos, a meaningful and significant vision of human flourishing such as can be realized only where it is generously and lovingly shared with others. Alas, like the residents of the Inferno, contemporary learners no longer conceive of knowledge as integrally related to both personal and communal flourishing but only as proprietary information; and once knowledge is appraised solely with a view to algorithmic efficiencies and socioeconomic competition, its “producers” (again, like Dante’s massa damnata) can no longer imagine a higher, normative good beyond their fluctuating, (pre-)professional interests. Having abandoned any sense of personal and moral formation, they will end up trapped in a world in which the only story that remains, endlessly repeated, is the one told by a professional resumé, one whose utter hollowness the arrival of advanced AI technologies now throws into stark relief. Inexorably, the story of AI will tell how human beings in the 21st century ended up engineering their own obsolescence or (in Dante’s theological parlance) their damnation. In the Florentine’s terminology, the sweeping and unthinking surrender of responsible agency to AI systems currently unfolding will be both our ultimate sin and our contrapasso.