The Devil On Our Shoulders
By George Porteous

According to the librarian who first inducted me as a visiting student at Oxford, the Radcliffe Camera houses approximately 600,000 volumes. I considered that figure two months later while sitting in the library’s upper gallery. It was a Sunday night and the word processor on my laptop showed only a blank page, the cursor blinking dully back at me. I was expected to turn in an essay the next day on neoliberalism and technological change, but felt incapable of writing even a simple sentence.
Looking up at the hundreds of books along the walls, I marveled at the sheer number of people who, for centuries, had apparently triumphed at this basic task which confounded me. Turning back to the laptop, my eyes fell on the application icon for ChatGPT. I knew that if I clicked on its circular design, I could shortcut my way to a final product with only a few prompts.
My mind began searching for an appropriate justification. Under the weight of so many great books, I thought, my undergraduate sentences would never measure up. Why bother? Or, I considered with equal bravado, what if the old ways of essay writing were dying out altogether? Maybe the Silicon Valley evangelists were right, and it would be foolish not to embrace this defining technology of the future. I wouldn’t have to copy full sentences, either. A simple outline would be enough to jump-start my work.
The moment passed. I shut the laptop, pulled on my coat, and hurried down the stairs to leave. Standing in the night air, likely overcaffeinated, I muttered a lecture to myself. Hadn’t I traveled from Stanford to Oxford expecting a challenge? The very point of enrolling in a history tutorial with weekly essays had been to discover my blind spots as a writer and overcome them. Using AI would simply mask those weaknesses, depriving me of the slow, deliberate thought I hoped to practice. What’s more, I had built real trust with my tutor. I couldn’t stomach deceiving him.
After that night, using AI to help with my essays crossed my mind only rarely. I had worked out my ethical reservations, and the technology’s attraction waned, even if it hadn’t completely dissipated. Still, I wondered how many of my fellow students felt differently.
The answer to that question is slowly becoming clear. Emerging reports suggest that in the two years since generative AI tools became publicly available, they have significantly altered students’ habits. In a February 2025 survey of 1,000 UK undergraduates, approximately 88% said they had used generative AI for their assessments, an ‘explosive increase’ from 53% in 2024. In the United States, that figure stood at around 85%. Reported uses ranged from brainstorming and outlining to drafting full sentences.
More anecdotally, I’ve noticed that references to academic assistance from ‘Chat,’ as the tool is affectionately known, are growing less taboo and more common. The most striking use I’ve seen firsthand came in a seminar last year. Sitting next to a classmate, I remember glancing at his laptop screen to see that he was typing the discussion question into ChatGPT, only to raise his hand and repeat its answer verbatim.
Taken at face value, these changes pose an obvious set of ethical and educational dilemmas. Critics have rightly sounded the alarm over students’ potential cognitive decline. Their arguments draw strength from a study at the MIT Media Lab, which found that when writing an essay, ChatGPT users ‘consistently underperformed at neural, linguistic, and behavioral levels,’ showing decreased cognitive engagement.
The consequences of such decline could extend well beyond students’ immediate learning. The traditional rationale for the academic essay was never only to teach content, but to instill a habit of formulating and articulating ideas through language. Those instincts seem to form the bedrock of reason and democratic participation. How will students acquire them if they never have to muscle through the age-old five-paragraph essay?
In this vein, philosopher Anastasia Berg recently wrote for The New York Times Opinion section that students left ‘to the devices of A.I. companies’ could enter the world lacking ‘the means to understand the world they live in or navigate it effectively.’
By now, these points have echoed widely enough to form a consensus regarding the problem. But in the daily fray of higher education, solutions remain elusive or inconsistently applied.
Syllabus statements provide the clearest guidance on AI use that most students directly receive. ‘The use of generative AI is prohibited in this course,’ states one of the more straightforward policies I’ve encountered at Stanford, for a class on the history of American slavery. ‘Its use constitutes academic dishonesty.’ Another history syllabus reasons that ‘The use of or consultation with generative AI will be treated analogously to assistance from another person.’
For many students, particularly in the humanities, these statements create strong ethical and psychological guardrails against plagiarism. They seem to outline fairly straightforward limits while clarifying instructors’ expectations.
Yet, these kinds of ethical barriers are only as strong as individuals’ acceptance of them. For every student who abides by syllabus policies on AI use, too many others will scroll past them. Awareness of that reality drives professors and teaching assistants into a distrustful stance when grading, eroding classroom relationships.
While software aimed at detecting AI plagiarism might seem an appropriate answer, it lacks the technological sophistication to yield fully accurate or unbiased results. Its adoption is also more likely to escalate distrust between students and teachers than defuse it. Professors are not police officers; asking them to run every paper submission they receive through an AI check would be demeaning to both them and their students.
Strengthening the integrity and educational focus of university assessments in the age of AI will instead require that universities look beyond punitive logic. How might students, instructors, and administrators reshape conditions to render plagiarism less attractive in the first place?
In my limited undergraduate experience, tech-free classrooms and readings assigned on paper are both effective steps. Not only do they make AI-based cheating harder, but they also seem to produce more engaged classrooms.
More professors are also re-emphasizing in-person handwritten or even oral exams. This is one instance where U.S. universities, including Stanford, could learn from Oxford. Possibly owing to its medieval inertia, Britain’s oldest university has maintained handwritten exams as the norm.
Classroom reforms would likely be more effective and widespread if universities put their full institutional weight behind them. Unlike faculty members, administrators are fairly insulated from the personal dimension of teaching. That distance could help them speak with greater frankness in conversations about plagiarism that might otherwise turn defensive, even if they enjoy less rapport with students. Clear endorsements of classroom practices limiting AI access would give faculty the security to move forward with them.
Through their links with the tech industry and the products they purchase for community members, university leaders also hold some sway over students’ AI use. Their alignment with or distance from certain technologies inevitably shapes the educational climate around them, however subtly.
How, then, have the world’s leading universities responded to generative AI?
Oxford’s official guidance on AI use for students points to a few contradictions. The document first notes that ‘AI tools cannot replace human critical thinking or the development of scholarly evidence-based arguments and subject knowledge that forms the basis of your university education.’ Echoing the punitive approach, the policy adds that ‘Unauthorised use of AI falls under the plagiarism regulations and would be subject to academic penalties in summative assessments.’
Toggle to a different tab of the same web page, and you’ll find instructions on how to access a free ChatGPT account ‘through the University’s collaboration with OpenAI.’ Stanford offers similar access to community members through its ‘AI Playground.’
It is understandable that university leaders would feel compelled to establish these partnerships. A growing discourse has framed AI tools as the future of the world economy, positioning them as essential for students to master early on. Many students would likely echo those arguments. Besides, AI could accelerate vital advances in scientific research and technology. Why deprive talented undergraduates of opportunities to learn or experiment?
These are fair considerations, and they reveal the limits of a perspective confined to the humanities. In cases where students meaningfully contribute to research, and in some STEM courses, AI tools have a role to play.
Still, broad access to ChatGPT, combined with an absence of clear bounds on its use, signals a lack of commitment to defending students against AI companies’ cognitive encroachment. In an era of pronounced democratic decline, the primary focus of major universities with an educational mission ought to be safeguarding the liberal arts and developing students’ capacity for thought, regardless of their discipline.
Undergraduate education can remain a haven apart from the breakneck technological changes roiling the wider world. To sit in a moonlit library laboring over sentences is not a burden but a privilege. We ought to keep reminding each other of that truth.
Art by Sisely DeLisi
