PROVIDENCE, R.I. [Brown University] — Public discourse around artificial intelligence tends to focus on technical or economic concerns. How capable will AI become? And to what extent might it displace jobs?
While those questions are of obvious importance, other, perhaps more fundamental, questions are bubbling beneath the surface. What does it mean for a machine to be intelligent? What does generative AI mean for human creativity? Can AI-generated text be considered literature — or even language?
Those are among the questions that students and faculty explored this spring through two Brown University courses offered by the Cogut Institute for the Humanities. The courses, “The History of AI” and “Reading the Large Language Models,” aimed to give students a chance to understand and critique advances in AI, while challenging them to contemplate how these technologies relate to humanity writ large. Both were team-taught by a professor from a humanities discipline and one from computer science. The idea was to combine a deep technical understanding of modern AI systems with rich cultural context and criticism.
“AI is absolutely everywhere,” said Ellie Pavlick, an associate professor of computer science and linguistics who co-taught the large language models (LLMs) course. “And it’s this non-human thing trying to co-opt human bits of experience, like language. So we need people equipped to think about it and critique it, and they can’t only be computer scientists. We need a variety of perspectives.”

“Reading the Large Language Models” was a seminar-style course that explored the emergence of human-like text generated by systems like ChatGPT, and what that means for language, literature and culture. John Cayley, a professor of literary arts who co-taught the class with Pavlick, acknowledged an inherent tension between the humanist understanding of language and the emerging abilities of LLMs.
“Thinking and language are not generally understood by most of the people working in the artistic and humanist fields as being computable,” Cayley said. “If they’re right, that means what is computed as language or as text is not actually language. It’s something else that we have to incorporate into what we, as human beings, actually do consider to be language.”
Classwork mixed humanist and cultural critique with technical readings and some experimentation with different types of language models. The student composition of the class — eight students from humanities fields and eight with largely computational backgrounds — made for lively discussion and exchange of ideas, said Laura Romig, a Class of 2025 graduate who took the course during her final undergraduate semester. As a double concentrator in comparative literature and applied math, she had a foot in both camps. She said that while the perspectives of the two sides were different, there was broad agreement that both were important.
“Both sides came to more of an understanding of the other’s field and methods of practice,” Romig said. “There was definitely a sense that each perspective was important to the other’s field.”
Romig said she began the class deeply skeptical about the use of AI to either create or analyze literature, and that skepticism remained afterward. But she said she came away with a renewed belief in the power of (human-created) fiction to shed light on the world. As a final project for the class, she and a partner wrote a short story about a world in which people have ceded their decision-making to AI models.
“We wanted to write a work of fiction that showed several aspects of how AI and large language models are affecting the world,” she said. “We came to the conclusion — and this is something that I sort of already believed — that fiction is very powerful for representing the world, changing how people feel about it and showing problems that exist.”
Discourse across disciplines
“The History of AI,” a lecture-style course with more than 40 students, aimed to provide historical context for what seems like the sudden emergence of generative AI. Course readings ranged from Aristotelian philosophy through the early computing work of Ada Lovelace to modern ideas of augmented intelligence and the technological singularity. Historical works were combined with fiction by the likes of Karel Čapek, whose work introduced the term “robot” into the cultural discourse. Readings for each week were organized around themes such as language, intelligence, prediction and embodiment.
The course also included non-Western perspectives on AI development, featuring a lecture by Kate Creasey, a Ph.D. student in history and a teaching assistant for the course. Her presentation, titled “AI and Data Sovereignty in the Global South,” spawned an out-of-class reading group on the topic and a blog post co-authored by Creasey for the nonprofit news outlet Tech Policy Press.

“Knowing the history of AI invariably changes the way one sees and engages with it,” said Holly Case, a professor of history and humanities who co-taught the course. “In a world where technology often — intentionally or unintentionally — limits a person’s capacity to think rigorously about the ‘invisible’ artificial systems that saturate our lives, the course aims to develop and expand that capacity.”
Suresh Venkatasubramanian, a professor of computer science who co-taught the course with Case, agreed that it’s important to understand modern AI systems in a larger historical context.
“When we talk about whether LLMs can understand language, and what that means for their ‘humanness,’ we should also be understanding how language — the very idea of who is allowed to speak, and what speaking says about thinking — has evolved over the centuries,” Venkatasubramanian said.
While the courses did not aim to answer every question surrounding AI in a single semester, faculty and students agreed that starting a discourse across humanist and technical disciplines is critical to working toward those answers.
“I love seeing students engage with AI from a non-technical perspective, and seeing technical students confront arguments about AI that aren’t empirical ones,” Pavlick said. “It serves as a valuable reminder that complicated problems require complicated solutions.”