August 27, 2025

Q&A with Michael Littman: What’s on the mind of Brown’s first associate provost for AI

Littman is working with colleagues to develop guidance for using AI in the classroom, looking for new opportunities in AI-enabled research and identifying how AI might help the University run even more effectively.

PROVIDENCE, R.I. [Brown University] — As chatbots and other artificial intelligence technologies promise to forever alter the way people write, learn and work, universities nationwide are working to keep pace.

This July, Michael Littman, a professor of computer science at Brown University, started a new role on campus as the University’s first associate provost for artificial intelligence. Littman’s charge includes supporting AI-related research, expanding opportunities for students to engage with AI across a diverse array of disciplines, advising operational units on AI use, and working with external entities to maximize the impact of Brown’s AI research.

“I think Brown did a remarkably interesting thing in having someone keeping track of all of these different areas,” Littman said. “Other institutions have hired people to look after one or two of these areas, but I don't know of any others that are unifying all of them in a single role. It’s daunting, but I think it’s a good approach to look at things across the board like this.”

Littman brings substantial experience to the job. In addition to studying machine learning and AI throughout his decorated research and teaching career, he recently completed a three-year rotation as division director for information and intelligent systems at the National Science Foundation, where he oversaw an annual budget of $200 million in research funding in AI-related areas. In 2021, he chaired the One Hundred Year Study on Artificial Intelligence, a multidisciplinary, twice-per-decade study on the state of AI development.

After starting in his new role on July 1, Littman discussed his work and vision in an interview. 

Q: You have a lot on your plate as you begin. What stands out as a priority area early in your tenure?

The use of AI in the classroom is an area that stands out. So far, I’m seeing a range of reactions across the University, from people who are going all-in and being extremely creative with AI in their teaching to people who really hope this all just goes away — which I don’t think is an option for any of us. We want to get input from faculty, staff and students on this, so we’ve formed a committee, which started meeting before I even began officially in my role. We issued some preliminary guidance last week, but we’re targeting a more formal document by the end of the calendar year.

Q: What has the committee’s work looked like so far? 

One of the things we’ve done is to assign everybody a book to read, because that’s what academics do when we need to get the lay of the land. That book is called “Teaching with AI,” and I really liked it. One of the big takeaways is that this technology is out there, and it’s really hard for students to not use it. It’s also really hard for us as educators to enforce not using it. So the question becomes: How do we incorporate it? The perspective the book takes is that if AI is going to produce C-plus or B-minus work on just about any assignment, then maybe we can’t give a B-minus for that level of work anymore. Maybe a B-minus is the new F. So now we have all this headroom above that where the student and the AI can work together to produce something better. I thought that was an interesting perspective, and it’s the kind of thing we’ll be working through with this committee.

Q: That’s thinking from the standpoint of an educator, but what would you say to students — some of whom may be just starting their college careers — about how they should be thinking about AI?

I’d say there's a lot we don't know yet, but I think you can make the case that mindless use of these tools produces mindlessness. It can interfere with the learning process. If you are using these tools and not really reflecting on the subject matter, it’s going to block you from developing as a student and as a contributor to society. I think we have to establish norms and expectations around that. Students need to know that they're harming themselves by letting these tools do all the work. I think that has always been the case with cheating of any kind, and there’s a long tradition of professors explaining that cheating harms students’ development. 

Q: What about AI in research? What kinds of things are you thinking about there?

There are two broad categories here. There's research on AI specifically — how to make it work better, how to use it responsibly, etc. Then there's research that’s not on AI per se, but that AI can help to support. 

In the first category, Brown has had a strong AI research program for many years, not just in computer science, but also in engineering, applied math and neuroscience. Brown got in on the ground floor with some great AI work early on, and now we have things like the Center for Technological Responsibility, Reimagination and Redesign that’s doing great work, and we’ll continue to build there.

But what we’re really seeing now — and this is something I was exposed to quite a bit in my work at the National Science Foundation — is AI being used across different research areas. Take particle physics, for example. They smash particles together in an accelerator and get unbelievable amounts of data — more than we can store per unit time. So people are using AI to come up with ways of only keeping the stuff that’s scientifically interesting. That’s one example. Basically, AI enables people to do science more effectively. And I think that's happening across the sciences, and it’s a place where Brown has been active. So we need people to be educated about what the possibilities are in AI, how it can be used, what kind of questions it can answer and what it can't answer. 

“A lot of people ask me whether they should be concerned about the impact of AI on this or that. My standard answer is ‘it’s not as bad as you think.’ Which is to say, some of it is bad, but not in the extreme form you hear in the press from time to time.”

Michael Littman, Associate Provost for AI

Q: How about the operations side of the University?

People don’t often think about it this way, but Brown is basically a little city. We’ve got professors and students, but there are also chefs, police officers, accountants, booksellers and all sorts of other roles. We’re going to look at how AI might help us be more efficient in the things that we have to do. There’s a ton of paperwork, for example, that might be automated or made easier in some way. We’re going to work with the people who do these jobs and see what makes sense.

We’re also excited about the possibility of having AI make it easier for students and faculty to navigate the information technology systems we have on campus, things like Canvas, Ask or Banner. We want to integrate them better and make them less painful to use. That’s something I’m particularly excited about.

Q: Anything else you’d like to add? Maybe some thoughts for people who are nervous about how AI may change the way they do their jobs?

A lot of people ask me whether they should be concerned about the impact of AI on this or that. My standard answer is “it’s not as bad as you think.” Which is to say, some of it is bad, but not in the extreme form you hear in the press from time to time. In terms of impact on jobs, yeah, we’re already starting to see some changes, like slowdowns in hiring for entry-level jobs. It seems AI chatbots are particularly well-suited to do the kind of work companies expect from new hires. What that means is that institutions like Brown need to find a way to help their graduates leapfrog over these entry-level positions and start contributing at a higher level. I think we’re going to see more expected from us in our jobs, but also more support to automate some aspects of the work. The technology is still not showing deep insight, but its broad competence makes it a powerful tool for many practical problems.