PROVIDENCE, R.I. [Brown University] — As chatbots and other artificial intelligence technologies promise to forever alter the way people write, learn and work, universities nationwide are working to keep pace.
This July, Michael Littman, a professor of computer science at Brown University, started a new role on campus as the University’s first associate provost for artificial intelligence. Littman’s charge includes supporting AI-related research, expanding opportunities for students to engage with AI across a diverse array of disciplines, advising operational units on AI use, and working with external entities to maximize the impact of Brown’s AI research.
“I think Brown did a remarkably interesting thing in having someone keeping track of all of these different areas,” Littman said. “Other institutions have hired people to look after one or two of these areas, but I don't know of any others that are unifying all of them in a single role. It’s daunting, but I think it’s a good approach to look at things across the board like this.”
Littman brings substantial experience to the job. In addition to studying machine learning and AI throughout his decorated research and teaching career, he recently completed a three-year rotation as division director for information and intelligent systems at the National Science Foundation, where he oversaw an annual budget of $200 million in research funding in AI-related areas. In 2021, he chaired the One Hundred Year Study on Artificial Intelligence, a multidisciplinary study conducted twice per decade on the state of AI development.
After starting in his new role on July 1, Littman discussed his work and vision in an interview.
Q: You have a lot on your plate as you begin. What stands out as a priority area early in your tenure?
The use of AI in the classroom is an area that stands out. So far, I’m seeing a range of reactions across the University, from people who are going all-in and being extremely creative with AI in their teaching to people who really hope this all just goes away — which I don’t think is an option for any of us. We want to get input from faculty, staff and students on this, so we’ve formed a committee, which started meeting before I even began officially in my role. We issued some preliminary guidance last week, but we’re targeting a more formal document by the end of the calendar year.
Q: What has the committee’s work looked like so far?
One of the things we’ve done is to assign everybody a book to read, because that’s what academics do when we need to get the lay of the land. That book is called “Teaching with AI,” and I really liked it. One of the big takeaways is that this technology is out there, and it’s really hard for students to not use it. It’s also really hard for us as educators to enforce not using it. So the question becomes: How do we incorporate it? The perspective the book takes is that if AI is going to produce C-plus or B-minus work on just about any assignment, then maybe we can’t give a B-minus for that level of work anymore. Maybe a B-minus is the new F. So now we have all this headroom above that where the student and the AI can work together to produce something better. I thought that was an interesting perspective, and it’s the kind of thing we’ll be working through with this committee.