Interpreting Your Course Feedback

Course feedback is one important component of the evaluation of teaching, providing instructors with information about the student experience and offering chairs opportunities to recognize, reward, and support faculty colleagues. Please see this FAQ for common questions about using and interpreting course feedback surveys.

How do I navigate Brown's course feedback system?
A FAQ guide to Brown's course feedback system (EvaluationKit) can be found on the College website.

How should I interpret the quantitative questions? 
In EvaluationKit, the “Detailed Report + Comments” provides instructors with means (i.e., averages), medians (i.e., midpoints), and distributions (i.e., the percentage of students selecting each response choice). Most student ratings researchers suggest that distributions offer the most valuable lens on course feedback (e.g., Arreola, 2007; Linse, 2016; Ory, 2006). For example, the percentage of students who agree or strongly agree with a specific statement is more meaningful to interpret than a single numerical mean between 0 and 5.
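To see why researchers favor distributions, consider this small sketch with hypothetical ratings (not Brown data): two courses can share an identical mean while the underlying student experiences differ sharply.

```python
from collections import Counter

# Hypothetical 5-point ratings (5 = strongly agree) for two courses.
# Both have a mean of 4.0, but very different distributions.
course_a = [4, 4, 4, 4, 4, 4]  # uniform agreement
course_b = [5, 5, 5, 5, 2, 2]  # mostly enthusiastic, plus a dissatisfied group

for name, ratings in [("A", course_a), ("B", course_b)]:
    mean = sum(ratings) / len(ratings)
    dist = Counter(ratings)
    pct_agree = 100 * sum(1 for r in ratings if r >= 4) / len(ratings)
    print(name, round(mean, 2), dict(sorted(dist.items())), f"{pct_agree:.0f}% agree")
```

The mean alone would suggest the courses are equivalent; the distribution reveals that course B has a cluster of students reporting significant challenges.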

When accessing your own feedback, first, take a look at where the majority of your scores are located (Ory, 2006). For each item, if a large percentage of students are in agreement (i.e., percent strongly and somewhat agree), this is a signal that, in general, the student experience of that aspect of the course was positive. On the other hand, because most faculty nationwide receive ratings that lean toward the upper end of the scale, many scores of “strongly disagree,” “somewhat disagree” or “neutral” are a signal that students perceived significant challenges with the course or instructor (Arreola, 2007). In this scenario, see “How can I improve student feedback in future terms?” below.

Instructors may also wish to take note of contextual factors associated with the class, e.g., size, level, or student-reported reasons for taking the course. Instructors of elective courses, graduate courses, and small classes tend to receive higher ratings, although the impact tends to be small (Arreola, 2007; Benton & Cashin, 2012; Linse, 2016). Additionally, although the literature is quite complex and evolving, studies have examined how interactional factors involving instructor identity have some impact on ratings, including career stage, identity (or perceived identity) in relationship to course content, and alignment between a student’s identity and their perception of the instructor’s identity (e.g., Basow & Martin, 2012; Linse, 2016; Mengel, Sauermann, & Zölitz, 2017).

How should I interpret comments?
Many instructors find comments to be the most helpful component of student feedback because the narrative can offer specific actionable suggestions – or affirm what students report is working well for their learning. Accordingly, Brown's course feedback system offers options for students to comment on specific aspects of a course. Examples of these questions include, “What specific advice would you have for this instructor about changes that would enhance your learning?” and “What would you like to say about this course to a student who is considering taking it in the future?”

However, comments can also be frustrating, especially because contradictory comments are common (Linse, 2016). Additionally, compared with quantitative feedback, one drawback with comments is that it is easier to focus on negative issues or isolated student experiences. This is particularly important for comments given to female faculty and faculty of color because “we often see students with biased views represented in the tails of the distribution” (Linse, p. 9).

If there are few comments, it may be helpful to encourage future classes to offer narrative feedback, giving examples of comments that have been useful to your teaching in the past.

Brown alum Isabel Reyes (Cognitive Neuroscience, ‘21) suggests that instructors can give their students guidance about the usefulness of open-ended comments, such as, “When providing feedback, give concrete examples of things that could have been done better or offer your own strategies. Pretend that you are speaking directly to your teacher.”

Helpful strategies for interpreting comments include:

  • Ask someone else (e.g., a colleague, a Sheridan staff member) to read through your comments to identify patterns. Often, external readers can more easily see trends, rather than focus on negative comments or isolated feedback, and offer suggestions based on these patterns.
  • Group comments into themes and count how many comments are associated with each theme (Linse, 2016). Higher frequency themes should receive more attention.
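The theme-counting strategy above can be sketched briefly. The comments and theme tags below are hypothetical (in practice, you assign themes by reading each comment yourself), but the tallying step is mechanical:

```python
from collections import Counter

# Hypothetical comments, each hand-tagged with a theme after reading.
tagged_comments = [
    ("Loved the weekly problem sets", "assignments"),
    ("Lectures moved too fast", "pacing"),
    ("More worked examples, please", "examples"),
    ("Pacing was rushed before exams", "pacing"),
    ("Problem sets matched the exams well", "assignments"),
    ("Sometimes hard to follow the pace", "pacing"),
]

# Tally how many comments fall under each theme; higher-frequency
# themes merit more attention (Linse, 2016).
theme_counts = Counter(theme for _, theme in tagged_comments)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Sorting by frequency keeps attention on patterns rather than on a single memorable comment.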

How can I improve student feedback in future terms?
Global items (e.g., “Overall, I rate this course as effective” and “Overall, I rate this instructor as effective”) often draw the most attention but can be difficult to interpret. If these are low, check the questions that ask for feedback on clarity (“The instructor made course material clear…”) and organization (“The instructor was well prepared for each class…”). These two dimensions tend to explain the most variation in the global questions (Feldman, 2007).

Educational researcher Harry Murray (2007) studied observable teaching behaviors that are associated with these two dimensions of feedback. His research suggests that to improve students’ sense of clarity, small changes to consider include:

  • writing key terms on the board
  • giving multiple examples of a concept
  • pointing out practical applications of a concept

Teaching behaviors that are associated with student perceptions of organization include (Murray):

  • providing an outline or agenda for each class
  • noting when you are transitioning from one topic to another
  • noting sequence and connections, or highlighting how each course topic fits into the class as a whole
  • periodically stopping to review (or ask students to review) key course topics

Finally, a number of studies suggest that collecting early student feedback has a positive impact on end-of-term ratings, and meeting with a consultant to discuss the feedback has an even greater impact (Cohen, 1980; Finelli, et al., 2008; Marsh, 2007; Penny & Coe, 2004). The Sheridan Center offers an early feedback service that combines a discussion with the faculty member, an observation, and a brief mediated focus group discussion with students. For more information, please see the Sheridan website.

How should I understand my response rates?
Even a few returned course feedback forms may offer valuable insights into the student experience of a course, but generally, the higher the response rate, the more assurance you can have that the quantitative feedback is representative of the class as a whole. Recommended response rates vary by size of the course:

Class size          Recommended minimum response rate
5-20 students       80%
21-30 students      75%
31-50 students      66%
51-100 students     60%
101+ students       50%

(Franklin, 2001)
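Franklin's thresholds above can be expressed as a small lookup, which makes it easy to check whether a given term's return met the recommended minimum (the function names here are illustrative, not part of EvaluationKit):

```python
def recommended_min_response_rate(class_size: int) -> float:
    """Franklin's (2001) recommended minimum response rate by class size."""
    if class_size <= 20:
        return 0.80
    if class_size <= 30:
        return 0.75
    if class_size <= 50:
        return 0.66
    if class_size <= 100:
        return 0.60
    return 0.50

def rate_is_adequate(class_size: int, responses: int) -> bool:
    """True if the observed response rate meets the recommended minimum."""
    return responses / class_size >= recommended_min_response_rate(class_size)
```

For example, 18 responses from a 25-student class (72%) falls just short of the 75% recommended for that size, while 30 responses from a 40-student class (75%) clears the 66% threshold.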

EvaluationKit enables you to monitor response rates on an ongoing basis, so if your response rates seem particularly low, an email reminder can be helpful.

I do two things to encourage students to complete course feedback surveys:
1. I give them a series of short feedback surveys throughout the semester (they get points toward their final course grade for completing them), plus I set aside time during the final course meeting hour for them to do a special course-specific survey on the details of the course.  This gives them the impression that surveys matter.
2. At the final class meeting I ask them to do the general university course survey and remind them that they have benefited from the comments of earlier students so it's time for them to return the favor and help future students benefit from their own experience in the course.
-Professor Kathy Spoehr, Department of Cognitive, Linguistic, and Psychological Sciences

What can you do if your response rates are low this term but you seek to increase them in the future? Helpful evidence-based approaches for future terms include describing to students how you have changed the course based on prior student feedback (Ory, 2006). Additionally, response rates tend to improve if instructors ask students to bring a mobile device to class at the end of the term and provide time for them to complete the feedback at the beginning of class (Goos & Salomons, 2017). One Brown Chemistry senior adds, “An instructor can influence you to send feedback by simply giving time to fill it out. Most students do not believe there is an extra 15 minutes in a day (we know logically there is, but it’s hard to motivate) and so giving the time in class forces you to do it.”

In addition to telling students how important course evaluations are for the university, the department and myself as an instructor, I have found it effective to complete them both in class and at the beginning of a class period instead of at the end. On the day we do evaluations, students bring their computers/tablets/phones to class and navigate to the evaluation forms at the start of class. Then I leave the room for 15 minutes to ensure anonymity. Then I come back in and we have our regular class. I believe that in this way students also write more detailed responses.

-Professor Georgy Khabarovskiy, Department of French Studies

For a teaching portfolio or narrative, what additional information should I consider?
Teaching is multidimensional, meaning that a number of activities form the foundation of excellent instruction (Theall & Arreola, 2006). Although student course feedback offers valuable student-driven input on some of these, students are not the best source of information for other dimensions, such as content expertise. Therefore, instructors who are writing a teaching narrative or putting together a teaching portfolio will wish to complement the course feedback with other perspectives. Some possibilities are offered below. 

If you want to offer information about one of the dimensions below, consider the approach listed with it:

  • Content expertise or instructional delivery: a peer observation conducted by a colleague. (See guidelines from the Office of the Dean of the Faculty here.)
  • Student learning: analysis of student work
  • Instructional design: review of syllabi (rubrics are available on the Sheridan Center website)
  • Mentoring and advising: letters from former mentees and advisees
  • Instructional innovation: a teaching narrative

What are guidelines for chairs or directors to review course feedback?
Chairs and program directors receive reports for courses in their program or department three days after the feedback period closes. For help in accessing these, please see this FAQ or email course_feedback@brown. Chairs may also approve additional administrative designates, such as a DGS or DUS, to have access. A regular read of course feedback after each term can help chairs or their designees improve the curriculum, support teaching improvement, and nominate instructors for Brown teaching awards. Even for an instructor who teaches only once (e.g., a visiting faculty member teaching one term at Brown), a review of the course feedback can inform curricular decisions.

In reviewing numerical scores, the most important focus should be on the descriptor associated with the numerical score, rather than small differences in means. For example, although an average score of 4.5 is higher than 4.0, both scores correspond to an “agree” response – which is the descriptor students used when they filled out the survey (Dewar, 2017). Generally, comparisons of ratings – such as with a department average or another instructor – are to be avoided because of the possibility of drawing erroneous conclusions. However, if an instructor needs to present such a comparison, Franklin (2001) offers a step-by-step guide to statistical calculations that offer a more accurate comparative approach (e.g., confidence intervals). 
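Franklin (2001) gives the step-by-step procedure for such comparisons; as a rough sketch of the underlying idea only (hypothetical ratings, a simple normal approximation rather than Franklin's exact method), a confidence interval around each mean shows how much uncertainty a raw difference in averages conceals:

```python
import math

def mean_ci(ratings, z=1.96):
    """Approximate 95% confidence interval for a mean rating.

    Uses a normal approximation; see Franklin (2001) for the
    full recommended procedure.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean - half_width, mean + half_width

# Hypothetical ratings for an instructor and a comparison group.
instructor = [5, 4, 4, 5, 3, 4, 5, 4]
comparison = [4, 4, 3, 4, 5, 4, 3, 4]

lo1, hi1 = mean_ci(instructor)
lo2, hi2 = mean_ci(comparison)
# Overlapping intervals caution against reading much into a
# small difference between the two means.
overlap = lo1 <= hi2 and lo2 <= hi1
```

With the small samples typical of a single course section, the intervals are often wide, which is precisely why small differences in means should not drive comparative judgments.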

Narrative course feedback is also an excellent tool for chairs and directors to recognize, reward, and support their department colleagues, as well as to identify areas of improvement. A practice of reading student comments can help chairs or their designees contextualize feedback, recognize exemplary teaching approaches, and identify instructors in need of additional support. In general, for summative purposes, it is best practice to interpret narrative feedback in broad themes, rather than to focus on isolated comments (Linse, 2016). However, at times, a single comment may suggest additional steps, such as in the scenario described below.

What if I read a comment that suggests a potential incident of bias, harassment, or discrimination?
The instructions for Brown’s course feedback form (available on the College website) clearly indicate that it is not intended as a tool to report misconduct and direct students to use dedicated reporting tools. However, if a comment describes a potential incident of bias, harassment, or discrimination, university policy encourages reporting it through this portal to the Office of Equity Compliance and Reporting and the Office of Institutional Equity and Diversity.

Brown’s course feedback form also instructs students to be mindful of prejudgments and to compose their comments to the instructor carefully. However, if an instructor or other reader perceives a comment to be biased or discriminatory toward the instructor, the same portal may be used to raise the comment for review.

What are guidelines for faculty to review their TAs’ course feedback?
Faculty who teach with teaching assistants (both graduate and undergraduate TAs) receive reports for their TAs three days after the feedback period closes. For help in accessing these, please see this FAQ or email course_feedback@brown. Faculty are encouraged to read these reports and, where possible, discuss them with TAs to provide constructive feedback about strengths and areas for improvement. These reports also provide valuable material for the written feedback that is required for all graduate TAs. Please see the sections above for guidance on interpreting comments and numerical ratings.


Next Steps
For a confidential consultation on your course feedback, please contact: [email protected].

For technical support about course feedback-related questions, please email: [email protected]

For other FAQs, please see this page.



References
Arreola, R.A. (2007). Developing a comprehensive evaluation system: A guide to designing, building, and operating large-scale faculty evaluation systems, 3rd ed. San Francisco: Anker.

Basow, S.A., & Martin, J.L. (2012). Bias in student evaluations. In M.E. Kite (Ed.), Effective evaluation of teaching: A guide for faculty and administrators (pp. 40-49). Society for the Teaching of Psychology.

Bauer, C.C., & Baltes, B.B. (2002). Reducing the effects of gender stereotypes on performance evaluations. Sex Roles, 47(9/10): 465-476.

Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature (IDEA Paper no. 50). Manhattan, KS: The IDEA Center.

Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college teaching: A meta-analysis of findings. Research in Higher Education, 13: 321-341.

Dewar, J. (2017). Student evaluations of teaching: What every instructor should know. American Mathematical Society.

Feldman, K.A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 93-). Springer.

Finelli, C. J., Ott, M., Gottfried, A.C., Hershock, C., O’Neal, C., & Kaplan, M. (2008). Utilizing instructional consultations to enhance the teaching performance of engineering faculty. Journal of Engineering Education: 397-411. 

Franklin, J. (2001). Interpreting the numbers: Using a narrative to help others read student ratings of your teaching accurately. New Directions for Teaching and Learning, 87: 85-100.

Goos, M., & Salomons, A. (2017). Measuring teaching quality in higher education:  Assessing selection bias in higher education. Research in Higher Education, 58: 341-364.

Linse, A. (2016). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54: 94-106.

Marsh, H.W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-). Springer.

Mengel, F., Sauermann, J., & Zölitz, U. (2017, September). Gender bias in teaching evaluations. IZA Institute of Labor Economics Discussion Paper Series.

Murray, H. (2007). Low-inference teaching behaviors and college teaching effectiveness: Recent developments and controversies. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 145-183). Springer.

Ory, J.C. (2006, July). Getting the most out of your student ratings. Association for Psychological Science.

Penny, A. R., & Coe, R. (2004). Effectiveness of consultation on student ratings feedback: A meta-analysis. Review of Educational Research, 74(2): 215-253.

Theall, M., & Arreola, R.A. (2006). The meta-profession of teaching. Thriving in Academe, 22(5): 5-7. 

Yale College Teaching and Learning Committee. (2016). Report of the Teaching and Learning Committee, 2015-16: Recommendations to revise the Yale College online course evaluation survey. (unpublished report). New Haven, CT.