Interpreting Your Course Feedback

Based on the recommendations of the 2017-18 Committee on the Student Course Review Instrument, Brown has adopted a new course feedback system this fall. While the system is administered by the College, with assistance from OIT, this Sheridan Center newsletter offers guidance to help instructors interpret the feedback generated by the new form’s questions.

Key changes in the new course feedback system include:

  • The course feedback questions were revised to focus on specific instructional behaviors (rather than relying solely on a student’s holistic rating of the instructor) and on students’ reflection on their own learning. Both approaches have been cited in personnel studies and ratings reports as mechanisms that mitigate bias toward the instructor (Bauer & Baltes, 2002; Yale College Teaching and Learning Committee, 2016).
  • For the first time, graduate teaching assistants will also have access to more robust feedback for use in teaching portfolios and the academic job market. Undergraduate teaching assistants also will receive feedback at the discretion of the course instructors.
  • The feedback system will open before the end of the term, allowing instructors to set aside class time for students to complete the form, which encourages higher response rates.
  • Faculty can now create custom questions more easily.
  • The scale is now in ascending order (e.g., 1=Strongly Disagree to 5=Strongly Agree).

An FAQ guide to the revised form and system (EvaluationKit) can be found on the College website.

How should I interpret the quantitative questions? 
In EvaluationKit, the “Detailed Report + Comments” provides instructors with means (i.e., averages), medians (i.e., midpoints), and distributions (i.e., the percentage of students selecting each response choice). Most student ratings researchers suggest that distributions offer the most valuable lens on course feedback (e.g., Arreola, 2007; Linse, 2016; Ory, 2006). For example, the percentage of students who agree or strongly agree with a specific statement is more meaningful to interpret than a single numerical mean between 1 and 5.
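
To make the contrast concrete, the short Python sketch below shows how a middling mean can sit on top of a mostly positive distribution. The ratings list is invented for illustration, not drawn from any actual course.

```python
from collections import Counter
from statistics import mean, median

# Invented responses to one item on the 1-5 scale
# (1 = Strongly Disagree ... 5 = Strongly Agree)
ratings = [5, 5, 5, 4, 4, 4, 4, 3, 1, 1]

print(f"Mean:   {mean(ratings):.2f}")    # 3.60
print(f"Median: {median(ratings):.1f}")  # 4.0

# Distribution: percentage of students selecting each response choice
counts = Counter(ratings)
for choice in range(5, 0, -1):
    print(f"{choice}: {100 * counts[choice] / len(ratings):.0f}%")

# The combined agreement figure is often the most interpretable number
agree = sum(1 for r in ratings if r >= 4)
print(f"Agree or strongly agree: {100 * agree / len(ratings):.0f}%")
```

Here a mean of 3.60 might look mediocre, but the distribution shows that 70% of students agreed or strongly agreed, with two outliers pulling the average down.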

When reviewing your own feedback, first look at where the majority of your scores fall (Ory, 2006). For each item, if a large percentage of students are in agreement (i.e., the combined percentage who strongly and somewhat agree), this is a signal that, in general, the student experience of that aspect of the course was positive. On the other hand, because most faculty nationwide receive ratings that lean toward the upper end of the scale, many scores of “strongly disagree,” “somewhat disagree,” or “neutral” are a signal that students perceived significant challenges with the course or instructor (Arreola, 2007). In this scenario, see “How can I improve student feedback in future terms?” below.

Instructors may also wish to take note of contextual factors associated with the class, e.g., size, level, or student-reported reasons for taking the course. Instructors of elective courses, graduate courses, and small classes tend to receive higher ratings, although the effect tends to be small (Arreola, 2007; Benton & Cashin, 2012; Linse, 2016). Additionally, although the literature is complex and evolving, studies have examined how interactional factors involving instructor identity can affect ratings, including career stage, identity (or perceived identity) in relationship to course content, and alignment between a student’s identity and their perception of the instructor’s identity (e.g., Basow & Martin, 2012; Linse, 2016; Mengel, Sauermann, & Zölitz, 2017).

How should I interpret comments?
Many instructors find comments to be the most helpful component of student feedback because the narrative can offer specific actionable suggestions – or affirm what students report is working well for their learning. Accordingly, the new Brown course feedback system offers more options for students to comment on specific aspects of a course. Examples of these questions include, “What specific advice would you have for this instructor about changes that would enhance your learning?” and “What would you like to say about this course to a student who is considering taking it in the future?”

However, comments can also be frustrating, especially because contradictory comments are common (Linse, 2016). Additionally, compared with quantitative feedback, one drawback of comments is that they make it easier to fixate on negative issues or isolated student experiences. This is particularly important for comments given to female faculty and faculty of color because “we often see students with biased views represented in the tails of the distribution” (Linse, 2016, p. 9).

If there are few comments, it may be helpful to encourage future classes to offer narrative feedback, giving examples of comments that have been useful to your teaching in the past.

Brown undergraduate Isabel Reyes (Cognitive Neuroscience, ‘21) suggests that instructors can give their students guidance about the usefulness of open-ended comments, such as, “When providing feedback, give concrete examples of things that could have been done better or offer your own strategies. Pretend that you are speaking directly to your teacher.”

Helpful strategies for interpreting comments include:

  • Ask someone else (e.g., a colleague, a Sheridan staff member) to read through your comments to identify patterns. Often, external readers can more easily see trends, rather than focusing on negative comments or isolated feedback, and can offer suggestions based on these patterns.
  • Group comments into themes and count how many comments are associated with each theme, as in the sketch below (Linse, 2016). Higher-frequency themes should receive more attention.
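
For instructors comfortable with a short script, here is a minimal Python sketch of that counting step. It assumes you have already hand-tagged each comment with one or more themes; the tags below are invented.

```python
from collections import Counter

# Invented example: each comment has been hand-tagged with themes
tagged_comments = [
    ["pacing"],
    ["pacing", "workload"],
    ["examples"],
    ["pacing"],
    ["workload"],
    ["examples", "pacing"],
]

# Tally how many comments mention each theme
theme_counts = Counter(theme for tags in tagged_comments for theme in tags)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
# pacing: 4, workload: 2, examples: 2 -> pacing merits the most attention
```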

How can I improve student feedback in future terms?
Global items (e.g., “Overall, this is an effective course” and “Overall, this is an effective instructor”) often draw the most attention but can be difficult to interpret. If these are low, check the questions that ask for feedback on clarity (“The instructor made course material clear…”) and organization (“The instructor was well prepared for each class…”). These two dimensions tend to explain the most variation in the global items (Feldman, 2007).

Educational researcher Harry Murray (2007) studied observable teaching behaviors that are associated with these two dimensions of feedback. His research suggests that to improve students’ sense of clarity, small changes to consider include:

  • writing key terms on the board
  • giving multiple examples of a concept
  • pointing out practical applications of a concept

Teaching behaviors that are associated with student perceptions of organization include (Murray, 2007):

  • providing an outline or agenda for each class
  • noting when you are transitioning from one topic to another
  • noting sequence and connections, or highlighting how each course topic fits into the class as a whole
  • periodically stopping to review (or ask students to review) key course topics

Finally, a number of studies suggest that collecting early student feedback has a positive impact on end-of-term ratings, and that meeting with a consultant to discuss the feedback has an even greater impact (Cohen, 1980; Finelli et al., 2008; Marsh, 2007; Penny & Coe, 2004). The Sheridan Center offers an early feedback service that combines a discussion with the faculty member, an observation, and a brief mediated focus group discussion with students. For more information, please see the Sheridan website.

How should I understand my response rates?
Even a few returned course feedback forms may offer valuable insights into students’ experience of a course, but generally, the higher the response rate, the more assurance you can have that the quantitative feedback is representative of the class as a whole. Recommended response rates vary by the size of the course:

Class size          Recommended minimum response rate
5-20 students       80%
21-30 students      75%
31-50 students      66%
51-100 students     60%
101+ students       50%

(Franklin, 2001)
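
As a convenience, the small Python sketch below encodes Franklin’s thresholds so you can check a course against them; the function name and structure are illustrative, not part of any Brown system.

```python
# Franklin's (2001) recommended minimum response rates, encoded as
# (maximum class size, minimum rate) pairs; 101+ students -> 50%.
THRESHOLDS = [(20, 0.80), (30, 0.75), (50, 0.66), (100, 0.60)]

def meets_recommended_rate(class_size: int, responses: int) -> bool:
    """Check a course's response rate against the table above."""
    rate = responses / class_size
    for max_size, minimum in THRESHOLDS:
        if class_size <= max_size:
            return rate >= minimum
    return rate >= 0.50  # classes of 101 or more

print(meets_recommended_rate(25, 18))   # 72% vs. 75% minimum -> False
print(meets_recommended_rate(120, 65))  # 54% vs. 50% minimum -> True
```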

EvaluationKit enables you to monitor response rates on an ongoing basis; if they seem particularly low, an email reminder to students can be helpful.

I do two things to encourage students to complete course feedback surveys:
1. I give them a series of short feedback surveys throughout the semester (they get points toward their final course grade for completing them), plus I set aside time during the final course meeting hour for them to do a special course-specific survey on the details of the course. This gives them the impression that surveys matter.
2. At the final class meeting I ask them to do the general university course survey and remind them that they have benefited from the comments of earlier students so it's time for them to return the favor and help future students benefit from their own experience in the course.
-Professor Kathy Spoehr, Department of Cognitive, Linguistic, and Psychological Sciences
 

What can you do if your response rates are low this term and you want to increase them in the future? One evidence-based approach is to describe to students how you have changed the course based on prior student feedback (Ory, 2006). Additionally, response rates tend to improve if instructors ask students to bring a mobile device near the end of the term and provide time at the beginning of class to complete the feedback (Goos & Salomons, 2017). One Brown Chemistry senior adds, “An instructor can influence you to send feedback by simply giving time to fill it out. Most students do not believe there is an extra 15 minutes in a day (we know logically there is, but it’s hard to motivate) and so giving the time in class forces you to do it.”

In addition to telling students how important course evaluations are for the university, the department and myself as an instructor, I have found it effective to complete them both in class and at the beginning of a class period instead of at the end. On the day we do evaluations, students bring their computers/tablets/phones to class and navigate to the evaluation forms at the start of class. Then I leave the room for 15 minutes to ensure anonymity. Then I come back in and we have our regular class. I believe that in this way students also write more detailed responses.

-Professor Georgy Khabarovskiy, Department of French Studies

For a teaching portfolio or narrative, what additional information should I consider?
Teaching is multidimensional, meaning that a number of activities form the foundation of excellent instruction (Theall & Arreola, 2006). Although student course feedback offers valuable student-driven input on some of these, students are not the best source of information for other dimensions, such as content expertise. Therefore, instructors who are writing a teaching narrative or putting together a teaching portfolio will wish to complement the course feedback with other perspectives. Some possibilities are offered below. 

If you want to offer information about a particular dimension of teaching, consider the corresponding approach:

  • Content expertise or instructional delivery: a peer observation conducted by a colleague (see guidelines from the Office of the Dean of the Faculty here)
  • Student learning: analysis of student work
  • Instructional design: review of syllabi (rubrics are available on the Sheridan Center website)
  • Mentoring and advising: letters from former mentees and advisees
  • Instructional innovation: a teaching narrative

While these sources do not have as much research behind them on questions of reliability, validity, or bias, a multimodal approach offers important information about other components of effective teaching (Linse, 2016).

Additionally, for any personnel-related process like a job search or review, it is important to provide (when possible) the history of scores over time, in order for a reviewer to identify patterns (Franklin, 2001). To allow for this historical presentation, the new feedback form retains identical wording of two key global items (“Overall, this is an effective course” and “Overall, this is an effective instructor”).

Generally, comparisons of ratings, such as with a department average or another instructor, should be avoided because of the risk of drawing erroneous conclusions. However, if an instructor needs to present such a comparison, Franklin (2001) offers a step-by-step guide to statistical calculations (e.g., confidence intervals) that support a more accurate comparative approach.
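
Franklin’s article walks through the specific procedure; as a rough illustration of the general idea (not Franklin’s exact steps), the Python sketch below computes an approximate 95% confidence interval around a mean rating, using invented scores and a normal approximation.

```python
from math import sqrt
from statistics import mean, stdev

def rating_confidence_interval(ratings, z=1.96):
    """Approximate 95% confidence interval around a mean rating
    (normal approximation; small classes would call for a t value)."""
    se = stdev(ratings) / sqrt(len(ratings))
    m = mean(ratings)
    return (m - z * se, m + z * se)

# Invented scores: a mean of 4.10 with a CI of roughly (3.64, 4.56),
# so an apparent gap from, say, a 4.3 department average may not be
# meaningful once the intervals overlap.
scores = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
low, high = rating_confidence_interval(scores)
print(f"Mean {mean(scores):.2f}, 95% CI ({low:.2f}, {high:.2f})")
```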

Even with this comparison, the most important focus should be on the descriptor associated with the numerical score, rather than small differences in means. For example, although an average score of 4.5 is higher than 4.0, both scores correspond to an “agree” response – which is the descriptor students used when they filled out the survey (Dewar, 2017).

Next Steps

For a confidential consultation on your course feedback, please contact: [email protected].

For technical support about course feedback-related questions, please email: [email protected]

For other FAQs, please see this page.

To subscribe to the monthly Sheridan newsletter, please see this page.

References

Arreola, R.A. (2007). Developing a comprehensive evaluation system: A guide to designing, building, and operating large-scale faculty evaluation systems, 3rd ed. San Francisco: Anker.

Basow, S.A., & Martin, J.L. (2012). Bias in student evaluations. In M.E. Kite, Ed. Effective evaluation of teaching: A guide for faculty and administrators (pp. 40-49). Society for the Teaching of Psychology. Available: http://teachpsych.org/ebooks/evals2012/index.php

Bauer, C.C., & Baltes, B.B. (2002). Reducing the effects of gender stereotypes on performance evaluations. Sex Roles, 47(9/10): 465-476.

Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature (IDEA Paper no. 50). Manhattan, KS: The IDEA Center. Retrieved from https://ideaedu.org/wp-content/uploads/2014/11/idea-paper_50.pdf

Cohen, P. A. (1980). Effectiveness of student-rating feedback for improving college teaching: A meta-analysis of findings. Research in Higher Education, 13: 321-341.

Dewar, J. (2017). Student evaluations ratings of teaching: What every instructor should know. American Mathematical Society. Available: https://blogs.ams.org/matheducation/2017/04/17/student-evaluations-ratin...

Feldman, K.A. (2007). Identifying exemplary teachers and teaching: Evidence from student ratings. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 93-). Springer.

Finelli, C. J., Ott, M., Gottfried, A.C., Hershock, C., O’Neal, C., & Kaplan, M. (2008). Utilizing instructional consultations to enhance the teaching performance of engineering faculty. Journal of Engineering Education: 397-411. 

Franklin, J. (2001). Interpreting the numbers: Using a narrative to help others read student ratings of your teaching accurately. New Directions for Teaching and Learning, 87: 85-100.

Goos, M., & Salomons, A. (2017). Measuring teaching quality in higher education: Assessing selection bias in course evaluations. Research in Higher Education, 58: 341-364.

Linse, A. (2016). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54: 94-106.

Marsh, H.W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-). Springer.

Mengel, F., Sauermann, J., & Zölitz, U. (2017, September). Gender bias in teaching evaluations. IZA Institute of Labor Economics Discussion Paper Series.

Murray, H. (2007). Low-inference teaching behaviors and college teaching effectiveness: Recent developments and controversies. In R.P. Perry and J.C. Smart, Eds. The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 145-183). Springer.

Ory, J.C. (2006, July). Getting the most out of your student ratings. Association for Psychological Science.

Penny, A. R., & Coe, R. (2004). Effectiveness of consultation on student ratings feedback: A meta-analysis. Review of Educational Research, 74(2): 215-253.

Theall, M., & Arreola, R.A. (2006). The meta-profession of teaching. Thriving in Academe, 22(5): 5-7. 

Yale College Teaching and Learning Committee. (2016). Report of the Teaching and Learning Committee, 2015-16: Recommendations to revise the Yale College online course evaluation survey. (unpublished report). New Haven, CT.