Graphic by Matt Hampton

The truth behind course evaluations

Professors grade us, but we grade them too. How does it work?

May 14, 2020

Rating things is a part of daily life now, and professors are no exception.

When people give feedback on everything – Uber drivers, restaurant experiences on Yelp, products bought on Amazon – it seems logical that colleges, too, ask students to evaluate professors. Students also use RateMyProfessors.com as a Yelp for educators.

But are these reviews a useful way to evaluate professors?  If not, what is the alternative?  And at Lindenwood, how do course evaluations really work?

Do good reviews mean good professors?

Several studies have indicated that student evaluations of professors are not an accurate measure of how well they teach, and often include biased or arbitrary reviews.

Research by economists Scott Carrell and James West uncovered an apparent paradox: students tend to give worse ratings to professors who benefit their learning in the long term.

They looked at the U.S. Air Force Academy, which provided useful conditions for their study because all students were required to take a set of math and engineering courses starting with Calculus I, and students were graded based on standardized tests. This let Carrell and West track how students performed in these classes across semesters.

Professors who got good reviews tended to have students who got good grades on their tests, but instructors whose students did well in the next courses in the subject tended to get negative reviews. Their paper suggested this may be because students preferred professors who “teach to the test” over those who push them toward deeper learning that helps in future courses.

“The bottom line with student evaluations is they’re not a good predictor of student learning,” said Carrell, a professor at the University of California-Davis. “They’re a good predictor of whether the students like the professor, but that can be for many reasons.”


West, who taught at USAFA but is now at Baylor University, said colleges rely heavily on student evaluations because they provide numerical data that can easily be used to compare professors, but that data is flawed. West also said he found a strong correlation between positive evaluations and the time of day a class took place.

Other research has also found that student reviews are unreliable measures of teacher quality. A French study from 2017 indicated that students perceived male professors to be better than female professors, even when objective measures showed them to be equally effective.

Lindenwood’s provost, Marilyn Abbott, said students can be biased based on a professor’s age or physical attractiveness.  

“That’s true in a lot of aspects of life,” she said. “So, a bad evaluation instrument focuses too many questions on things that can be affected by that kind of attitude.  So, […] we want a system that is focused on the course itself:  ‘What kind of activities went on in the class?’ ‘Which ones did you like, which ones did you not like?’”

Sometimes, students also leave irrelevant reviews, commenting on how an educator looks or complaining that they don’t like the subject.

Abbott said that in her experience, frivolous or malicious comments are usually one-offs, but Lindenwood would act if they became a pattern for a certain professor.

Still, West and Carrell said that despite their flaws, student evaluations are useful because written comments can provide constructive feedback. But they shouldn’t be used as the sole way to grade professors, because the ‘customer’ isn’t always right.

“From an economic perspective, the students are the customer, but often what they evaluate as good isn’t the entire picture,” Carrell said. “Learning is the ultimate objective, at least in my opinion.”

What are the solutions?

Research has pointed out problems with course evaluations, but there must be some way to measure teaching quality.

West and Carrell said student evaluations should be combined with faculty assessing each other’s performance.

Other ways to measure teaching include looking at students’ performance in follow-up courses, as Carrell and West did in the USAFA study.

“If a professor doesn’t have very stellar evaluations because they’re teaching chemistry, and chemistry is a class that generally has low evaluations, well, look to see how they do in follow-on chemistry classes,” Carrell said.  

However, that may not be practical in an environment where students may not take more than one course in a field, he said, and it is hard to make sure grades are objective.  

How does Lindenwood use course evaluations?

Abbott said students often think course evaluations don’t matter because “if they complain about somebody, they’re not instantly gone,” but she said Lindenwood pays close attention to professors’ comments and ratings.  

“If faculty members are consistently getting poor evaluations, it’s an issue, and we have, and do, let people go, eventually, if they just can’t turn it around,” she said. 

However, administrators keep the problems with evaluations in mind, and besides focusing questions on the course rather than the instructor, they consider many other factors, too.

“Course evaluations, although they contribute to a faculty evaluation score, they’re not a major contributor in and of themselves,” Abbott said. 

She said that in the past several years, Lindenwood has developed a tool for faculty evaluations that involves professors submitting evidence of their work in service, scholarship and teaching. Teaching is the most important factor, and evidence for it can include ways professors have changed their classes to make them more effective.

As for improving course evaluations, Peter Weitzel, director of Institutional Research, said a task force has suggested changes that will probably go into effect in Fall 2020.

Lindenwood created the surveys currently in use, and some aspects of them are “not ideal,” he said, so the university plans to switch to a research-backed survey that assesses aspects of courses ranging from group interaction to assignments and examinations.

They also are considering adding specific questions for non-traditional courses, such as art studios and lab classes, and creating guidelines for professors to use feedback to improve their classes.  

But does anyone fill out course evaluations?

West said that since schools moved student evaluations online, participation rates have gone down, making the evaluations even less reliable because of selection bias and small sample sizes.

“The people who choose to complete the form are either people who are really mad and want to get back at their professor, or they really love them, so you get the two extremes, and you’re not really getting the opinions of people in the middle,” he said. 

Abbott said that before Lindenwood used Canvas, they would encourage students to fill out evaluations by holding their final grades for up to five days.  Now that grades are available instantly on Canvas, the response rate has fallen from around 80% to around 55%, depending on the class. 

Weitzel said the response rate for graduate courses is higher, around 70%.

Are they anonymous?

Abbott said Lindenwood professor evaluations are anonymous.  Faculty and administrators cannot see students’ names on their evaluations, but they do receive their average ratings for each section and a list of comments, regardless of how small their class is.  

However, Weitzel said that because of concerns about anonymity in small classes, Lindenwood is likely to start excluding classes from evaluations if they have below a certain number of students.  

Does Lindenwood look at RateMyProfessors?

Though evaluations are kept private, students can post public reviews of their professors on RateMyProfessors.com, where other students can check before deciding whether to take a professor’s class.

Abbott said Lindenwood does not use reviews on RateMyProfessors to evaluate faculty because she does not consider them valuable.

“If students have an axe to grind, they’re usually pretty frank in the regular evaluations,” she said.

This article is part of the Spring 2020 Featured Stories, which were planned for Link magazine before campus closed.
