Tuesday, March 15, 2011

Professor Evaluations (by anon)

The main issue I want to raise here at UW-Whitewater is the quality of our
professors.  Whitewater has many well-respected professors, but there are also
some who are not.  What I specifically want to address are the professor
evaluations, which students are required to fill out at the end of
each semester.  I think these evaluations are biased because of the questions that are asked.  I was told that some professors can pick the 20 questions, from a set of 100, that will appear on their evaluation sheet at the end of the semester. If this is true, it goes a long way toward explaining why some of these professors are still employed here at Whitewater: they pick the questions that will produce a positive evaluation and so improve their credibility.

There are some professors I have had in the past whom I would not recommend to another student, but when it was time for evaluations, all of the questions were geared toward positive feedback.  For instance, every evaluation includes questions such as “Was your professor well prepared?” or “Did your professor have a good understanding of the topic?”  The answer to both of these questions might very well be yes, but that does not mean the professor was effective in teaching the subject.


  1. The assumptions in your piece are incorrect. First, evaluations are not used to fire profs but to improve teaching. After the semester ends, evaluations are returned to profs, and many profs revise their courses based on them.

    What about student evaluations? Some of them are so off the wall. Don't take my word for it: go to the student evaluation site, my prof sucks, and read some of the things students say about profs. They say some really mean things. But you don't have to go online; just read some of the mean comments they make about profs here.

    A student who is heading toward an F in a class will rarely give a prof a good evaluation. And what about the rather large number of students at Whitewater who never attend class? Is it fair that they get the chance to evaluate their prof? How would they know?

    It is not fair that students who only happen to attend class on the day evaluations are handed out get to complete an evaluation of their professors. They were never in class! In short, the door swings both ways. There are also questions students are simply not competent to give an opinion on.

  2. I've never heard of individual professors choosing what questions will appear on their evals. Usually departments do that.

    The most important question on those evaluations tends to be something like this: How would you rate the overall performance of your instructor? The scores for that question, on whatever rating scale is used, carry a lot of weight in judging how good an instructor is. But of course one cannot rely on that alone, since student perceptions may be wrong. There is also peer review, as well as syllabus review.

    What I find interesting, however, is that ratemyprofessor.com is usually spot on, or at least it is a good indicator of what kind of instructor a person is.

    Maybe there should be a site called ratemystudent.com, where instructors could report on their students, so that if problems come up with a current student, I could look the person up on the site and maybe get some understanding or see if there is a pattern.

  3. Good point, but evaluations are subjective and inaccurate at best. I have always been skeptical of any kind of anonymous evaluation. At the very least, students' grades should be linked to their evaluations of profs. Math profs usually have the lowest evaluations because the failure rate in those courses is extremely high. Difficult subjects usually mean lower evaluations. I also suspect that good-looking profs (not many) get higher evaluations; on the internet sites they get some kind of symbol indicating that they are hot.

  4. One type of evaluation you might be unaware of is grade comparison, which is done behind the scenes in some departments. Let's say Prof. Jones has 20 students, and those 20 students had a cumulative GPA of 2.74 coming in. Did the group score higher, lower, or about average in Jones's class alone, compared to their academic history? If the group averages a 2.74 cumulative GPA but earned, say, a 1.4, 2.8, or 3.9 average in Jones's class, we can draw some inferences. Data is a good thing.
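
    The grade-comparison idea above can be sketched in a few lines of code. This is only a hypothetical illustration: the function name, the 0.25 tolerance, and the sample numbers are my own assumptions, not any department's actual procedure.

    ```python
    def grade_comparison(incoming_gpas, class_grades, tolerance=0.25):
        """Compare a class's average grade to the students' cumulative
        GPA coming into the course (the comparison described above)."""
        incoming_avg = sum(incoming_gpas) / len(incoming_gpas)
        class_avg = sum(class_grades) / len(class_grades)
        diff = class_avg - incoming_avg
        # Classify the difference as higher, lower, or about average.
        if diff > tolerance:
            verdict = "higher"
        elif diff < -tolerance:
            verdict = "lower"
        else:
            verdict = "about average"
        return incoming_avg, class_avg, verdict

    # Example with the numbers from the comment: a group averaging a
    # 2.74 cumulative GPA that earns a 1.4 in the class rates "lower".
    print(grade_comparison([2.74] * 20, [1.4] * 20))
    ```

    The tolerance keeps small, probably meaningless differences (2.74 versus 2.8, say) from being labeled higher or lower.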