OP-EDS

January 26, 2010

Student-friendly evaluations, please

Online evaluations need to be more useful in course selection

At the end of every quarter, when I sit down to fill out evaluations for my classes, I always pause and wonder for whom I’m writing. Am I writing an evaluation of the class, trying to communicate the weaknesses to a professor who might change the course for the next batch of students unlucky enough to have a 600-page, unnecessary, and irrelevant tome assigned over two days? Or am I warning future students of a class that I think would be better suited for someone who prefers lecture-style seminars and has an ability to read huge swathes of text on command? Right now, I think the evaluations attempt to cater to both audiences, which is admittedly difficult and consequently fails miserably. And how else, aside from these evaluations, can students decide whether a class will suit their personalities, when neither the time schedules nor the course catalog offers more than a quick paragraph detailing the course’s goals and possible reading topics? Is there a way to improve both systems to provide a more complete picture of a class and reduce the amount of guesswork involved in bidding for classes?

What professors want to hear and what students want to know are two very different things. I want to know what the reading schedule looks like, how heavily papers are weighted and how frequently they are assigned, and how helpful the instructor is in reviewing topics and drafts. I want to know if the instructor is articulate and can clearly explain his position, and I want to know whether the class is enjoyable. The driest topic in the world can become fascinating with the right professor, and even if you have the syllabus in hand before the course begins (which you often don’t, making bidding and shopping for classes especially tricky), you have no idea how the instructor will present the material. Is he passionate? Enthralling? Does he lecture from the top of a trashcan? While these questions are ostensibly addressed in the first question of the course evaluation survey (“What are the instructor’s strengths? Weaknesses?”), I am always seeking a more detailed picture of who the instructor is and how he teaches. Professors, on the other hand, are surely looking for an evaluation of their own performance: how well they moderated class discussions, whether they kept the class interested and alert (especially in those 9:30 AM Monday/Wednesday/Friday classes), and whether specific texts and units of study were worthwhile—information that is simply not useful to the average student looking for next quarter’s classes.

Most of the questions on the survey seem to be attempting to satisfy both students and teachers, which I think is a mistake. Professors who want student evaluations should feel free to leave copies of evaluation papers with their students; one individual could collect them and seal them in an envelope. These envelopes would be left with a department head, who would ensure that no one opens them until grades have been submitted. This way, professors would receive honest feedback and answers to the specific questions they have asked regarding their classes, and students would be able to focus on writing the online evaluations for future students interested in the course. There are, after all, instructors who seem never to look at the online reviews of their classes and never change anything about their courses at all, and these professors would not need to burden their students with any sort of “pretend evaluation” anyway. The online questions could then be modified to be more helpful to the student looking for courses during bidding week: questions regarding the helpfulness of teaching assistants could be removed, as could those about the usefulness of various texts and how the course has contributed to your education (a question that usually garners sarcastic answers ranging from “greatly” to “on some level, yes” or “gave me a new world view,” all of which are completely unhelpful). Furthermore, the section devoted to “rating” the class could be omitted entirely—those numbers are not useful to a student looking for a complete picture of what the class is like. If the evaluation form were trimmed down to just a few pointed paragraphs and summary sections, more students might be encouraged to spend the ten minutes necessary to describe the class for future bidders.

Of course, these evaluations alone can’t describe a class in its entirety—questions asking about the reading schedule, for example, might promote the copying and pasting of syllabi that may be updated every time the class is taught. The most effective way of communicating information such as the average amount of reading per week—given that how many hours a week the course demands depends on one’s reading pace—would be to imitate Princeton’s format for displaying classes. If you go to Princeton’s registrar’s website and click on Course Offerings, you can search the corpus of classes by meeting time, distribution area, subject, course level, catalog number, course title, or instructor—unlike our incredibly incomprehensible timeschedules website. Furthermore, upon selecting a class, you can read that “Literature of the Fin de Siècle,” taught by Meredith A. Martin, will have 100 to 250 pages of reading a week, two 5–6 page papers, one 8–10 page paper in lieu of a final, and one in-class presentation. If you like, you can even note the precise grade breakdown and a sample reading list. This would completely redefine the Sunday before bidding week, during which all of my friends post helpless questions about professors on their Facebook pages, hoping that someone will give them a coherent picture of what the class is actually like.

After all, we can’t all shop for every single class we want to take. There is only one 10:30-11:50 slot on Tuesdays and Thursdays.

— Eliana Pfeffer is a second-year in the College.