About Classroom Surveys:
Our Beautiful Dark Twisted Fantasy*

Select a survey from the lower-left menu bar to view it.


In 2008, the faculty at The University of Tampa had a decision to make: whether to support a proposal to post all faculty classroom surveys online for the public to see. Many institutions (including one of my alma maters, Brown University) had been posting surveys publicly, and a group of U.T. faculty believed that such publication would honor the student input we all find invaluable.

Like other passionate teachers, I had always cared deeply about my classroom survey results. Students inspired me to continually improve my courses and to develop new experiences that would take them places they had never been, in preparation for the future they were dreaming about. It was an honor to be part of their lives. I cared about their thoughts, feelings, and experiences.

At the same time, abundant research indicated the flaws and downsides of classroom surveys (see “The Naked Emperor”). Many faculty (I was one of them) were concerned that published surveys would fail to provide the “full story” of a class to the public (e.g., quality of course content, learning outcomes, grade distributions, and many other critical factors that classroom surveys don’t reveal). “It’s better than RateMyProfessor.com,” others countered.

Because I was serving as a faculty senator and a member of the university’s teaching effectiveness task force--a group of faculty appointed to research and make recommendations for a new classroom survey and data management system--I felt obligated to both students and faculty to explore the topic fully.


The Naked Emperor


Our research initially led us to some unexpected findings:

indications that classroom surveys "seem to be as much a measure of an instructor's leniency in grading as they are of teaching effectiveness" (http://home.sprynet.com/~owl1/sef.htm);

the famous “Dr. Fox” experiment, which showed that a completely incoherent lecture delivered by a charismatic actor could win positive reviews from students, while less charismatic teachers with strong content were rated lower (http://www.er.uqam.ca/nobel/r30034/PSY4180/Pages/Naftulin.html);

evidence that many professors "dummy down" courses in order to receive higher evaluations (http://home.sprynet.com/~owl1/sef.htm);

claims that student evaluations reflect bias--against women, minorities, non-native speakers, older professors, etc. Researchers say that some means of compensating or filtering for this bias are available (http://www.insidehighered.com/news/2007/01/29/evaluate);

research that indicates classroom surveys do not reliably predict learning outcomes (http://www.news.cornell.edu/releases/Sept97/student.eval.ssl.html);

AAUP caveats and recommendations: “Beyond the concerns about the interpretation of numerical data, a growing body of evidence suggests that student evaluations create pressures that work against educational rigor. Rather than exclusively measuring teaching effectiveness, evaluations tend also to measure the influence of personal style, gender, and other matters extraneous to the quality of teaching” (http://www.aaup.org/AAUP/comm/rep/teach-eval-obs.htm);

a Cornell model that predicts that "in the presence of grade information students will tend to enroll in leniently graded courses and that this compositional effect will contribute to grade inflation" (http://www.ilr.cornell.edu/cheri/wp/cheri_wp61.pdf);

research that consistently shows that the use of classroom surveys has substantially contributed to grade inflation in recent decades--most notably the comprehensive Duke University survey (http://www.springer.com/statistics/social/book/978-0-387-00125-8);

research showing that pandering and grade inflation do work well in raising course evaluation scores (e.g., Rice 1988; Wilson 1997; Huemer 2005--http://home.sprynet.com/~owl1/sef.htm).


The Ramifications


Classroom surveys are the primary tools that most colleges and universities use to determine teaching effectiveness, and they heavily impact decisions about faculty hiring, tenure, and promotion. They are often even referred to as "teaching evaluations" rather than "student surveys." Suddenly, we found ourselves asking whether classroom surveys should be used at all. At the same time, we knew how important student feedback was to the continual improvement of teaching practice.

Solution?

We set out on a daunting task: to create a classroom survey that would be “valid” and “reliable” by research standards and would be considered highly useful by faculty and students. A group of scientists, social scientists, artists, writers, and marketing specialists spent over a year completing this work alone. Then the survey was piloted among faculty who wanted to participate. The results of the initial pilot were presented at the 2007 American Association of Behavioral and Social Sciences conference (see conference paper), and the final survey was adopted by U.T. (see a sample of our current survey). In addition to creating the new classroom surveys, the task force recommended multiple tools for evaluating teaching effectiveness (e.g., student outcomes review, peer observations, course content reviews, grade distributions, and other data). By the time the task force dissolved, it had been active for seven years.

How The U.T. Survey Is Different From Most

Most classroom surveys ask for evaluative feedback (e.g., to rate the “pace of the class”), but this survey also asks students formative questions (e.g., how fast or slow they perceived the pace to be—and then, how beneficial that pace was to their learning). Otherwise, it would be impossible to know what students mean by a good or bad pace, or how to make improvements. Students are also asked to comment on individual questions, rather than saving their thoughts for the end. The result is specific feedback about select aspects of the course and professor, in relation to how helpful each was to the student's "learning."

Responses

Faculty response to the pilot of the new survey was nearly unanimously positive. “It’s like seeing in 3D,” one member commented. Students were equally enthusiastic about the quality of input they were invited to provide. With some caveats, we believed we had developed an instrument that would give students an avenue for more meaningful input.

To Post or Not to Post

Now that we have this survey tool (along with a comprehensive digital delivery system), the question remains: Should we post the surveys online for the public to see? The faculty senate felt the decision should be left up to individual faculty members.

So this is my experiment. With my students’ help, I am continuously changing my classes in response to their feedback and discovering new ways to create learning experiences that help the next group connect with them. Hopefully, these postings will honor that process and provide a bigger picture than RateMyProfessor.com.

At the same time, I worry that students might read these and react the way voters do when they decide not to turn out at the polls (“Why bother?” or “That’s already been said, so I don’t have to.”). At U.T., our surveys are completed voluntarily. I’m trusting students to participate just as fully now as they have in the past, and to know how much I value everything they have to say, in every class, every semester. Thank you for your input.

___________________________________________________________


* Reference to Kanye West’s “My Beautiful Dark Twisted Fantasy” CD (2010), in which the artist expresses a manic combination of pride, remorse, and cultural critique--released after the meltdowns that followed his public dissing of Taylor Swift.

Comments?