
Student Assessment


Evaluating Case Discussion

Business school case teachers do it all the time. It’s not uncommon for them to base 50% of the final course grade on class participation. And this with 50-70 students in a class! This sends shudders up the spines of most science teachers. Yet, what's so tough about the concept? We are constantly making judgments about the verbal statements of our colleagues, politicians, and even administrators. Why can't we do it for classroom contributions?

Most of our discomfort comes from the subjective nature of the act, something that we scientists work hard to avoid in our work-a-day world. It may be that we are even predisposed to become scientists because we are looking for a structured and quantifiable world. Flowing from this subjective quandary is the fact that we feel we must be able to justify our grades to the students. We are decidedly uncomfortable if we can't show them the numbers. This is one of the reasons that multiple-choice questions have such appeal for some faculty.

But let’s take a look at how the business school people evaluate case discussion. Some of them try to do it in the classroom, making written notes even as the discussion unfolds, using a seating chart, and calling on perhaps 25 students in a period. As you might expect, this usually interferes with running an effective discussion. Other instructors tape-record the discussion and listen to it later in thoughtful contemplation. Most folks, however, sit down shortly after their classes with seating chart in hand and reflect on the discussion. They rank student contributions as excellent, good, or bad, or they may score the students from 1 to 4, with 4 being excellent. They may give negative evaluations to people who weren’t prepared or were absent. These numbers are tallied at the end of the semester to calculate the grade. And that’s as quantified as it gets.

I especially like mathematician/philosopher Blaise Pascal's view of evaluation: “We first distinguish grapes from among fruits, then Muscat grapes, then those from Condrieu, then from Desargues, then the particular graft. Is that all? Has a vine ever produced two bunches alike, and has any bunch produced two grapes alike?” “I have never judged anything in exactly the same way,” Pascal continues. “I cannot judge a work while doing it. I must do as painters do and stand back, but not too far. How far then? Guess ....”


Assignments

The simplest solution to case work evaluation is to forget classroom participation and grade everything on the basis of conventional criteria, say papers or presentations. This puts professors back in familiar territory. Even business and law school professors use this strategy for part of their grades. I’m all for this. In fact, I always ask for some written analysis in the form of journals, papers, and reports. Along with an exam, these are my sole bases for grades. I don’t lose sleep over evaluating class participation.


Exams

You can give any sort of exam in a case-based course, including multiple-choice, but doesn’t it make more sense to have at least part of the exam a case? If you have used cases all semester and trained students in case analysis, surely you should consider a case-based test. Too often we test on different things than we have taught.


Peer Evaluation

Some of the best case studies involve small group work and group projects. In fact, I strongly believe teaching cases this way is the most user-friendly approach for science faculty and the most rewarding for students. Nonetheless, even some aficionados of group work don’t like group projects. They ask, how do you know who’s doing the work? Even if they assign a group project, they argue against grading it, relying strictly on individual marks to determine the final grade. I’m on the other side of the fence. I believe that great projects can come from teams, and if you don't grade the work, what is the incentive for participating? Moreover, employers report that most people are fired because they can’t get along with other people. Not all of us are naturally team players. Practice helps. So, I’m all for group work, including teamwork during quizzes, where groups almost invariably perform better than the best individuals. But we have to build in safeguards like peer evaluation.

“Social loafers” and “compulsive workhorses” exist in every class. When you form groups such as those in Problem-Based Learning (PBL) and Team Learning (the best ways to teach cases, in my judgment), you must set up a system to monitor the situation. In PBL it is common to have tutors who can make evaluations. Still, I believe it is essential to use peer evaluations. I use a method that I picked up from Larry Michaelsen in the School of Management at the University of Oklahoma.

At the beginning of every course I explain the use of these anonymous peer evaluations. I show students the form that they will fill out at the end of the semester (Table 1). They will be asked to name their teammates and give each one the number of points that reflects that person's contribution to group projects throughout the course. Say the group has five members; each person would then have 40 points to give to the other four members of his team. If a student feels that everyone has contributed equally to the group projects, then he should give each teammate 10 points. Obviously, if everyone in the team feels the same way about everyone else, they all will get an average score of 10 points. Persons with an average of 10 points will receive 100% of the group score for any group project.

But suppose that things aren’t going well. Maybe John has not pulled his weight in the group projects and ends up with an average score of 8, and Sarah has done more than her share and receives a 12. What then? Well, John gets only 80% of any group grade and Sarah receives 120%.

There are some additional rules that I use. One is that a student cannot give anyone more than 15 points. This is to stop a student from saving his friend John by giving him 40 points. Another is that any student receiving an average of seven or less will fail my course. This is designed to stop a student from doing nothing in the group because he is simply trying to slip by with a barely passing grade and is willing to undermine the group effort.
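For readers who want the bookkeeping spelled out, here is a minimal sketch, in Python, of the arithmetic described above. The data layout, function names, and the example scores are my own illustration, not part of Michaelsen's form; only the rules come from the text: 10 points per teammate on average, no single rating above 15, and an average of seven or less fails the course.

    # Minimal sketch of the peer-evaluation arithmetic described above.
    # Assumes every student both rates and is rated by every teammate.
    FAIL_THRESHOLD = 7        # an average of 7 or less fails the course
    MAX_POINTS_PER_PEER = 15  # no single rating may exceed 15 points

    def validate_ratings(ratings, team_size):
        """Check one rater's sheet: the points must total 10 per teammate
        (40 points in a five-person group), and no teammate may receive
        more than 15 points."""
        expected_total = 10 * (team_size - 1)
        if sum(ratings.values()) != expected_total:
            raise ValueError(f"Ratings must total {expected_total} points")
        if any(points > MAX_POINTS_PER_PEER for points in ratings.values()):
            raise ValueError(f"No single rating may exceed {MAX_POINTS_PER_PEER} points")

    def peer_results(all_ratings):
        """all_ratings maps each rater to a dict of {teammate: points}.
        Returns each student's average score, the share of any group grade
        he or she receives (an average of 10 -> 100%), and whether the
        average falls at or below the failing threshold."""
        team_size = len(all_ratings)
        received = {name: [] for name in all_ratings}
        for rater, ratings in all_ratings.items():
            validate_ratings(ratings, team_size)
            for teammate, points in ratings.items():
                received[teammate].append(points)
        results = {}
        for name, scores in received.items():
            average = sum(scores) / len(scores)
            results[name] = {
                "average": average,
                "share_of_group_grade": average / 10,  # 8 -> 0.80, 12 -> 1.20
                "fails_course": average <= FAIL_THRESHOLD,
            }
        return results

    # Example scores (invented): John has not pulled his weight,
    # Sarah has done more than her share.
    ratings = {
        "John":  {"Sarah": 11, "Ana": 10, "Ben": 10, "Kim": 9},
        "Sarah": {"John": 8,  "Ana": 11, "Ben": 11, "Kim": 10},
        "Ana":   {"John": 8,  "Sarah": 12, "Ben": 10, "Kim": 10},
        "Ben":   {"John": 8,  "Sarah": 13, "Ana": 10, "Kim": 9},
        "Kim":   {"John": 8,  "Sarah": 12, "Ana": 10, "Ben": 10},
    }
    for name, result in peer_results(ratings).items():
        print(name, result)

Run as written, John averages 8 and receives 80% of any group grade, Sarah averages 12 and receives 120%, and no one falls at or below the failing threshold of 7.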

Here are some observations after many years of using peer evaluations:

  • Most students are reasonable. Although they are inclined to be generous, most give scores between 8 and 12.
  • Occasionally, I receive a set of scores where one isn’t consistent with the others. For example, a student may get a 10, 10, 11, and a 5. Obviously, something is amiss here. When this happens, I set the odd number aside and use the other scores for the average (a small sketch of this check appears after this list).
  • About one group in five initially will have problems because one or two people are not participating adequately or are habitually late or absent. These problems can be corrected.
  • It is essential that you give a practice peer evaluation about one-third or one-half of the way through the semester. The students fill these out and you tally them and give the students their average scores. You must carefully remind everyone what these numbers mean, and if they don't like the results, they must do something to improve their scores. I tell them that it is no use blaming their group members for their perceptions. They must fix things, perhaps by talking to the group and asking how to compensate for their previous weakness. Also, I will always speak privately to any student who is in danger. These practice evaluations almost always significantly improve the group performance. Tardiness virtually stops and attendance is at least 95%.
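As a footnote to the second observation above (the 10, 10, 11, and 5 example), here is one simple way such a check could be written. The three-point gap is my own illustrative threshold for "isn't consistent with the others," not a fixed rule from the text.

    # Illustrative sketch only: if one rating sits far from the rest,
    # set it aside and average the others. The three-point gap is an
    # assumed threshold, chosen for the example.
    from statistics import median

    def average_without_outlier(scores, max_gap=3):
        for i, score in enumerate(scores):
            others = scores[:i] + scores[i + 1:]
            if abs(score - median(others)) > max_gap:
                # The odd number is set aside; average the rest.
                return sum(others) / len(others)
        return sum(scores) / len(scores)

    print(average_without_outlier([10, 10, 11, 5]))  # 10.33..., the 5 is set aside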