A proposal from Tim McKay, David Gerdes, and August Evrard
Department of Physics, University of Michigan

The students we teach come to us from a variety of backgrounds, and bring to our courses a wide array of talents and divergent levels of interest. While the particular manner in which we teach has a strong effect on average student performance, it is these individual differentiating factors that drive the often broad range of student outcomes. Too often our attempts to assess the success of our teaching have ignored these factors, failing to acknowledge that some students are likely, no matter what we do, to perform much better, or much worse, than average. If we want to seriously study how best to help those who struggle most, or how to truly challenge those who excel, we need to know which students we ought to expect to fall behind and which are almost certain to succeed.

Efforts to assess the performance of students in our courses rely on a variety of quantitative measures. Exam scores, homework performance, Qwizdom responses, CRLT scores, and final grades (among many others) provide opportunities to see how students are doing. Community-wide tools like the Force Concept Inventory in physics or the Graduate Record Exams provide ways of comparing among institutions. When we implement new teaching methods, we often look to these tools to assess impact, comparing the results from year to year. We would like to know, in a detailed way, whether students are doing better or worse than they have in the past.

The students this term are not, of course, the same students we had last year. Perhaps this year's students are simply better (or worse) than last year's. Not knowing this seriously undermines our ability to assess how we're doing. Occasionally, when a class is really large, one can reasonably assume the range of student ability is similar from one term to another.
But even then, it is impossible to know whether a measured improvement comes from better performance among the weakest students (bringing up the floor), among the strongest students (raising the roof), or uniformly across the class. Understanding this is important, because different students may benefit from a given teaching approach in different ways. To learn more about how well students are doing, we need to know how well we expected them to do. Our tool for predicting performance is each student's prior individual history.
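The idea of comparing actual outcomes against expectations set by prior history can be sketched in a few lines. The example below is a minimal illustration, not the proposal's actual method: the student records, GPA scale, and use of a simple least-squares fit of course grade on prior GPA are all assumptions made for the sake of the sketch. The residual (actual minus expected grade) then indicates which students performed above or below expectation.

```python
# A minimal sketch of "performance relative to expectation":
# fit a least-squares line predicting course grade from prior GPA,
# then examine each student's residual (actual minus expected).
# Data and field layout are hypothetical.

def fit_line(xs, ys):
    """Ordinary least-squares intercept a and slope b for y ~ a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical records: (student, prior GPA, grade earned in this course)
students = [
    ("A", 3.8, 3.9),
    ("B", 3.1, 2.9),
    ("C", 2.5, 3.0),
    ("D", 3.5, 3.2),
]

gpas = [gpa for _, gpa, _ in students]
grades = [grade for _, _, grade in students]
a, b = fit_line(gpas, grades)

for name, gpa, grade in students:
    expected = a + b * gpa
    print(f"student {name}: expected {expected:.2f}, "
          f"earned {grade:.2f}, residual {grade - expected:+.2f}")
```

Aggregating such residuals by quantile of prior GPA is one way to see whether a change in teaching lifted the floor, raised the roof, or shifted the whole class.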