ABSTRACT: Top five flashpoints in the assessment of teaching effectiveness
Despite thousands of publications over the past 90 years on the assessment of teaching effectiveness, there is still confusion, misunderstanding, and hand-to-hand combat over several topics that seem to pop up over and over again on listservs and blogs, in articles and books, and in medical education/teaching conference programs. If you are measuring teaching performance in face-to-face, blended/hybrid, or online courses, then you are probably struggling with one or more of these topics, or flashpoints.
The aim is to decrease the popping and struggling by providing a state-of-the-art update of research and practices and a “consumer’s guide” to troubleshooting these flashpoints.
Five flashpoints are defined, the salient issues and research are described, and, finally, specific, concrete recommendations for moving forward are proffered. Those flashpoints are: (1) student ratings vs. multiple sources of evidence; (2) sources of evidence vs. decisions: which come first?; (3) quality of “home-grown” rating scales vs. commercially developed scales; (4) paper-and-pencil vs. online scale administration; and (5) standardized vs. unstandardized online scale administrations. The first three relate to the sources of evidence chosen, and the last two pertain to online administration issues.
Many medical schools/colleges, and higher education institutions in general, fall far short of their potential, and of the available technology, to comprehensively assess teaching effectiveness. Specific recommendations are given to improve the quality and variety of the sources of evidence used for formative and summative decisions, as well as their administration procedures.
Multiple sources of evidence collected through online administration, when possible, can furnish a solid foundation from which to infer teaching effectiveness and contribute to fair and equitable decisions about faculty contract renewal, merit pay, and promotion and tenure.