Peer Observation of Teaching: Best Practices


According to its mission statement, Iowa State University’s primary goal is to “create, share and apply knowledge to make Iowa and the world a better place.”1 According to the ISU Faculty Handbook, Section 5.2.2.3.1, Scholarly Teaching, “Most faculty have significant teaching responsibilities, and the quality of their teaching is a major factor in evaluating their accomplishments and performances.”

Background and resources

Faculty members may use a portfolio format to document their teaching activities, including teaching philosophy, student ratings of teaching, teaching materials and forms of assessment, peer evaluations based on classroom observations and review of teaching materials, and evidence of student learning. There are multiple forms of standardized peer review for research documentation, including submitting work for presentation, publication, and grants. For teaching, there are fewer standardized and recognized means by which to be observed and evaluated. Given this reality, it is necessary to develop best practices for peer observation that may be individually tailored yet broadly adopted by colleges and units in the University.

Based on an initial document created by a CELT Advisory Board subcommittee in spring 2009, the following report was approved by the CELT Advisory Board in spring 2017.

Example evaluation templates

Quality Matters (QM) review processes for online/hybrid courses

Quality Matters (QM) is a faculty-centered, peer review process based on best practices for measuring the quality of online and hybrid courses.

QM self-review tool

Course self-reviews engage instructors in deliberate thinking about the critical components that make their online courses effective. The Quality Matters Self-Review tool uses the QM Rubric to help instructors reflect on course-specific logistics, learning objectives, assessments, instructional materials, learner activities, interaction, course technology, learner support, and accessibility and usability.

Completing a self-guided review before the pre-observation meeting is one way for observed instructors to contextualize their courses and identify areas in which they would like input from observers.

ISU instructors can create a free QM account to activate the semi-automated Self-Review tool on the QM website under “Course Review Management Systems” in the top navigation bar. Once completed, a PDF version of the self-review can be exported and shared with peer observers.

Quality Matters as a collegial peer review process

Upon request, CELT can perform an internal review of online course design through a consultation centered on the specific courses and pedagogical concerns instructors may have about their course designs before their teaching is observed by peer reviewers. CELT experts trained in the QM review processes conduct the internal review using the QM self-review tool, guided by QM’s underlying principles of collegiality, collaboration, continuous course improvement, and commitment to research-based practices. Such reviews focus on course design only and are formative by nature. The actual delivery of the online course, including student-to-student and student-to-instructor interactions, is not under review. Sharing the review results with teaching peer reviewers is at the discretion of the observed instructor.

Part One: Formative and Summative Observations

The purpose of formative observations is to provide advice to help a practitioner improve. Formative evaluations generally occur in the context of a relationship with a mentor or with an independent expert at an organization like CELT. To be fully effective, formative evaluations should be confidential and should remain the property of the instructor being observed. This allows an instructor the freedom to try new approaches and techniques without fear of penalty.

Summative evaluations, in contrast, are not confidential and are usually performed for use in personnel decisions such as contract renewals, promotions, and the granting of teaching awards. Formative evaluations assess an instructor’s teaching on its own terms. Summative evaluations add a comparative dimension, placing the individual teacher’s performance in explicit relation to their colleagues’ performance. The importance of this distinction is widely acknowledged in the literature.2 These two forms of observation should be practiced in conjunction with one another. Between summative evaluations, an instructor should have the opportunity to use formative observations to hone their teaching skills.

At the same time, however, there is a consensus in the literature that each type of evaluation’s impartiality and effectiveness depend on its separation from the other. The institutional framework of ISU already supports this distinction. Through the Teaching Partners Program, CELT and trained administrators can provide instructors with ongoing, formative observation; the departmental teaching mentors assigned to junior faculty can fulfill a similar function. Summative evaluation, in contrast, should be performed by other faculty members in an instructor’s own department or in another department that is closely related. Where possible, these colleagues should be of higher rank than the faculty member being evaluated.

Part Two: Best Practices

A unit designing a system of peer observation of teaching should consider the following:

  • A discipline-specific discussion of what effective teaching entails, either among the evaluators or in the unit as a whole. Such a discussion should yield an observation document that the evaluators can use to structure their judgments. See the examples provided below.
  • For both formative and summative observations of teaching, the reviewer and reviewee should conduct pre- and post-observation meetings, either via email or in person. The pre-observation meeting is crucial for providing contextual information about the course, the students, and the instructor. The post-observation meeting, best held in person, gives the observer and the observed instructor the opportunity to discuss the class session.
  • Acknowledgment of the distinction between formative and summative observation. According to best practices identified in the literature,
    • Different people should perform formative and summative observations.
    • Formative observations can be conducted by a person chosen by the instructor being observed. Summative evaluators should be elected or appointed.
    • Summative evaluators should be colleagues of equal or greater rank in a department or discipline the same as or similar to that of the teacher being evaluated.
    • To ensure sufficient reliability, a summative evaluation should be the collaborative product of a committee of at least two evaluators.
    • To be fully effective, summative evaluation should not occur on its own. It should instead alternate with an ongoing formative evaluation program provided both by faculty mentors and CELT staff members.
    • Formative and summative evaluations should occur at prescribed intervals that the instructor being evaluated knows in advance, most likely as part of mandatory reviews for contract renewal, review for tenure, and post-tenure reviews.
    • Assistant professors with teaching appointments should ideally have at least three observations before promotion and tenure, with one of them occurring before reappointment.
    • Each review should be conducted in a separate academic year.
  • The observation and evaluation period for Associate Professors should be aligned with a post-tenure review with a minimum of two observations before promotion to full Professor.
  • Peer review of Professors should be aligned with the post-tenure review.
  • Development and approval of a form for peer observations of teaching.
  • The evaluator discusses the written assessment of class observations with the instructor. Both the evaluator and the instructor sign the written assessment, which is then submitted to the department head with a copy to the instructor.

Part Three: How to Begin Discussion

Without an initial period of reflection about what good teaching involves and what specific instructional objectives a given unit wishes to achieve, the evaluators’ conclusions may be unreliable in ways that impair their usefulness. A document to structure the evaluators’ assessment is a straightforward and efficient way to provide the solid intellectual foundation any effective system of peer observation of teaching requires.

A department or unit wishing to create a peer observation of teaching system could begin with questions such as, “What aspects of teaching will we observe?” From this brainstormed list, categories are developed and prioritized. It is recommended that the group identify only four to eight main categories of teaching performance. Once these categories are identified, the group might ask, “What questions might we ask about performance within this aspect of teaching?” For example, if the category is “instructor organization,” observers might be asked to comment on such questions as: “Did the teacher arrive on time?” “Was the class setting prepared and appropriate for the day’s activities?” These items then form the subtasks itemized within the categories. The questions should focus on those aspects of instructional performance that the group considers most critical to learning and most readily observable.

Once the list of categories and guiding questions is generated, the group needs to decide which type of observation protocol form to use – checklist, narrative, scaled rubric, or another format. The form should be piloted by peers and then revised for use. A policy document detailing how the form should be used (formatively or summatively, and how often) should accompany the form.3

The following characteristics of effective teaching have emerged within the literature and could potentially serve as the basis for a more specific, discipline-tailored rubric. The list is divided into seven categories, each representing one aspect of a teacher’s responsibilities, broadly conceived. Examples of these categories with subtasks in checklist, narrative, and scaled rubric form appear below.

  • Instructor preparation and organization
  • Instructional strategies
  • Content knowledge
  • Presentation skills
  • Rapport with students
  • Classroom management
  • Clarity

Notes

1 “Mission and Vision,” http://www.president.iastate.edu/mission, accessed April 5, 2017.

2 See, e.g., Ronald R. Cavanagh, “Formative and Summative Evaluation in the Faculty Peer Review of Teaching,” in Innovative Higher Education 20:4 (1996): 235-240; John A. Centra, Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness (San Francisco: Jossey Bass, 1993); John A. Centra, “Evaluating the Teaching Portfolio: A Role for Colleagues,” in New Directions for Teaching and Learning 83 (Fall 2000): 87-93; Chism; Hutchings, ed.; Trav D. Johnson and Katherine E. Ryan, “A Comprehensive Approach to the Evaluation of College Teaching,” in New Directions for Teaching and Learning 83 (Fall 2000): 109-123.

3 Nancy Van Note Chism, Peer Review of Teaching: A Sourcebook, 2nd ed. (Bolton, MA: Anker, 2007).