THE PRINCIPALS' TRAINING CENTER
Teacher Effectiveness and Student Learning: A Rant
Why do we still debate whether or not student learning results matter in assessing a teacher’s effectiveness? By Bambi Betts 05-Apr-17
Just back from another international education conference where international school heads came together to once again attempt to unpack the sticky issue of teacher evaluation. And I just don’t get it. Did I miss something? How is it possible that we still debate whether or not student learning results should be included as pivotal data in determining the effectiveness of a teacher? How is it possible that a profession would even remotely consider the idea that its bottom line (learning) would not factor in when examining the most essential ingredient (teachers) of its success?
We either accept the research of our own profession or we don’t—I see no middle ground. Every single study conducted that I have managed to get my hands on says the same thing: in the school context, the quality of the teaching is the single biggest determinant for learning. Yet still today we have teachers and principals who are outraged that we would even think of looking at learning results when it comes to evaluating (or even just supervising) our skillful, paid professionals. This is a profession, not a job. Professions have standards for their practitioners and those practitioners are held accountable to them. Moreover, standards get raised as the profession’s own research brings new understandings to light.
Our standards in the education profession have been raised. All those research studies strongly indicate that teaching is the most critical factor in learning. They have measured the effect of teaching on learning, not on how teachers behave professionally, or whom they “collaborate” with, or how they plan units. We know unequivocally that what teachers do matters, and that mattering can only be revealed through examining the learning that each teacher’s kids achieve.
The major argument against using learning results as even a small part of a teacher’s evaluation data is well known: there are too many factors that affect learning over which the teacher has no control. It would just not be “fair” to draw any conclusions about a teacher’s effectiveness and therefore professional next steps based on what the kids have learned. Better to just look at all the teacher’s “inputs” and assume that if he is hitting all the marks, then regardless of learning results, we will stamp him “effective.”
What kind of reasoning is that? We can’t have it both ways. The research is conclusive: the teacher is the most significant factor in learning. So it is completely fair and logical to use learning results as the primary indicator of teacher effectiveness. And shall I go out on a limb and say that it may actually border on unethical not to do so? If, as many educators would like to claim, the learning results are too “contaminated” by other inputs, rendering the teacher a small part of the learning of any given kid (NOT!), then why set up a system where teachers are the centerpiece of the school? Let’s disregard the research, dismantle that practice, and turn our full attention to those “contaminating” inputs that are causing learning results to be such unreliable indicators of teacher effectiveness. Shall I rant on?
Of course it is important to collect evidence of what the teacher is doing, because those are the things that can be modified if the learning results are not what they should be. And of course the tools used to collect evidence of learning need to be valid and reliable (and who makes those—yes, teachers). But relying solely on examining instructional and professional behaviors that should lead to learning without looking at the learning results in tandem is somewhere on the continuum of completely stupid to downright damaging. No wonder teacher growth, appraisal, evaluation, and supervision schemes—the whole lot, whatever you call them—have never worked and still don’t. We are looking in the wrong places, driven by faulty assumptions. In the words of the late Pete Seeger, “When will we ever learn?”
05/04/2017 - Bronco
Dear Ms. Betts:
It is hard to argue against the "conclusions" of studies relating teacher effectiveness to student learning. It is my hope that most teachers and self-respecting professionals would not disagree with those conclusions.
The issue arises when these studies are only partially implemented. I tend to call it the independent contractor syndrome within schools and districts: teachers are held responsible for every aspect of student learning, even while schools and districts fail to provide critical elements.
For example, I have not worked for a single school where I have had the privilege of using complete (not to mention current) materials, or assessment and achievement models situated within a complete curricular framework.
It is worth mentioning that the hearts of hard-working professionals, regardless of position, were in the right place, but the outcomes did not necessarily reflect the theories that are so easily thrown about. The same goes for "integrating technology," which often amounts to using smart boards, arguably a mere substitute for the blackboard.
The end result is that most professionals fear for their jobs in one respect or another, leading to the time-consuming practice of finger-pointing. The level of innovation and entrepreneurship within schools is relatively low, and consultants are brought in to take the heat for the success rate of initiatives.
04/11/2017 - J.T. in Q
When will they ever learn, indeed! Learn what? How reliable are the measures of learning, particularly across varied grade levels and subjects? The rant finesses the point about assessment reliability. We hesitate to rely on student performance for faculty evaluation because the assessments are themselves questionable, and numeric measures often imply a level of precision that is unwarranted. A mix of objective and subjective evaluation criteria balances the scale.