There’s a misconception developing around the umbrella term “teacher quality” and the processes it covers – a belief that teacher evaluation and teacher improvement are the same process.
I see this happening in my day-to-day conversations with people about Edthena. I often say that we’re bringing observation and feedback online for teacher improvement. The most common next question is, “Oh, so you’re using video to evaluate teachers?”
While this might loosely align with a definition of the word evaluate, it doesn’t match the on-the-ground implication of evaluating a teacher.
From my experience as an educator – and that of nearly every other educator I meet – it seems obvious that evaluating a teacher is inherently not the same as providing coaching and support. So what, then, might be contributing to this alternative understanding within the K-12 public education system?
This misunderstanding is likely rooted in the general language we use to describe teacher quality initiatives in conversation and media. At some point, we started using the less politically charged and more generic label of “teacher quality” in place of the more specific but controversial label of “teacher evaluation” to describe specific efforts and initiatives.
The impact of the phrase-switching is that we conflated these concepts in the minds of those learning about the topic.
We need to make clear that the push for teacher quality is, in fact, a push for the dual efforts of teacher evaluation and teacher improvement, and we need discussion that addresses these processes as separate activities operating in conjunction with each other and in parallel.1
A structural division
In both our current policies and implementation, the processes of teacher evaluation and teacher improvement are often very separate.
Take, for example, Florida, a state that recently overhauled its teacher evaluation system. A quick glance at the educator section of its website makes it clear that the state separates teacher development and teacher evaluation conceptually. And digging into the info a bit more reveals that these two activities are defined by separate statutes.
Because both of these activities are separately defined, it also means they’re separately funded. Different people and policies drive the purchase decisions of teacher evaluation tools and teacher professional development tools. This also means different funding streams are available for each type of purchase.
So when we hear that billions are spent on teacher improvement, those dollars are not being spent on teacher evaluation. And when we hear of grants to incentivize development of new teacher evaluation systems, those dollars are not incentivizing teacher improvement activities.
A practical division
More important than policy and funding is how this separation feels from the vantage point of a teacher. For a teacher – no matter how positive or growth-oriented – a high-stakes evaluation process is inherently threatening because one’s job is on the line.
It also stands to reason that when teachers respond to national surveys saying they want more professional development, they’re not saying they want more high-stakes evaluations. They want more opportunities to improve – opportunities where they can expose their weaknesses and get help without fear of a poor evaluation.
When it comes to teacher improvement, participation in the process is typically tracked and reported for the purposes of high-stakes evaluation (e.g. mandatory reporting of professional development hours). But the data collected as part of the process – from formative assessments to classroom-based coaching – are not used for high-stakes evaluation.
A great example of creating non-evaluative opportunities for improvement is the data analysis workshops that are part of The Achievement Network. Because the student data from these assessments (contractually) cannot be used for high-stakes evaluation, teachers are willing to analyze and openly uncover their students’ skill gaps with colleagues.
Is it ever good news if the analysis indicates 80% of your students cannot demonstrate a certain skill? No. But it’s not “you’re going to lose your job” news, either. It’s a professional development tool highlighting gaps in the teaching and opportunities for next steps.
Meeting the need for safety vs. the need for achievement
This “way it feels to teachers” interpretation of the differences fits nicely with psychological theories of human motivation and helps clarify how the incentives are aligned in each process.
In evaluation, the incentivized best outcome for a teacher is to keep one’s job and thus minimize visible weaknesses. During improvement, the incentivized best outcome is to demonstrate growth and thus expose weaknesses along the way to get targeted feedback.
These motivation differences help solidify for me why evaluation and improvement activities are fundamentally different and must be treated as complementary activities rather than a singular process.
One of the things I hear most often when I talk to teachers is that they’re eager for more chances to work together, to learn from each other.
New teachers want regular access to colleagues with experience who can help them grow into the profession. Experienced teachers, likewise, want to become leaders in their schools by mentoring new teachers.
Be a teacher quality MythBuster
As we increase the stakes of teacher evaluations, we’re simultaneously increasing the need for authentic opportunities for teachers to improve. Inasmuch as you hold someone’s feet to the fire for actually improving, you’re going to need to provide them with opportunities to get better.
All this seems straightforward, I know. But this is why I labeled this discussion one about misconceptions. It turns out that people will fill gaps in their understanding with the best available information and then those assumptions become the frame for interpreting information going forward, even if presented with the “correct” information at a later time.
I was introduced to the potential long-term impact of misconceptions as a second-year science teacher. I remember being shocked by the opening scene of the documentary A Private Universe as I looked in on commencement day at Harvard University.
Graduates were asked to explain the cause of Earth’s seasons, and nearly every one incorrectly explained that the seasons are caused by Earth’s distance from the Sun. This misconception is so strong that it overpowers the (likely) multiple attempts by teachers to teach the correct information.
Interestingly, the root cause of this common misconception can be traced to a common diagram illustrating Earth’s tilt toward or away from the Sun. The diagram reinforces a mistaken belief about varying distance from the Sun: Earth’s circular orbit is drawn with a perspective tilt, so it appears to be an elliptical orbit of varying distance.
As a science teacher, I took away a clear message: potentially misleading but factual information – and the resulting misconceptions – are dangerously strong influences on the way we process information over long periods of time.
My hope is that, after hearing about the misconception regarding the meaning of “teacher quality” and being reminded of the facts, those reading this will feel the same as I did as a science teacher. I felt the need to actively shift others’ understanding on the topic. We can’t let others think of teacher quality as a synonym for teacher evaluation.
If we don’t align fully on the broader meaning of teacher quality, we risk overlooking our dual responsibility to prioritize both evaluation programs and improvement programs. We risk designing an education system that over-prioritizes evaluation mechanisms without the proper supports to help teachers improve.
Having quality teachers requires more than just evaluating them on outcomes. Having quality teachers also means we’re helping them improve. And in order for teachers to improve, they must feel safe asking for help. Evaluation processes alone will not create this type of safety.
1Teacher quality is also the push for better teacher certification, but that’s a whole different discussion from on-the-ground operations in districts and schools.
Note: This commentary was originally published in edSurge on Feb. 5, 2013.