How are Alabama’s teachers being evaluated?
Are those evaluations helping teachers get better at teaching?
Are students learning more as a result of those evaluations?
The first question is easy enough. The latter two are more confounding.
Identifying effective teachers who improve student learning is the subject of Sen. Del Marsh’s (R-Anniston) PREP (Preparing and Rewarding Education Professionals) Act, which barely cleared the Senate Education and Youth Affairs committee last week and is expected to be considered by the full Senate in the near future.
Since we published the draft of Marsh’s all-things-teacher-improvement-and-reform proposal last December, it has been the topic of conversations across Alabama.
The PREP Act covers a lot of ground, from mandating a portion of a teacher’s evaluation be based on student growth (measured by test scores) to creating a teacher advisory committee to work with state lawmakers on education reform initiatives.
Most agree that the supports, incentives, and recognition portions of the bill are good for students, teachers and schools. We’ll set that aside for the moment.
Even the change in the time it takes to earn tenure, increasing from three years to five years, doesn’t seem to be as big an issue as one might think. We’ll set that aside (mostly) for the moment, too.
But basing 25% of a teacher’s evaluation solely on student growth on tests from one year to the next is causing an uproar.
The PREP Act calls for using a “student growth model,” also known as a value-added measure, as a measure of teacher effectiveness and mandates that 25% of a teacher’s evaluation be based on the results of that student growth model.
Many, including Marsh, point to the Alabama State Department of Education’s (ALSDE) Educator Effectiveness model as having created that 25% measure in the first place.
Marsh held up the image below (taken directly from the ALSDE’s Educator Effectiveness model) during the Senate Education and Youth Affairs committee’s public hearing on Tuesday, asking the ALSDE’s Coordinator of Educator Effectiveness Mark Kirkemier to confirm the image depicted the ALSDE’s plan.
Kirkemier responded, “Yes, sir,” and then clarified that ACT Aspire is a part of the student growth piece, but not the full 25%.
The pie graph below represents the various components of the ALSDE’s new teacher evaluation process. The light pink color represents the proportion of the teacher evaluation that is based on “student growth data”.
So if the ALSDE’s plan already includes a component for student growth, and that amount is 25%, what’s the big deal?
It’s what Kirkemier said: in the ALSDE’s plan, student growth is not based solely on the ACT Aspire. In fact, the ALSDE’s plan allows districts to determine which test data is “meaningful in determining student growth”.
It would likely include the ACT Aspire, but could contain other measures that teachers use to determine how much learning a student has gained during the course of a year. Those measures can differ greatly according to subject and grade.
According to ALSDE officials, giving teachers within a district the chance to debate, discern, and ultimately determine which pieces of data matter most allows for thoughtful reflection on the importance and proper use of student test scores. It also gives teachers time to buy in to using those scores as a part of their evaluations.
About 50 of Alabama’s 136 school districts are in the process of implementing the new model.
The ALSDE has purposefully rolled out the Educator Effectiveness model slowly in order to allow districts to determine, based on the needs of their communities, what matters most for teachers to be doing, officials said. What’s important for teachers to focus on in Perry County, for example, may be different from what’s important in Jefferson County.
Supporters of the PREP Act are quick to point out that the remaining 75% of the evaluation can still be determined by the district, which matches up with what the ALSDE’s plan calls for.
Here’s a look at what the ALSDE’s plan requires compared with which pieces the PREP Act affects. The text in red indicates where the two plans diverge.
While the PREP Act only appears to affect one actual component of the evaluation, it’s what happens next that has teachers really riled up.
The Procedural Application of the Evaluation
The procedural application of the evaluation is where opponents and supporters really disagree.
Here’s a look at procedural comparisons between the two plans.
The PREP Act uses teacher evaluations to create an overall effectiveness rating based on a five-point scale.
That effectiveness rating will then be used in personnel decisions (whether or not to grant a teacher tenure, and whether a teacher could lose her job during a reduction-in-force declaration).
The effectiveness ratings of all teachers within a school will be published in aggregate form (not by individual teachers’ names) on each school’s web site. See the image below for a look at the type of school-level data publicly reported in each state.
Using teacher effectiveness ratings in that way makes it a measure of external accountability.
In contrast, while the ALSDE’s plan also results in an effectiveness rating, that rating is used only by the teacher and the principal to identify strengths and weaknesses and to then create a professional development plan.
That is a measure of internal accountability.
Importantly, the ALSDE’s plan doesn’t mandate a single effectiveness rating. Rather, it allows districts to determine whether teachers will receive one overall rating or multiple ratings, based on a four-point scale.
In other words, the PREP Act’s primary purpose for teacher effectiveness ratings is external accountability to the public, whereas the ALSDE plan’s primary purpose for those ratings is internal accountability and reflection to help teachers improve their craft.
The PREP Act goes further, requiring districts to use effectiveness ratings in deciding whether teachers earn tenure (for those hired after January 1, 2017) and in determining which teachers are laid off in a “reduction in force” situation.
According to opponents of the bill, that requirement makes these ratings “high stakes” and changes the role of the evaluation, undermining its very use as a tool for individual improvement.
If teachers are worried about landing at the low end of the scale, opponents say, they may grow more competitive with teachers in their own school and thus less likely to collaborate to improve learning for students other than their own.
Supporters say competitiveness is healthy and will result in improved student learning.
While research is mixed regarding the accuracy of teacher effectiveness measures, no research could be found showing that publicizing effectiveness ratings actually improves student learning in schools.
A number of states now publicize those ratings; here are a few of them:
What Happens Next?
Resistance to the teacher evaluation portion of the PREP Act is growing, and whether lawmakers will ultimately make it the law of Alabama remains to be seen.
NOTE: The first chart was updated March 14, 12 noon, to reflect that the PREP Act mandates only that ACT Aspire be one of the pieces of student data considered in student growth.