The long-awaited teacher evaluation overhaul bill was filed by Sen. Del Marsh (R-Anniston) on Tuesday. Called the PREP Act (“Preparing and Rewarding Educational Professionals”), it changes everything about the way Alabama’s 43,000 K-12 classroom teachers, assistant principals, and principals are evaluated.
The PREP Act also mandates that annual evaluations be used to produce a teacher effectiveness rating, which will be shared in aggregate form (no individual teacher ratings by name) on school web sites.
Besides mandating annual evaluations for those education employees, it also:
- Creates incentives for teachers to teach in hard-to-staff areas and schools (a one-time $2,500 bonus, plus an additional $1,000 if a teacher is still teaching in that job for a fourth year), called the Alabama Teacher Recruitment Fund (with a $5 million appropriation for FY17)
- Creates a mentoring program for first-year teachers (providing $1,000 for teachers agreeing to serve as mentors), called the Alabama Teacher Mentor Program (with a $3 million appropriation for FY17)
- Creates a Teacher Advisement Committee consisting of, among others, nine K-12 teachers and two K-12 principals to advise legislators about education policy
- Increases the number of years to obtain tenure to five, and a teacher must have favorable effectiveness ratings in three consecutive years before obtaining tenure, effective for teachers hired after January 1, 2017.
- Funds the Legislative Performance Recognition Program, originally created in 2012 but never funded. Appropriates $10 million for school districts that have achievement results in the top 10% in the state or improve their letter grade (from the A-F grading system not yet implemented) by one grade.
Reaction from the education community was immediate. And it wasn’t good.
While that draft (technically the second draft, though the first draft went unpublished) contained multiple and varied pieces (including creation of a longitudinal data system that has since been pulled out and introduced as a separate bill), the biggest controversy centered on using student test scores in teacher evaluations and tying teacher pay to those evaluations.
That draft mandated various components to be a part of a teacher’s evaluation and what percentage of a teacher’s final evaluation those components must be. It was detailed, and it was clear.
The bill then disappeared for a few weeks.
In the official version filed on Tuesday, in addition to the items bullet-pointed above, 25% of a teacher's, assistant principal's, or principal's evaluation must be based on student growth.
The other 75% of the evaluation is up to the district, but must include two observations and the results of student surveys (for grades 3 through 12).
And those evaluations must then translate into teacher effectiveness ratings. Five levels are prescribed, from “significantly below expectations” to “significantly exceeds expectations”.
In a prepared statement, Marsh said, “This bill shows a commitment to the education professionals in Alabama. Over the past year we have worked with anyone who would be impacted by this bill, including classroom teachers themselves, and I believe this piece of legislation reflects input from all those involved.”
What Teachers Are Saying
As mentioned previously, Alabama’s educator community reacted strongly when the first draft was published. Two of Alabama’s Teachers of the Year took to various airwaves to voice their concerns.
2015 Teacher of the Year Ann Marie Corgill appeared on the Matt Murphy show in early February to share her concerns. Murphy brokered a meeting between Corgill and Marsh. Corgill expressed her gratitude to Marsh for spending time to discuss the bill on a second appearance on Murphy’s show.
[Embedded tweet: Del Marsh (@SenatorDelMarsh), February 10, 2016]
After the meeting, a new draft was prepared, and the name changed to the PREP Act.
After seeing the draft, Corgill wrote to Marsh about the concerns she still had, and gave permission to publish the full letter here. Though a few more things changed before the final bill was filed, Corgill said earlier today that her basic concerns remain.
In part, Corgill said the PREP Act represented “a flawed and misinformed view of what constitutes teacher and administrator effectiveness. It also assumes that measuring growth in achievement of students and rating teacher effectiveness based on narrow measures of achievement will, in turn, prepare teachers and enhance their effectiveness and expertise. Student growth models are valuable for looking at a range of factors that may influence student growth over time, but this type of evaluation model to show teacher effectiveness is highly unstable. Teacher ratings are significantly affected by the differences in the students assigned to them.”
She also requested that Marsh not use her name in conjunction with the PREP Act. No one has responded to her letter.
Jennifer Brown, current Alabama Teacher of the Year, has also been vocal about her concerns about the PREP Act. Brown is active on Twitter and is encouraging others to use the hashtag “#noSB316” to indicate their concern with the PREP Act. SB316 is the bill’s official identity in the legislature.
[Embedded tweet: Jennifer Brown (@jbrownaps), March 2, 2016]
Brown contends value-added measures (VAMs) are flawed and that teacher effectiveness ratings do not encourage collaboration among teachers. Brown has started an innovative campaign to get state lawmakers into classrooms to let them see for themselves the great things happening in Alabama’s schools.
She spoke with ABC33/40 on Tuesday afternoon about her concerns. From the interview:
“Holding teachers accountable for a test score is not the answer,” Brown said. “The answer is fully funding education. The answer is finding what works in certain school systems and modeling that in other school systems. The answer is actually letting school systems create their own accountability systems. I believe an accountability system created by educators for educators is much more effective than one created by legislators.”
Value-Added Measures, a.k.a. Student Growth
Value-added measures (VAMs) purportedly measure how much growth a teacher contributed to a student’s test results from one point to the next. Seems simple enough.
In reality, it’s an extremely complicated mathematical model; even researchers are unsure of its validity and whether it accurately isolates a single teacher’s impact on a student’s test scores.
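To make the basic idea concrete, here is a deliberately simplified toy sketch, with hypothetical data, of the gain-score intuition behind VAMs: predict each student’s current score from the prior year’s score, then average each teacher’s students’ residuals (how far their students land above or below prediction). Real VAMs are far more elaborate, controlling for student demographics, classroom composition, and multiple years of data, which is exactly where the complexity and the validity debates come in.

```python
# Toy value-added illustration (hypothetical data, NOT a real VAM).
# Step 1: ordinary least squares predicting current score from prior score.
# Step 2: a teacher's "value added" = mean residual of that teacher's students.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# (prior score, current score, teacher) -- invented for illustration
students = [
    (70, 78, "A"), (80, 86, "A"), (60, 70, "A"),
    (70, 72, "B"), (80, 80, "B"), (60, 64, "B"),
]

a, b = fit_line([s[0] for s in students], [s[1] for s in students])

residuals = {}
for prior, current, teacher in students:
    predicted = a + b * prior
    residuals.setdefault(teacher, []).append(current - predicted)

value_added = {t: sum(r) / len(r) for t, r in residuals.items()}
print(value_added)  # → {'A': 3.0, 'B': -3.0}
```

Even in this toy version, the estimate hinges on the assumption that students are comparable across classrooms; if Teacher B’s students face obstacles the model doesn’t capture, the negative score says more about the model than the teacher, which is the core of the critique quoted later in this article.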
VAMs gained the attention of state departments of education in late 2009, when Race to the Top (RTTT) grant money was made available to states that agreed to use new ways to measure effectiveness and improve teacher evaluations. Alabama did not win approval for its application for RTTT grant money.
In a 2014 statement, the American Statistical Association (ASA) made the following recommendations regarding the use of VAMs:
- The ASA endorses wise use of data, statistical models, and designed experiments for improving the quality of education.
- VAMs are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results.
- Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.
- VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
- VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
- Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.
- VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools.
This may be the most important piece of information about VAMs in their statement:
Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.
In January, the Institute of Education Sciences (IES), a research arm of the U.S. Department of Education, published results of a study that reviewed multiple years of growth for 370 elementary students and warned states that the measured connection between student test score growth and how the teacher impacted that growth did not meet “the level of reliability traditionally desired in scores used for high-stakes decisions about individuals”.
What Alabama Is Already Doing Around Teacher Evaluation and Teacher Effectiveness
Currently, Alabama’s teachers are evaluated using EDUCATEAlabama, which is largely based on a teacher’s self-evaluation and goal-setting in collaboration with an administrator or other supervisor of the evaluation process. The evaluative components are based on the Alabama Quality Teaching Standards.
However, the ALSDE’s office of teacher effectiveness has been working to transform teacher evaluations in the last couple of years.
In late October, Alabama received approval from the U.S. Department of Education for its “State Plan to Ensure Equitable Access to Excellent Teachers”. That plan, required of all states receiving Title I, Part A funds (which are tied to students in poverty), was connected in part to Alabama’s waiver from No Child Left Behind requirements. Those requirements have since changed with the passage of the Every Student Succeeds Act (ESSA), which backed off the requirement that student growth be a significant component of teacher evaluations, though it’s unclear what exactly the final rules and expectations will be.
It’s important to note, though, that both Alabama’s teacher equity plan and Plan 2020 state that student growth will be used as a component of teacher evaluations. Here’s page 19 of Plan 2020 with that section boxed in red.
What that looks like is still being developed, but in systems that are piloting new teacher evaluation systems, student growth data counts for 25% of a teacher’s evaluation, which is the same percentage the PREP Act proposes.
However, the ALSDE’s evaluation system stops short of establishing teacher effectiveness ratings based on those evaluations.
SB316 calls for teacher evaluations to result in each teacher being rated at one of five levels, ranging from “significantly below expectations” to “significantly exceeds expectations”. Those ratings determine whether a new teacher gains tenure, and low ratings over two years allow a teacher’s tenure to be revoked.
Teachers with the lowest levels of ratings will be first to be considered for termination if school districts declare a “reduction in force” when they need to lay off teachers.
Though some of us have been following this bill since it first hit the email rounds in December, it only began its official journey on Tuesday.
Marsh’s bill is expected to be considered in the Senate’s Education and Youth Affairs committee as early as next week.
[Even at 1,800 words, this bill cannot adequately be explained in one article. Stay tuned for more coverage as the bill makes its way through the legislative process.]