
Zombie Policy: Using MAP to Evaluate Teachers (You've Been Warned)!

In 2010, Charleston was considering a policy that would base 60% of a teacher's evaluation on MAP scores.

NWEA, the organization that sells the MAP test, warned them that it was a bad idea, and the idea was dropped, dead in its tracks.


Zombie policy: why let a bad idea rot in peace? 

Seven years later, it's back!

Now, MAP scores are being used all by themselves to judge Charleston County School District teachers, threaten their jobs, and place them on performance plans.  Principals who object are being pushed out of their schools.


Hey, CCSD did learn something.  This time they weren't foolish enough to have a public process and put it to a board vote. 

Scroll down to read THE FULL LETTER FROM NWEA:


Charleston County School District Teacher Evaluation Policy


The Kingsbury Center at NWEA is an independent center within the Northwest Evaluation Association,
created to conduct research on trends in educational policy and educational assessment. Through
collaborative research studies with foundations, think‐tanks, universities and NWEA partner schools, the
Kingsbury Center is helping to change the conversations around education’s most challenging issues.
The Center and our partners strive to influence the thinking of leaders at all levels of the educational
system. Our work ranges from research that influences national policy to reports that provide actionable
information to school systems.


Recently, it came to our attention that the Charleston Post and Courier had published a database
reporting on the academic growth of students taught by Charleston County School District teachers.
In this database, CCSD teachers were ranked on the basis of the percentages of their students whose
growth matched or exceeded the growth of Virtual Comparison Groups of students that were created by
the Kingsbury Center for the district. As a consequence of that publication, the CCSD school board is
now considering implementation of a teacher accountability policy in which 60% of a teacher’s
evaluation would be dependent on student growth and achievement data of this type. When we
learned of this, we asked the district administration for permission to send our comments on the
proposed policy to the board, and we were encouraged to do so.


The Kingsbury Center supports efforts to implement accountability for both schools and teachers. We
believe that Charleston students deserve no less. We also believe that student achievement data can
inform the teacher evaluation process. But nearly all experts in our field agree that test results should
not supersede the principal’s role in teacher evaluation. There are several reasons for this:


1. The proposed board policy establishes an expectation that each (we take this to mean each and
every) student advance by no less than one academic year. This could be interpreted to mean that
any situation in which one or more students failed to meet this objective would constitute cause for
a personnel action against a teacher. We doubt that is the board’s intent. The board should be
aware that, according to our most recent norms, even in the top 10% of schools for growth, only
64% to 73% of students (depending on grade level) meet the “one year of growth” target.


2. The statistical methods used for these kinds of evaluations, known as “value‐added” models, are
useful for evaluating schools and can play a role in the professional evaluation process for teachers.
However, these statistical models are designed under a specific set of assumptions about schools,
which, when not met, limit the validity of their findings. For example, the accuracy of a value‐added
measure can be compromised if students and teachers are not randomly assigned to classrooms.
Suppose, for instance, that young teachers are routinely assigned the most difficult classes, that
veteran teachers choose their own teaching assignments and/or students, or that certain advanced
classes are reserved exclusively for students without behavior problems: each of these scenarios can
introduce bias into value-added measurements. Such bias, when it exists, can mistakenly attribute
higher “effect” ratings to some teachers and lower ratings to others, leading to invalid results (a
simple simulation of this effect is sketched after this list).


3. As the stakes associated with value‐added measurement increase, the legal requirements around its
application will grow considerably stricter. Because South Carolina requires due process for
experienced teachers who are proposed for termination, any evaluation policy should be written to
minimize the risk of a challenge stemming from the use of test results in the process. For instance, a
teacher who is terminated or placed on probation because of test scores would have cause to
challenge the action if it could be shown that the procedures for assigning students to classes
introduced the kinds of biases cited in the previous point.


4. Measurement error should be taken into account whenever test data are used for performance
evaluation. Studies of the Educational Value-Added Assessment System (EVAAS), in use in Tennessee
since 1993, found that only one-third of teachers could be identified as clearly different from
“average” once measurement error was considered (a toy illustration of why follows this list).


5. The statistical errors associated with value-added measures decrease dramatically as the number of
students being averaged grows. For example, the statistical error associated with a classroom of 30
students in Charleston would be about 3.4 times greater than the error associated with the average
Charleston school. If the class size is reduced to 20 students, the statistical error would be 4.2 times
greater (the square-root arithmetic behind these ratios is sketched after this list). This means that
classroom results are likely to be far more volatile than school results over time.


6. School districts implementing value‐added systems typically rely on student achievement data in
reading and mathematics. The proposed board policy requires that assessments to measure
academic growth be implemented in other core subjects. Measuring growth in some of the core
subjects, particularly history and social science, has so far not been attempted by test publishers and
the validity of value-added methodologies applied to these subjects is unproven. Implementing this
methodology with the level of stakes proposed may invite legal challenges to personnel actions
taken on the basis of poor results. The problem becomes profound at the high school level because
of the sheer number of subjects taught. In addition, high schools generally cannot identify any
single teacher who would be responsible for a reading or mathematics score. This is why value-added
methodologies are rarely applied in high school settings.
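
To make the assignment-bias concern in point 2 concrete, here is a minimal simulation sketch in Python. It is ours, not NWEA's model, and every number in it is invented: two teachers add identical growth, but because one is assigned students with higher prior achievement, a naive classroom-average "value-added" estimate makes that teacher look stronger.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                    # hypothetical class size
teacher_effect = 5.0      # both teachers add exactly the same growth

# Non-random assignment: teacher A receives higher-prior students.
prior_a = rng.normal(215, 10, n)   # illustrative RIT-like prior scores
prior_b = rng.normal(195, 10, n)

def observed_growth(prior):
    # Growth depends partly on factors that travel with prior achievement
    # (out-of-school supports, course placement), not just on the teacher.
    return teacher_effect + 0.2 * (prior - 205) + rng.normal(0, 4, prior.size)

# Naive value-added estimate: mean observed growth per classroom.
print(f"teacher A apparent effect: {observed_growth(prior_a).mean():.1f}")
print(f"teacher B apparent effect: {observed_growth(prior_b).mean():.1f}")
# The teachers are identical, yet A scores higher: the bias in point 2.
```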
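
The one-third figure in point 4 can be illustrated with a toy calculation. The spread of true teacher effects and the measurement error below are assumptions chosen for illustration, not EVAAS's actual parameters; the point is only that once each estimate carries error, most teachers cannot be distinguished from "average."

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 1000
sigma_true = 1.75   # assumed spread of true teacher effects (illustrative)
se = 1.0            # assumed measurement error of each estimate (illustrative)

true_effects = rng.normal(0.0, sigma_true, n_teachers)
estimates = true_effects + rng.normal(0.0, se, n_teachers)

# A teacher is "clearly different from average" only when the estimate's
# 95% interval (estimate +/- 1.96 * SE) excludes the average of zero.
clearly_different = np.abs(estimates) > 1.96 * se
print(f"clearly different from average: {clearly_different.mean():.0%}")
# With these illustrative values, roughly one-third are flagged; the
# other two-thirds are statistically indistinguishable from average.
```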
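
The ratios in point 5 follow the usual square-root law: the standard error of a group average shrinks with the square root of the group's size. If we assume (our assumption; the letter does not state it) that the average Charleston school aggregates roughly 350 tested students, the 3.4x and 4.2x figures fall out directly:

```python
import math

school_n = 350    # assumed average number of tested students per school
for class_n in (30, 20):
    ratio = math.sqrt(school_n / class_n)
    print(f"class of {class_n}: error about {ratio:.1f}x the school-level error")
# -> class of 30: about 3.4x; class of 20: about 4.2x, matching the letter.
```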


Recommendations
We would recommend that the board refrain from adopting a policy in this area until the district is able
to study and propose concrete strategies for addressing the following issues, which, left unaddressed,
may expose the district to legal liability:


a. To address the question of exactly what data would be used for evaluation of teachers who
are not teaching in subjects in which value‐added measurements are currently used,
particularly social studies, history, science, art, vocational education, and music.


b. To establish explicit criteria for performance on the tested measures that are validated as
reasonable by using data from test publisher norms, the district or state’s past performance,
or other legally defensible standards.


c. The policy should also require that statistical error be considered when applying these
criteria.


d. We would recommend that student test data not receive more weight than the principal’s
evaluation.


e. The impact of measurement error, while relatively large when measuring individual student
growth, decreases dramatically when that growth is aggregated to large groups. When the
groups under consideration number several hundred students, as in school-level aggregations,
measurement error has a far smaller impact. Consequently, we can support
using value‐added metrics as one factor among others in identifying under‐performing
schools.