Friday, November 19, 2010

The Safe Handling of Value-Added

As with all firearms, safe handling is essential. Poorly designed and in the wrong hands, value-added assessment is a shotgun: it may hit what it's aiming at, but it's likely to hit some other things too. Used within strict limits, however, we may be reasonably assured of its usefulness.

Like writing portfolios, value-added assessment does a pretty good job at the extremes. Just as portfolios could separate the distinguished writer from the novice while poorly differentiating performance among the apprentices, value-added can identify the great teachers and the terrible teachers but cannot tell high average from low average. It should not be relied upon for important decisions among those scoring in the average range.
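To make the point about the extremes concrete, here is a tiny simulation. Every number in it is hypothetical and of my own choosing, not anything from the report: with realistic estimation noise, a confidence interval around a top-ranked estimate clears zero, while an interval around a near-median estimate straddles it.

```python
import random

random.seed(1)

# Hypothetical: 100 teachers with normally distributed true effects,
# each measured with estimation noise of similar size (illustrative
# values, in units of student test-score standard deviations).
TRUE_SD, NOISE_SD = 0.15, 0.15

true_effects = [random.gauss(0, TRUE_SD) for _ in range(100)]
estimates = sorted(t + random.gauss(0, NOISE_SD) for t in true_effects)

# Rough 95% confidence interval half-width for a single estimate.
hw = 1.96 * NOISE_SD

top = estimates[-1]                   # a teacher at the extreme
mid = estimates[len(estimates) // 2]  # a teacher near the median

# The extreme estimate is typically distinguishable from zero ("average")...
print(f"top:    {top:+.2f}  CI ({top - hw:+.2f}, {top + hw:+.2f})")
# ...while the near-median estimate's interval typically straddles zero.
print(f"median: {mid:+.2f}  CI ({mid - hw:+.2f}, {mid + hw:+.2f})")
```

The ordering in the middle of the pack is mostly noise; only the tails carry signal.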

A balanced look at evaluating teachers through value-added assessment was offered by the Brookings Institution in a new report this week.

As a principal, I always wanted as much data about the performance of the school as I could get. Reliable data improves the quality of the decisions that must be made. I wanted accurate information about everything from monthly budget reports to the times individual buses arrived at the school. But I did not want to beat anyone over the head with the information; I wanted to solve problems and make the school better.

Similarly, I wanted as much information as I could get about the performance of our students and teachers. I wanted to make sure we were giving our students everything they needed to be successful. Thus, I am naturally attracted to interim testing data and the concept of value-added assessment. But to be of use, any data relied upon for high-stakes decisions must be reliable; otherwise, it stands a good chance of becoming counterproductive.

How value-added systems are constructed and how they are used matters.
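To illustrate what "constructed" means, here is a minimal sketch of one common construction. The two-variable model and the made-up scores below are my assumptions, not the report's (real systems add student and classroom covariates and apply shrinkage): regress each student's current score on their prior score, then average each teacher's students' residuals.

```python
import numpy as np

# Hypothetical records: (teacher_id, prior_score, current_score).
records = [
    ("t1", 48.0, 55.0), ("t1", 60.0, 66.0), ("t1", 52.0, 58.0),
    ("t2", 50.0, 51.0), ("t2", 62.0, 60.0), ("t2", 45.0, 47.0),
]

prior = np.array([r[1] for r in records])
current = np.array([r[2] for r in records])

# Fit current = a + b * prior by least squares.
b, a = np.polyfit(prior, current, 1)
residuals = current - (a + b * prior)  # how far each student beat the prediction

# A teacher's value-added estimate is the mean residual of their students.
value_added = {}
for (tid, _, _), resid in zip(records, residuals):
    value_added.setdefault(tid, []).append(float(resid))

print({tid: round(sum(r) / len(r), 2) for tid, r in value_added.items()})
```

Every modeling choice here (which covariates, how much shrinkage, how many years of data) changes who lands in the tails, which is exactly why construction matters.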

The Brookings folks argue that using value-added data to predict future teacher performance is consistent with how predictive data are used in other fields.

Over at Education Week's Teacher Beat blog, Stephen Sawchuk sees the findings as adding a contrasting view to the debate over value-added assessments, which have been criticized as an incomplete way to evaluate teachers.

While value-added is an imperfect measure of teacher effectiveness, Brookings says, the year-to-year correlation of value-added estimates is similar to that of predictive measures used to inform high-stakes decisions in other fields, such as the SAT.
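That claim reduces to an ordinary correlation: pair each teacher's estimate from one year with their estimate from the next and measure how strongly the pairs move together. A self-contained sketch with made-up numbers (not figures from the report):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical value-added estimates for the same five teachers in two years.
year1 = [0.12, -0.05, 0.30, -0.22, 0.04]
year2 = [0.08, 0.01, 0.25, -0.15, -0.02]

print(f"year-to-year correlation: {pearson(year1, year2):.2f}")
```

Brookings' point is not that such correlations are high in absolute terms, only that they are comparable to those of predictors other fields already accept for high-stakes decisions.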

The evaluation of teachers based on the contribution they make to the learning of their students, value-added, is an increasingly popular but controversial education reform policy. The report attempts to clarify four areas of confusion about value-added.

The first is between value-added information and the uses to which it can be put. One can, for example, be in favor of an evaluation system that includes value-added information without endorsing the release to the public of value-added data on individual teachers.

The second is between the consequences for teachers vs. those for students of classifying and misclassifying teachers as effective or ineffective — the interests of students are not always perfectly congruent with those of teachers.

The third is between the reliability of value-added measures of teacher performance and the standards for evaluations in other fields — value-added scores for individual teachers turn out to be about as reliable as performance assessments used elsewhere for high stakes decisions.

The fourth is between the reliability of teacher evaluation systems that include value-added vs. those that do not — ignoring value-added typically lowers the reliability of personnel decisions about teachers.

We conclude that value-added data has an important role to play in teacher evaluation systems, but that there is much to be learned about how best to use value-added information in human resource decisions...

Critics of value-added methods have raised concerns about the statistical validity, reliability, and corruptibility of value-added measures. We believe the correct response to these concerns is to improve value-added measures continually and to use them wisely, not to discard or ignore the data...
