This morning KSN&C posted an item about the Dallas Independent School District's use of an index to describe - not the children who attend a particular school - but whether the performance of those students meets, exceeds, or falls below what could reasonably be expected based on the demographics of the student body.
Now some folks won't like this idea. Those who feel it's wrong to suggest out loud that some students may never achieve proficiency (due to poverty, neglect, abuse...) may cringe at the notion. But a school's "success" under the present system is shaped by the students who attend it. The trick is to adjust fairly for the differences in student population and measure what the school brings to its students: the value the school adds.
Is it OK for a "good school" to cruise along while gains are made by talented students in spite of their teachers?
Is it OK to sanction a low-performing school that is outperforming expectations?
With those ideas in mind, the folks at the Center for Educational Research in Appalachia recently conducted a regression analysis to determine the strength of the relationship between poverty and the 2007 district-level CATS index. Here's what they came up with:
Using free lunch participation as a proxy for poverty, the regression output was used to calculate a "predicted score" for each district. Then, the difference between each district's predicted score and its actual score was calculated.
The difference allows a peek at those districts that are performing as expected - as well as those districts that are "over-performing" or "under-performing" relative to the socio-economic status of the student population.
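For readers who want to see the mechanics, here's a minimal sketch of that predicted-score-and-difference step in Python. The district figures below are made up for illustration; the real analysis used 2007 district-level CATS scores and free-lunch rates for all of Kentucky's districts.

```python
import numpy as np

# Hypothetical (free-lunch %, actual CATS index) pairs for a few districts.
# These numbers are invented for illustration only.
free_lunch = np.array([25.0, 40.0, 55.0, 70.0, 85.0])
actual     = np.array([92.0, 84.0, 78.0, 70.0, 66.0])

# Fit the regression line: score = slope * free_lunch + intercept
slope, intercept = np.polyfit(free_lunch, actual, 1)

# Each district's predicted score, and its difference (actual - predicted).
# A positive difference means the district beat its prediction.
predicted  = slope * free_lunch + intercept
difference = actual - predicted

for fl, d in zip(free_lunch, difference):
    print(f"free lunch {fl:4.1f}%  difference {d:+.2f}")
```

With a fit like this, the differences (residuals) necessarily average out to zero across all districts, which is why roughly as many districts land above the line as below it.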
Finally, the districts were ranked by their differences and divided into quintiles of 35 school districts each - then placed on a color-coded map of Kentucky.
Districts with differences of -1.7 to +.5 were considered to be performing about as predicted and are colored yellow. Fully acceptable; not great.
Differences from +.5 to +5.3 are light green. Very good.
Districts exceeding their predicted score by +5.4 to a whopping +16.3 are dark green. Terrific.
On the other hand, districts that failed to meet their predicted scores are orange (-1.7 to -5.0) and red (-5.0 to -14.0). These represent the coasters and the low performers.
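The color-coding step can be sketched as a simple lookup. One caveat: the post's bands don't quite tile at the boundaries, so this sketch assumes orange spans from -1.7 down to -5.0 (red takes over below that), and the boundary handling is a guess.

```python
def color(diff):
    """Map a district's difference (actual minus predicted score)
    to its map color, using the cutoffs quoted in the post."""
    if diff >= 5.4:
        return "dark green"   # terrific: +5.4 to +16.3
    if diff > 0.5:
        return "light green"  # very good: +.5 to +5.3
    if diff >= -1.7:
        return "yellow"       # about as predicted: -1.7 to +.5
    if diff > -5.0:
        return "orange"       # coasting (assumed band: -1.7 to -5.0)
    return "red"              # low performing: -5.0 to -14.0
```

A district that beat its predicted score by 10 points would come back `"dark green"`; one that fell 8 points short would come back `"red"`.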
The MacDaddys of the process are those high-performing districts that also exceed their predicted outcomes (the heavily supported Ft Thomas, Anchorage...).
The value-added idea would expand this assessment to include a number of other factors that would be tracked in a stable system over time. Dropouts? Graduation rate? Attendance? ...whatever Kentuckians will accept as reasonable measures of school performance.