An interesting post, and comments, over at This Week in Education on multiple measures.
It seems that US Rep and education committee Chairman George Miller's speech at the National Press Club last month struck a nerve with conservative Stanford researcher Erik Hanushek.
Miller had signalled his openness to looking at some additional measures of success, like high school graduation rates, in assessing the success of our schools. This makes perfectly good sense to me.
We ought to clearly define it so that "graduation" in Georgia means the same thing it does in Kentucky, then keep the data and track it.
I'm not a fan of Hanushek's. Some of his sworn testimony in school funding cases has appeared to me to be deliberately stupid. I say deliberately because clearly he is not, in fact, stupid. Sometimes, what he says is. To hear him tell it, money just doesn't matter much when it comes to quality schooling - it can't be shown scientifically to have an impact. But I'll bet he understands that it costs more if you want fries with that burger. If he wants the BMW with the navigational system, he knows it will cost him more. Just like hiring an extra high-quality teacher who will make a difference for a bunch of kids. But I digress...
This time, I think Hanushek is a little smarter.
Hanushek is worried by some of the discussion over multiple measures and argues for measures that are clear and, in fact, measurable. Me too. Some argue that school assessment ought to include such measures as portfolio assessment, writing assessments, and public speaking. Talk about undefined measures. These may well be the right idea - but as assessment goes, it will be difficult to agree that one teacher's assessment of student writing would match another's.
It is an inter-rater reliability nightmare.
Hanushek also correctly points out the powerful effect that disaggregating student achievement data has had on forcing schools to focus on all groups of students, rather than hiding behind mean scores while persistently allowing the same groups of children to fail year after year...without accountability.
Without arguing whether NCLB's federal imposition of an accountability system is actually constitutional - disaggregation would be the right way to go, even under a state accountability system.
And Hanushek concedes that adding a growth model improves NCLB.
But Hanushek only wants to look at data that focuses on "basic cognitive skills."
We collect a ton of data about the performance of our schools. School leaders across the country pore over that data (some of it demographic, like graduation rates), gauging the relative success of school programs and looking for trends. The data tell a story.
Part of that story is what it will take for all schools and all students to be successful; what it takes to close achievement gaps.
Perhaps what Hanushek really fears from an expanded data set is not so much that it might damage kids, but what it might reveal: that it's not enough for us to simply declare our best aspirations - that no child will be left behind. Reaching that goal is going to require a greater, more comprehensive effort than simply putting the thumbscrews to America's principals and teachers.
Of course, some of the broader discussion of multiple measures is at cross purposes.
Hanushek is thinking of an accountability system.
Many teachers look at assessment - quite properly - as a useful tool for guiding and adjusting instruction. The teachers are correct. But when policy makers wanted accountability, their heads were in a different place.
Of course, it would have been best if we had first built a strong instructional assessment system that started with the content and provided teachers timely feedback on their students' progress on specific objectives... and then built an accountability system around that.
But it didn't work that way for political reasons.
Policy makers wanted accountability more than improved instruction...so that's where they put their effort. The two ought to be the same, but they are not.