Thursday, June 11, 2015

Who didn't see this coming?

Majority of schools inflated program review scores

KDE not adjusting inflated scores

This from the Courier-Journal:

The Kentucky Department of Education says schools are overscoring themselves on their program reviews, which account for more than a fifth of a school's overall accountability score.
High-stakes assessment + Human Nature = Home cookin'

"It's human nature," Associate Commissioner Amanda Ellis told the Kentucky Board of Education during its meeting this week. "If there's something you can control ... you want to give yourself the benefit of the doubt."

The department audited programs at eight schools across the state, including Jeffersontown High School, to evaluate the consistency of their self-ratings on their K-3, arts and humanities, practical living and writing programs.

It found that, 63 percent of the time, schools overscored their programs. For instance, a school might have rated its program proficient on an indicator that the state audit team judged as needing improvement.

Ellis said the audits showed that schools did not understand the process for evaluating their programs. She also noted the audited schools were happy to get feedback on how to improve their scoring processes.

Some board members decried the scoring disparities between the schools and the auditors, with board member Sam Hinkle calling the results "awful" and asking whether there are "any consequences for consistently telling yourself that you're better than you are."

Ellis said the state will work with schools to improve their programs and their reviews. It is planning to pick 24 schools this fall for program review audits.

Jeffersontown Principal Marty Pollio commended the audit team's efforts but said he felt that, overall, his school had scored its programs accurately, and he disputed any innuendo that his school was trying to inflate scores.

Of 13 indicators that the audit team examined, it agreed with Jeffersontown on nine of them, Pollio said. He said that the team told him it agreed with the school's rating on three of the remaining four, but said it needed more evidence.

"We were pretty close to accurate," Pollio said. "It's our first time through this. People on the committee really tried to honestly assess where we are. We're still proud of our results."

Pollio added that the audit helped his school recognize areas where it could improve its review process. "I think everyone wants to give an honest assessment of programs," he said.

The legislature-mandated program reviews are designed to help schools improve programs, like the arts, that don't lend themselves well to paper-and-pencil tests. Teams within each school are supposed to conduct the reviews.

Last year, for the first time, some of the program reviews factored into schools' 2013-2014 state accountability scores, counting for more than a fifth of a school's overall score. This year, as last year, the program reviews will account for 23 percent of the overall score.

As some had predicted, the results of those 2013-2014 program reviews were generally high. Indeed, the majority of schools in Jefferson County rated their programs as proficient or better.
Some in Kentucky said they were not surprised by the high scores or the auditors' findings that schools had overscored themselves.

"What else do you think would happen when those scores are used to hold schools and staff accountable?" Richard Innes, an education analyst for the Bluegrass Institute, a free-market think tank, said in a blog post about the board meeting.

Department staff said no scores were changed as the result of this audit.
However, Ellis said that if schools' scores are "extremely inflated again after they already received an audit, then to me that's an automatic red flag to send them on for a possible testing violation."

1 comment:

Bringyoursaddlehome said...

Not a surprise here; self-evaluation should never be part of the accountability scoring equation. They knew they were in over their heads after the first year, when they fell back from annual reviews to rotating reviews every four years. Also, let's not put all the fault on the schools. The state gave almost no training on how to apply or use the rubrics, and the rubrics themselves are often vague in their descriptors/demonstrators. If this was such an important component, weighted so heavily in a school's composite score, then why wasn't more time invested in supporting training and use? The deal is that KDE created another big accountability component without the resources or manpower to ensure its accuracy or implementation. Like I said, no surprise here.