...Harvard University researcher Daniel M. Koretz has a new book, Measuring Up: What Educational Testing Really Tells Us. Koretz contends that NCLB has prompted widespread teaching to the test and gaming of the high-stakes testing system, producing scores on state standardized tests that are substantially better than students' actual mastery of the material.
I have little doubt that Koretz is correct that high-stakes testing prompted some folks to try to game the system. But to illustrate his point, he uses Kentucky's KIRIS assessment, which was discontinued by 1998, three years before NCLB. That is a curious choice if one hopes to illustrate a current problem with state systems and NCLB.
Mr. Koretz pointed to research in the 1990s on the state standardized test then used in Kentucky, which was designed to measure similar aspects of proficiency as the National Assessment of Educational Progress, the federally sponsored testing program often called “the nation’s report card.”
Scores on both tests should have moved more or less in lock step, he said. But instead, 4th grade reading scores rose sharply on the Kentucky Instructional Results Information System test, which resembled an NCLB-test prototype, from 1992 to 1994, while sliding slightly on NAEP over the same period.
I'm not sure how Koretz determined that KIRIS should have moved in lock step with NAEP. As I recall, the biggest problem with KIRIS was that the damn thing wouldn't hold still, that and the fact that Kentucky did not even have an underlying curriculum at the time. As Commissioner Thomas Boysen frequently said, we were building the airplane as we flew it. KIRIS was Exhibit A: it changed every year.
Was the NAEP similarly unstable? I don't recall that it was.