
Monday, August 10, 2009

Most Ky. juniors aren't ready for college, ACT results show

Are our 4-year-olds ready for 1st grade? College readiness is an important issue for Kentucky, but when I was a high school junior, I wasn't ready for college either. I'm just sayin'.

Here's some background on the ACT from KSN&C:

Skip Kifer's pieces - To Predict or Not to Predict and A Narrow View of Student Potential

Ben Oldham's piece - Adding Understanding to ACT Scores

And my little rant - What the Bluegrass Institute Doesn't Seem to Know About the ACT

Susan Weston spanks the ACT's PLAN test here, and CPE President Robert King here.

This from the Herald-Leader:

Kentucky's public high school juniors improved their math scores slightly on the ACT this year, but scores on the test in other subjects remained flat or fell slightly.

The 2009 ACT scores also show that less than half of public high school juniors in Kentucky are ready to do college-level work in English, algebra and other subjects...

Overall, state ACT scores changed little from 2008. The report suggests no dramatic overall trends, because this is only the second year that all juniors in Kentucky public schools have been required to take the ACT. The test assesses students' ability to complete college work and is considered the most widely accepted college entrance exam.

Kentucky is one of only a handful of states requiring the test, which assesses English, reading, mathematics and science skills. Each subject is scored on a scale of 1 to 36. The test is administered statewide on the same day.

Monday's ACT report drew a variety of responses from educational experts around the state.

Richard Day...said that with only two years of scores, it's too soon to declare a trend. Even if a trend were spotted, Day said, it would be tough to assign a meaning.

"What would you attribute that to? The ACT curriculum is not what Kentucky teaches," Day said.

Rather, Day said, the ACT is one of several measures for evaluating student performance, designed to distinguish "between good students and very good students." ...

...Lisa Gross, spokeswoman for the state education department, said the relatively low level of college preparation might reflect that many juniors hadn't taken upper-level courses yet.

Robert King, president of the Kentucky Council on Postsecondary Education, said that the state's educational bureaucracy is trying to figure out how to align high school curricula so that students will arrive at college better prepared. The step is one of several educational changes required under legislation passed by the General Assembly this year.

As things stand now, King said, Kentucky students can do everything that's asked of them in K-12 and not be competitive in college. The goal, he said, is for colleges to communicate what they expect incoming students to know, and for Kentucky's elementary and secondary education system to teach it...

...Yvonne Baldwin, an administrator at Morehead State University, cautioned against putting too much stock in ACT results.

"Relying on the ACT as the sole measure of college readiness is a trap, because it gives too much power to the test," Baldwin said. "I think we need to go back to teaching high school in high school, and college in college. ... The national trend data says that a rigorous high school experience is the best indicator of college success. And I don't think that's happening for a lot of students."

ACT RESULTS FOR PUBLIC SCHOOL JUNIORS RELEASED

Overall results from the 2009 administration of the ACT to Kentucky’s public school juniors show a small improvement in mathematics, but minor drops or flat scores in other subjects.

As mandated by KRS 158.6453, all of Kentucky’s public school juniors participate in the ACT, which assesses English, reading, mathematics and science and is scored on a scale of 1 to 36. The cost of the exam is paid for by state funds. In spring 2009, 43,511 public school juniors took the ACT.

KRS 158.6453 mandates that Kentucky’s public school students participate in the Educational Planning and Assessment System (EPAS) from ACT. The state assesses public school 8th graders using the EXPLORE test, public school 10th graders with the PLAN test and public school 11th graders through the ACT.

Senate Bill 1, passed in the 2009 session of the Kentucky General Assembly, mandates that ACT results be included in school and district accountability results in the 2011-12 school year.

ACT provides information about high school students who take recommended core courses, which include four years of English, three mathematics courses, three social studies courses and three science courses over the four years of high school. Students who indicate that they have taken at least the core courses generally score at higher levels on the ACT.

2009 KENTUCKY JUNIORS' COURSE-TAKING PATTERNS/ACT COMPOSITE SCORES [table not reproduced]

ACT developed College Readiness Benchmarks in English, mathematics, science and reading, with research indicating that students who reach those benchmarks have a 50 percent chance of obtaining a B or higher, or about a 75 percent chance of obtaining a C or higher, in the corresponding credit-bearing college course. The benchmark scores are:

• 18 or higher on the ACT English Test
• 22 or higher on the ACT Mathematics Test
• 21 or higher on the ACT Reading Test
• 24 or higher on the ACT Science Test
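
To make the cut scores concrete, here is a minimal sketch (mine, not ACT's or KDE's) showing how the published benchmarks translate a student's four subject scores into readiness flags; the example scores are hypothetical:

```python
# A minimal sketch of applying the published ACT benchmark cut scores.
# The function and example scores are illustrative, not ACT's or KDE's code.
BENCHMARKS = {"English": 18, "Mathematics": 22, "Reading": 21, "Science": 24}

def readiness_flags(scores):
    """Map each subject to True/False: does the score meet the benchmark?

    `scores` maps subject name -> ACT scale score (1 to 36).
    """
    return {subject: scores.get(subject, 0) >= cut
            for subject, cut in BENCHMARKS.items()}

# A hypothetical student's subject scores
print(readiness_flags({"English": 19, "Mathematics": 18,
                       "Reading": 20, "Science": 19}))
# {'English': True, 'Mathematics': False, 'Reading': False, 'Science': False}
```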

For the Kentucky public school juniors who took the ACT assessment in 2008 and 2009: [table not reproduced]

Score information for schools and districts may be found here.

SOURCE: KDE press release

Friday, April 10, 2009

Frankfort to Aim at Illusion

A couple of weeks ago, Frankfort Superintendent Rich Crowe said of the lack of accountability for writing under Senate Bill 1, "If they're not going to score it, and if it's not going to count for anything, I don't know why we should waste our time."

Now, liberated from the state-imposed accountability system, he is building his own, and he's pointing his teachers toward a target that doesn't quite exist. Crowe told the Frankfort Independent Board of Education in March that he's inclined to focus on curriculum areas that actually count toward the ratings. Then he chose the ACT.

KSN&C contributor Skip Kifer recently noted,

ACT produces results presumed to help students learn and school personnel make better decisions. Does it? What is the evidence that the information is used? What is the evidence that using the evidence makes a difference? On what?

The problem is that there is no fixed curriculum underlying the ACT that teachers can be assured will be on the test. If students are taught a skill, and perform well on items that test that skill, the ACT throws the test items out in favor of something the teachers haven't taught so well.
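
To see why, consider a toy sketch of norm-referenced item screening. The thresholds and response data below are invented, not ACT's actual procedure, but they illustrate the mechanism: an item that nearly everyone answers correctly stops separating students, so it gets dropped.

```python
# Toy illustration of norm-referenced item screening (invented thresholds,
# not ACT's actual procedure): items of middling difficulty spread students
# apart best, so well-mastered items tend to be rotated out.
def item_p_value(responses):
    """Proportion of students answering the item correctly (0.0 to 1.0)."""
    return sum(responses) / len(responses)

def keep_item(responses, low=0.30, high=0.70):
    """Keep only items whose difficulty falls in the discriminating range."""
    return low <= item_p_value(responses) <= high

well_taught = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]  # 90% correct once taught well
middling = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]     # 50% correct

print(keep_item(well_taught), keep_item(middling))  # False True
```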

This from the State Journal:

New curriculum will center on ACT success

Frankfort Independent teachers will stay in the classroom a week longer than their students this summer, building a new curriculum aimed at success on the ACT college entrance exam.

Teachers will spend June 15 to 19 in subject-related “professional learning communities,” Superintendent Rich Crowe said Thursday at a meeting of the Board of Education.

The goal is to base a kindergarten through 12th grade curriculum on the concepts students must know to reach national benchmark ACT scores – the minimum score required to avoid remediation courses in college...

So despite broad objections from teachers and parents about schools wasting time teaching to the test, Frankfort Independent is getting ready to teach to a different test - one with an elusive target.

One wonders what curriculum the teachers will focus on.

Evidence suggests that the best preparation for doing well on the ACT includes having one's students come from a family that has resources and values education. I'm not sure how the faculty is going to work that out.

Will Frankfort teachers be held accountable if students don't "measure up?" What does it mean to measure up on a benchmark test like the ACT? ...score at the 50th percentile? ...60th? ...40th?

As Kifer notes,

Benchmark tests may or may not be related to what students have been taught. Alignment studies have to be conducted to determine if they are and to what extent.

Since the questions are not released, a teacher does not know what has been done well or poorly. Students have no idea how well they are doing because the questions and the right answers are never discussed. There is no way to tie the results to what a teacher has been teaching because the test, at that time, does not necessarily reflect what has been covered in the curriculum.

Ben Oldham recently wrote that the promoters of the ACT have been over-interpreting its results - making claims that just haven't been shown to be true. But when pressed to prove their points, the promoters go away. Such misuse has even led a commission of the National Association for College Admission Counseling to question the test's use by colleges. To the extent that colleges quit using the ACT, its only utility goes away.

Kifer warned folks not to get snookered by ACT officials' new claims that, without having changed the nature of the test, their scores now tell whether a student "meets expectations" or is "ready" to attend college.

Crowe doesn't seem to be listening.

Monday, September 29, 2008

What the Bluegrass Institute Doesn't Seem to Know about the ACT

There is no way to sugar coat this.
Somebody doesn't know what he's talking about
- and it's not Oldham.

Well, Ben Oldham and I got called out by Richard Innes of the Bluegrass Institute the other day for "ignoring" the ACT benchmark scores - apparently the holy grail of assessment in his mind.

Of course, this is all part of a larger conversation about Senate Bill 1 and the on-going Task Force on Assessment at KDE.

I wasn't planning on getting into all of this this fall. It's tedious, inside-baseball kind of stuff. But the fundamentals are still the same. First, it's just a test. Second, every test has been designed to do a specific job. If test designers wanted a test to do something else, they would begin with that fact in sight. Third, have I mentioned it's just a test?

OK, let's talk about the ACT.

Here's the problem with the American College Test: Nothing, really.

It is a well-designed test intended to help admissions officers at competitive colleges determine which students are most likely to be successful at the university level. Ben Oldham recently went further saying, the "American College Test (ACT) is a highly regarded test developed by a cadre of some of the best measurement professionals in the world and is used by a number of colleges..."

But the ACT is only ONE factor that colleges use to make such determinations.

Why?

I mean, if the ACT can predict success in life, as Innes un-credibly argues (below), why don't colleges simply rely on it and quit wasting time compiling grade point averages and other data they say they need to make the best choices for their school?

The answer lies in the fact that test data are only reliable up to a point. It's just a test score and it shouldn't be turned into anything more.

As Richard C. Atkinson and Saul Geiser recently pointed out,

the problem with general-reasoning tests like the SAT [and ACT] is their premise: that something as complex as intellectual promise can be captured in a single test and reflected in a single score. It is tempting for admissions officers--and parents, legislators, policymakers and the media--to read more into SAT [and ACT] scores than the numbers can bear. Although measurement experts know that tests are only intended as approximations, the fact that scores frequently come with fancy charts and tables can create an exaggerated sense of precision.

And such exaggerations persist.

Newspapers and bloggers rank scores that ought not be ranked - because people like rankings. Some "think tanks" act as though test scores equal "truth" and look for any opportunity to twist data into a pre-existing narrative that Kentucky schools are going to hell in a handcart - this, despite trend data to the contrary, about which they are in full denial.

Georgetown College Distinguished Service Professor Ben Oldham correctly warned that, "since the ACT is administered to all Kentucky juniors, there is a tendency to over-interpret the results as a measure of the success of Kentucky schools." His excellent article clarifies what the test is, and what it isn't.

The problem of over-interpretation has been somewhat exacerbated by the inclusion of benchmark scores in the ACT. But benchmarking does not change the construction of the test nor the norming procedures. It does not turn the ACT into a criterion-referenced exam, as Innes tries to suggest - unless all one means by "criterion" is that the ACT derived a cut score. Under that definition, a People Magazine Celebrity Quiz could be considered criterion-referenced. Score 18 and you're a Hollywood Insider!

The ACT's "criteria" simply do not measure how well Kentucky students are accessing the curriculum. The test is much more sensitive to socio-economic factors attributable to most of the college-going population.

Using a "convenience sample" of schools (those willing to participate), the ACT looks at student success in particular college courses, and then looks at the ACT scores obtained by "successful" students. But regardless of what data such a design produces, "there is no guarantee that it is representative of all colleges in the U.S." Further, the ACT "weighted the sample so that it would be representative of a wider variety of schools in terms of their selectivity."

That is to say, they tried to statistically adjust the data produced by the sample to account for the more highly selective schools as well as the less selective. This process of weighting data to produce a score that the original sample did not produce should be viewed with suspicion. It would be like...oh, let's say like...using a concordance table to give students a score on a test they didn't take.
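
A toy example with made-up numbers (nothing here comes from ACT's technical report) shows how much the weighting step can move an estimate, and why the assumed target mix matters:

```python
# Toy post-stratification sketch with invented numbers; not ACT's procedure.
# The volunteer (convenience) sample skews toward less selective colleges,
# so each selectivity stratum is re-scaled to an assumed national mix.
sample_share = {"highly_selective": 0.10, "selective": 0.30, "open": 0.60}
target_share = {"highly_selective": 0.25, "selective": 0.40, "open": 0.35}

# Invented success rates (share of students earning a B or higher) by stratum
success_rate = {"highly_selective": 0.70, "selective": 0.55, "open": 0.40}

# Unweighted estimate: whatever the volunteer mix happens to yield
unweighted = sum(sample_share[s] * success_rate[s] for s in sample_share)

# Weighted estimate: re-scale each stratum to the assumed target mix
weights = {s: target_share[s] / sample_share[s] for s in sample_share}
weighted = sum(sample_share[s] * weights[s] * success_rate[s]
               for s in sample_share)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# unweighted: 0.475, weighted: 0.535; the answer moves with the assumed mix,
# which is exactly why the weighting deserves scrutiny.
```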

If KDE had done anything like this, Innes' buddies at BGI would be crying "fraud."

If we are going to test, and if our tests are going to be used to determine placement in programs within schools, and eventually in college, then we need to understand what the ACT means when it says "college-ready." And we don't. The most important flaw of the ACT benchmarks is conceptual: What is "readiness" for higher education?

As one delves deeper into the statistics, other problems arise. Skip Kifer, who serves on the Design and Analysis Committee for NAEP, told KSN&C,

The benchmark stuff is statistically indefensible. Hierarchical Linear Modeling (HLM) was invented because people kept confusing at what level to model things and how questions were different at different levels. The fundamental statistical flaw in the benchmark ... is that it ignores institutions. Students are part of institutions and should be modeled that way.

But the ACT models at the "student" level when it should be modeling at the "students nested within institutions" level.

It is possible that the ACT took a kind of average of those "correct" models, but that cannot be determined from their Technical Report.
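
For readers who want to see the distinction Kifer is drawing, here is a sketch on synthetic data; all of the numbers are invented, and statsmodels' MixedLM (random intercepts by college) stands in for an HLM-style, students-nested-within-institutions model:

```python
# Sketch of student-level vs. nested modeling on synthetic data.
# All numbers are invented; MixedLM stands in for an HLM-style model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_colleges, n_per = 20, 50
college = np.repeat(np.arange(n_colleges), n_per)
college_effect = rng.normal(0, 0.4, n_colleges)[college]  # institution-level variation
act = rng.normal(21, 4, n_colleges * n_per)
gpa = 1.0 + 0.08 * act + college_effect + rng.normal(0, 0.5, act.size)
data = pd.DataFrame({"college": college, "act": act, "gpa": gpa})

# Student-level model: ignores which institution a student attends
pooled = smf.ols("gpa ~ act", data).fit()

# Nested model: students within colleges, via random intercepts per college
nested = smf.mixedlm("gpa ~ act", data, groups=data["college"]).fit()

print("pooled slope:", pooled.params["act"])
print("nested slope:", nested.params["act"])
# When institutions differ, the two models can tell different stories,
# which is Kifer's point about benchmarks derived at the student level.
```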

Perhaps Innes could help us understand: How is it that the ACT's benchmarks could have been empirically defined and yet managed to get the same relationship for the University of Kentucky and Lindsey Wilson College?

Unfortunately, the ACT folks did not respond to an inquiry from KSN&C.

But none of this will likely stop the exaggeration of the ACT's abilities.

In response to a KSN&C posting of Ben Oldham's article, Innes made the following claim:

Oldham pushes out-of-date thinking that the ACT is only a norm-referenced test. The ACT did start out more or less that way, years ago, but the addition of the benchmark scores, which are empirically developed from actual college student performance to indicate a good probability of college success, provides a criterion-referenced element today, as well.

"Criterion-referenced element?!" A cut score? The ACT is a timed test too - but that doesn't make it a stopwatch.

So, Oldham is old fashioned and out-of-date? Au contraire. It is Innes who is over-reaching.

Innes argues,

the ACT says that many employers for ... better paying jobs now want exactly the same skills that are needed to succeed in college. So, the Benchmark scores are more like measures of what is needed for a decent adult life. Thus, it isn’t out of line to say that the Benchmarks can fairly be considered a real measure of proficiency. And, that opens the door to compare the percentages of students reaching EXPLORE and PLAN benchmarks to the percentages that CATS says are Proficient or more.

Bull.

One could derive as much "proficiency" evaluating Daddy's IRS form 1040 and then comparing percentages of students reaching EXPLORE and PLAN benchmarks to the likelihood of owning a BMW or affording cosmetic surgery.

I'm afraid what we have here is something other than a nationally recognized assessment expert who is out-of-date.

We have a pundit who thinks the ACT benchmarks constitute a criterion-referenced assessment of the performance of Kentucky students and their prospects for a decent adult life!? This, absent any connection between the ACT and Kentucky's curriculum beyond pure happenstance. There is no relationship between a student's ACT score and any specified body of subject matter - and such a relationship is precisely what defines a criterion-referenced test.

There is no way to sugar coat this. Somebody doesn't know what he's talking about - and it's not Oldham.

The best spin I can put on this is that Innes got snookered by ACT's marketing department, which seems to do a fine job, but has been known to overstate the abilities of ACT's EPAS system.

But none of this makes the ACT a bad test. It just means that assessment experts have to take care to understand the nature of the exams and not to rely on them to do too much.

And it is commendable that Kentucky is working toward building an actual relationship between Kentucky's curriculum and that of the ACT through the development of content tests. That work will get Innes closer to where he wants to be. He should wait for the actual work to be done before making claims.

Just as Atkinson, Geiser, Oldham, Kifer, Sexton and virtually everybody else says, the results should not be over-interpreted to suggest relationships that just aren't there. And trying to argue causal chains that are completely unproven is certainly not best practice.

But more to the point, Kentucky recently committed to use the ACT's EPAS system, including EXPLORE and PLAN, as yet another measure - a norm-referenced measure - of student performance. As long as Kentucky is cognizant of the test's limitations, we ought to strengthen the connections between Kentucky high schools and universities and gauge student readiness for college. It was because of the large numbers of college freshmen in need of developmental courses that CPE pushed for the ACT/EPAS system to begin with.

Kifer wonders why Kentucky's Advanced Placement (AP) Tests receive so little attention. After all, unlike the ACT, the AP tests are a direct measure of a high school student's ability to do college work; AP courses are particularly well-defined; the tests exist across the curriculum; good AP teachers abound; course goals and exams are open to scrutiny.

When a high schooler passes an AP test he or she not only knows what it means, but the school of their choice gives them college credit for their effort.

Aware of CPE's commitment to the ACT as one measure of student readiness, KSN&C contacted newly named Senior Vice President of the Lumina Foundation Jim Applegate, who until recently served as CPE's VP for Academic Affairs.

Here's what Jim had to say:

Richard,

The article recently referenced in your publication from the admissions officer group addresses the use of the ACT for college admissions. The organizations sponsoring assessments such as the ACT, SAT, and others have made clear that no single standardized test should be used to make such decisions. Postsecondary institutions, to implement best practice, should use a multi-dimensional assessment to make admissions decisions. A test score may play a role in these decisions, but not the only role.

Kentucky uses the ACT/EPAS system (the Explore, Plan, and ACT tied to ACT's College Readiness Standards) to help determine college readiness, place students in the right high school courses to prepare them for college, and place them in the right courses once they go to college. Kentucky's revised college readiness standards are about placement, not admission. For the foreseeable future, the postsecondary system will, as it has always done, accept large numbers of students with ACT scores below readiness standards, but will provide developmental educational services to these students to get them ready for college-level work. The large number of underprepared students coming into Kentucky's postsecondary system led the Council a couple of years ago to initiate an effort to improve developmental education in order to make sure these students receive the help they need to succeed in college.

A growing number of states are adopting the ACT or the entire EPAS system to more effectively address the challenge of getting more high school graduates ready for college or the skilled workplace (e.g., Colorado, Illinois, and Michigan). These states also want to better understand the performance of their students in a national and international context. Globalization no longer allows any state's educational system to remain isolated from these contexts.

The use of ACT/EPAS is, of course, only one necessary strategy to improve the college/workplace readiness of Kentucky's traditional and adult learners. Kentucky is working to implement statewide placement tests in mathematics, reading, and English that will be administered to high school students who fall below statewide college readiness benchmarks tied to ACT scores (few states have gotten this far in clarifying standards to this level). These placement tests will provide more finely grained information about what students need to know to be ready for college-level work. We are also working to more strongly integrate college readiness goals into our teacher preparation and professional development programs to ensure teachers know how to use the assessments, beginning in middle school, to bring students to readiness standards.

The postsecondary system is hopeful the implementation of ACT/EPAS will promote partnerships between postsecondary and high/middle schools to improve student achievement. Some of that has already begun since the first administration of the EPAS college readiness system. For the first time in my time in Kentucky (I grew up here and returned to work here in 1977), we now know where every 8th grader is on the road to college readiness, thanks to the administration of the Explore. If in five years the number of students needing developmental education is not significantly less than it is today, then shame on all of us.

Jim Applegate

All of this reminds me of the old Crest Toothpaste disclaimer I read daily while brushing my teeth over the decades.

Crest has been shown to be an effective decay preventive dentifrice that can be of significant value when used as directed in a conscientiously applied program of oral hygiene and regular professional care.

Let's see if I can paraphrase:

The ACT/EPAS system has been shown to be an effective norm-referenced assessment that can be of significant value when used as directed in a conscientiously applied assessment program based on clear curriculum goals, direct assessments of specific curriculum attainment and effective instruction from a caring professional.