Is There Light At the End Of the Standardized Testing Tunnel?

Forbes is not my go-to source for educational news and insight, but you take news and insight where you find it. In this case, it's from a 39-year veteran high school English teacher, a fellow English teacher who outranks me by five years. I have to pay attention to what he says, right?

The headline asks, "Is The Big Standardized Test A Big Standardized Flop?" The answer, according to the writer, is yes, and teachers knew it when the testing craze began ramping up 20 years ago. The people who didn't catch on were the leaders of the education reform/privatization movement. Now a few of them are beginning to.

The author cites the work of two conservative educational scholars, Jay Greene and Frederick Hess. Greene is head of the Department of Education Reform at the University of Arkansas. (If the "education reform" in the department title and Arkansas as the location aren't clues enough to Greene's conservative educational leanings, let me add this for longtime followers of the Goldwater Institute and my posts: Matthew Ladner, ex-education guy at the Goldwater Institute and current senior research fellow at the Charles Koch Institute, has been a frequent contributor to Greene's blog.) Greene says, rightly, that test scores aren't valuable in and of themselves. They are supposed to be predictors of success in students' future lives. The problem is, they're not very good at it.
If increasing test scores is a good indicator of improving later life outcomes, we should see roughly the same direction and magnitude in changes of scores and later outcomes in most rigorously identified studies. We do not.
And he goes further, saying test scores and VAM (Value-Added Measurement) don't tell us much about the quality of the schools or the programs the students are enrolled in.

Read this next statement by Greene all the way through. It's a biggie.
I think almost every credible researcher would agree that the vast majority of ways in which test scores are used by policymakers, regulators, portfolio managers, foundation officials, and other policy elites cannot be reliable indicators of the ability of schools or programs to improve later life outcomes.
Shorter version: test scores aren't worth a hell of a lot. Worse, they're being misused by almost everyone who uses them. To which I can only add: Wow.

Hess, director of education policy studies at the conservative American Enterprise Institute, comes to a similar conclusion. He writes,
So, how much do test scores really tell us, anyway? It turns out: A lot less than we’d like.
Hess says a rise in test scores may have less to do with students' actual understanding of subject matter than with the amount of time spent learning to take the test (often at the expense of other subjects), the test-taking skills students pick up, and various forms of gaming the system, including cheating.

I've read work by Greene and Hess in the past. Though I tend to disagree with them, they are generally honest brokers who want to make a point by proving it, not through deception. So I'm not surprised to see them look at the evidence on standardized testing and come to a conclusion that may not sit well with their "education reform" colleagues, or even with what they would like to believe.

Which brings us back to teachers. They have been attacked for being against standardized testing because it makes them look bad. Turns out, the classroom practitioners knew what they were talking about all along. It's taken some academics, especially those who have used testing as a blunt instrument to bludgeon public schools and push their charter/voucher agenda, 20 years to catch up. (Many other academics have seen through the testing fiasco from the start.) It's going to take even longer for the less honest members of their community to come around, and longer still for states to move these tests to the back burner, then off the stove and onto the ash heap of educational history.

As Hess says near the end of his column (and as I've said many times), if we get rid of the yearly ritual of standardized testing, we'll still have the NAEP (National Assessment of Educational Progress), which many people on different sides of the education aisle agree is the best, most reliable test we have. It doesn't take up class time. It's administered to a sampling of students every few years, and you couldn't teach to it even if you tried. The results of the test, which began in the 1970s, give a more valuable snapshot of student achievement than all the high stakes tests combined.