Tuesday, October 3, 2017

Why I Keep Writing About Those Damn AzMERIT Scores


Posted By on Tue, Oct 3, 2017 at 12:43 PM

It's something of an obsession with me, writing about AzMERIT scores. A new set of scores, a new use of the scores, a new news story about the scores, and there I am with another post or two or three. So here's yet another post, a rambling discussion of why the tests, the way they're reported on, and the way they're used drive me nuts.

Let me start by getting something out of the way. The tests in and of themselves aren't bad. They give a reasonably accurate reflection of students' abilities in reading, writing and math. During my last few years teaching in a district outside of Portland, Oregon, I had to give the Oregon version of the high-stakes standardized tests to my sophomore English classes. I did a pretty good job of predicting what my students' scores would be based on what I had learned about their reading and writing abilities during the eight months before the tests, which means the test scores generally reflected the students' skill levels. There were a significant number of exceptions, where students got higher or lower scores than I thought they would, which tells me the tests aren't always accurate on an individual level. But when you're looking at large numbers of students, and assuming everything is on the level — no "helpful encouragement" from teachers during the tests, no erase-and-replace of students' answers by staff after the students hand in their tests — their average scores tell you something about their skill levels relative to other groups of students.

Now, with that out of the way, the problems. The first is that the high-stakes nature of the tests distorts schools' curriculum and, sometimes, the test results. Since teachers, schools and school districts are judged by their students' scores, they're compelled to do everything they can to get the best results possible. That means teaching to the test, which means spending inordinate amounts of time and energy giving students the narrow skills needed to fill in the right bubbles. The give and take of loosely directed discussions is a luxury to be indulged in only when time allows. Creative pursuits, long-term projects, even time on the playground are secondary to the central focus of the classroom: preparing students for test day. Teachers become mechanical skill-and-drill sergeants, which is not what they signed up for when they joined the teaching profession. Students are encouraged to become robotic, learning how to be successful at performing variations of one repetitive task — answering short questions by picking the right answer from a short list of possibilities. The classroom is a different place — I would say a worse place — thanks to high-stakes tests. And, sad to say, all that sweat, toil and tedium generally adds only a few points to students' scores and even less to students' actual skill levels, and since pretty much everyone is doing it, it's a wash. Every class, school and district's ranking in the state stays pretty much the same as it would have been if no one paid any attention to the test until test day.

And sometimes, the pressure to raise test scores leads individual teachers, or whole schools and districts, to cheat. Some schools and districts have been caught at it. Teachers and administrators in Atlanta went to jail for changing answers on student tests year after year. Others do it but haven't been caught. A series of articles in USA Today a few years back described a nationwide analysis of erasures on student tests and found that in many schools, including some in Arizona, the odds that the pattern of wrong answers erased and replaced with right answers happened by chance were about the same as the odds of the school being struck by lightning on test day. Though state departments of education rarely look deeply into suspicious scores, Arizona's ADE found nine schools where the evidence is strong enough that it's highly probable students' test papers were altered. Most likely, those schools are the visible tip of a larger problem. And that's just the most easily detectable form of cheating. There are lots of undetectable ways to boost scores without increasing students' skill levels.
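To make that "struck by lightning" comparison concrete, here's a minimal sketch of the kind of arithmetic an erasure analysis rests on: if erasures were innocent, wrong-to-right changes should show up at roughly the chance rate, and you can compute how improbable a lopsided pattern is. Every number below is invented for illustration, including the 25 percent baseline; this is not the actual method USA Today or the ADE used.

```python
# Toy illustration only: all numbers are hypothetical, and this is not the
# actual methodology used by USA Today or the Arizona Department of Education.
from math import comb

def tail_probability(total_erasures: int, wrong_to_right: int, baseline: float) -> float:
    """Chance of seeing at least `wrong_to_right` wrong-to-right changes out of
    `total_erasures` erasures, if each erasure independently produces a
    wrong-to-right change with probability `baseline`."""
    return sum(
        comb(total_erasures, k) * baseline**k * (1 - baseline) ** (total_erasures - k)
        for k in range(wrong_to_right, total_erasures + 1)
    )

# Hypothetical classroom: 60 erasures, 48 of them wrong-to-right, against an
# assumed baseline where about a quarter of innocent erasures happen to land
# on the right answer.
p = tail_probability(total_erasures=60, wrong_to_right=48, baseline=0.25)
print(f"Probability of a pattern this lopsided arising by chance: {p:.1e}")
```

When that probability comes out vanishingly small, analysts flag the classroom or school for a closer look. A flag isn't proof of cheating, but it's the statistical smoke the articles were describing.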

Cheating can become addictive, and additive. If a teacher cheats one year, how does he/she go back to being honest the next year without having to explain the drop in scores? If third grade teachers cheat, fourth grade teachers look bad if their students score lower than they did back in the third grade—and so on, up the grades. Educators are basically an honest, moral, but not necessarily courageous lot. If you put their salaries and/or their jobs on the line, many of them are liable to do what it takes to push those scores up.

The other part of the problem happens outside the schools. It's the way the scores are interpreted and used. The general public sees high scores and thinks "good schools" and "good teachers." It sees low scores and thinks "bad schools," "failing schools" filled with "failing teachers." If the public doesn't come to that conclusion by itself, the privatization/"education reform" crowd is quick to assure them that's the way it is, because they want to disgrace and dismantle the public school system, and they want to cripple the unions that support teachers. Condemning "failing schools" is a two-fer for the anti-public education crowd. Actually, it's more like a three-fer, or maybe a four-fer if there's such a thing, because they can use the "failing school" meme to push charters and vouchers, the two main tools in their "dismantle public schools" tool kit.

Unfortunately, the media too often plays into the hands of the anti-public school crowd, praising districts with high scores and condemning districts with low scores.

Connecting parental income and education to high-scoring and low-scoring schools is condemned as making excuses for bad teaching. The usual retort is, "You mean you think it's OK for only 20 percent of the students to pass the AzMERIT test?" So the problems of poverty are pushed to the background, or, worse, they're blamed on the schools: "Students who go to failing schools end up in poverty. We need to fix the schools so those poor children have the education they need to make something of themselves." Voila! Society is off the hook for the scandalous level of income inequality and the inexcusable level of poverty in a country as prosperous as ours. If it's all the schools' fault and it has nothing to do with the way our society is structured, that means the way we as a country treat people at the bottom of the socioeconomic ladder is just fine. It's the schools' failure, not ours.

If you equate test scores with the quality of education and educators, the logical conclusion is that "failing schools" are filled with failing teachers and administrators, while high-scoring schools must have faculties and administrations that know how to get things done. Welcome to the wonderful world of self-fulfilling prophecy. Schools in high-income areas are already more attractive to teachers for a variety of reasons, but if you add the idea that teachers will be branded failures if they teach at "failing schools," that stacks the deck against those schools even more. It becomes increasingly hard to attract teachers to those schools—meaning teacher vacancies will be concentrated there—and more and more, the teachers in those schools will be people who couldn't get jobs at the "good schools." And why "throw money" at those bad schools, where it will just be wasted by a staff that has no idea how to teach kids? Better to reward successful schools by giving them more computers and science labs, and working air conditioners and working toilets. Negative societal perceptions of schools with low-income students lower the quality of those schools, which means they increasingly earn the label "failing schools," which means fewer teachers want to work there and less money is spent there, which means . . . the circle goes round and round as the schools spiral downward.

You'll find no better example of the misuse of scores to label schools as successful or failing than Arizona's "results-based funding" plan, which goes into effect this year. The idea is to reward "successful schools" for their success by giving them more money. Those schools will be able to give their teachers raises of $2,250 or more and have plenty left over to buy educational goodies the rest of Arizona's schools can only dream of. Naturally, teachers will flock to those schools, meaning every classroom will have a certified teacher cherry-picked from multiple applicants hoping to be among the select few. The students will have the best teachers Arizona can buy, along with newer textbooks, more state-of-the-art computers and other educational supplies than schools not making the "successful school" cut. And how will success be determined? By scores on the AzMERIT test, of course. Other factors will come into play, especially in the first year of the program, but it's clear that schools filled with children from the state's most privileged families will be very well represented on the results-based funding list.

We get very little value from the tests, but they cause a serious amount of damage to our students and our public schools. That's why I write about those damn AzMERIT scores so often, and will continue to do so when the occasion arises. Expect two more posts, at least, in the near future.
