Computers Grading Common Core Essay Questions? What Could Possibly Go Wrong?


The high-stakes Common Core tests are supposed to emphasize thinking more than tests like AIMS do. One way to do that is to make written analysis an important component of the tests. But grading papers is a slow, expensive process. You have to hire readers, train them, and have multiple people score each essay to reach an acceptable level of consistency in the grading.

PARCC, which is putting together one of the tests states can choose from (Pearson Education holds the contract to administer its exams), has a solution. Let computers grade the essays. From Politico:

The PARCC exams are designed to challenge students to read closely, think deeply and write sophisticated analyses of complex texts. But hiring people to read all that student writing is expensive. So Pearson's four-year contract to administer the exams bases the pricing on a phase-in of automated scoring. All student writing will be scored by real people this coming spring. The following year, the plan calls for two-thirds to be scored by computer. The year after that, all the writing is scheduled to be robo-graded, with humans giving a small sampling a second read as quality control.

Some states are having a little trouble with the robo-grading concept, so PARCC spokesman David Connerty-Marin said the states are "conducting studies" to see how well it all works . . . except not quite.

[Connerty-Marin] later acknowledged that states aren't doing their own studies; they're relying on the Pearson report.

Right. And pharmaceutical companies should run the tests to decide whether their new drugs are safe and effective.

Get ready for the next wave of beat-the-test tutoring: How to fool a computer into thinking you're writing something important even though most of it is nonsense and gibberish.