Your Views: Automated essay scoring could lead to high-tech ways to fool it

POSTED: August 28, 2014 12:50 a.m.

As a writing teacher at UNG, I spend a great deal of time during the summer and into August thinking about the incoming group of students I will be working closely with during the fall semester. Lately, I have been thinking a lot about how much of my students’ writing has been graded by a computer rather than a human, an unfortunate reality in this era of high-stakes testing.

I am not ready to turn over the grading of essays to computers, and not just because I have seen “2001: A Space Odyssey” one too many times. The Educational Testing Service, which administers standardized tests such as the SAT, has developed an automated scoring engine called e-Rater. According to a recent New York Times column, David Williamson, a research director for ETS, says e-Rater can grade 16,000 essays in 20 seconds. I’m not sure how long these essays are, but by comparison, it takes me about an hour to make my way through three or four five-page student papers. Grading 16,000 essays in 20 seconds saves a lot of time and money, which state legislatures and higher education administrators generally like. But it isn’t effective, and it harms our students’ long-term development as writers.

Automated essay scoring privileges long, windy prose and flashy vocabulary. It discriminates against English-language learners and other writers whose native language is not English. It doesn’t check facts: it finds it perfectly acceptable to write that the War of 1812 took place in 2001, or that the Atlanta Braves won the World Series in 1995 and 1996. In short, it doesn’t consider the communicative dimension of writing.
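To see how shallow such measures can be, consider a toy scorer. This is a deliberately crude sketch of my own, not ETS’s actual model; the features and weights are invented purely for illustration:

    COMMON_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}

    def toy_score(essay, max_score=6.0):
        # Score on surface features alone: length, word size, "rare" vocabulary.
        words = essay.split()
        if not words:
            return 0.0
        length_signal = min(len(words) / 500, 1.0)             # longer looks better
        avg_word_len = sum(len(w) for w in words) / len(words)
        vocab_signal = min(avg_word_len / 8, 1.0)              # big words look "smarter"
        rare = sum(1 for w in words if w.lower().strip(".,") not in COMMON_WORDS)
        rarity_signal = rare / len(words)
        # Nothing here reads for meaning or checks a single fact.
        return round(max_score * (0.5 * length_signal
                                  + 0.3 * vocab_signal
                                  + 0.2 * rarity_signal), 1)

    print(toy_score("The War of 1812 took place in 2001. " * 100))

Fed 100 repetitions of that single false sentence, this scorer awards roughly 4.5 out of 6. However a commercial engine weights its features, any scorer built only on surface signals shares this blind spot.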

I follow the lead of Les Perelman, the former director of undergraduate writing at MIT, who now devotes his time to testing the algorithms of automated essay-scoring software. With the help of students at MIT and Harvard, Perelman developed a program he calls Babel, which at the touch of a button generates verbose prose centered on a selected keyword. In a recent Chronicle of Higher Education column, Perelman described running Babel, then cutting and pasting the generated prose into the essay-scoring software used by the Graduate Management Admission Test. He, or really Babel, received a 5.4 out of 6.
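Perelman’s column doesn’t publish Babel’s code, but the underlying trick is easy to sketch: stitch grandiose sentence templates around the chosen keyword. The toy generator below is my own illustration, not Babel itself:

    import random

    # Babel-style gibberish: random pompous templates filled with one keyword.
    TEMPLATES = [
        "The quandary of {kw} has, by its very nature, perpetually confounded humankind.",
        "{kw}, though ostensibly axiomatic, remains the emblem of our epoch's profundity.",
        "To adumbrate {kw} is to interrogate the very assemblage of civilization itself.",
        "Notwithstanding vociferous protestations, {kw} endures as a crucible of veracity.",
    ]

    def babble(keyword, sentences=8):
        # No grammar model, no meaning; just template roulette.
        return " ".join(random.choice(TEMPLATES).format(kw=keyword)
                        for _ in range(sentences))

    print(babble("education"))

Feed a page of this output to any scorer that rewards length and rare vocabulary, like the toy above, and it will do well while saying nothing at all.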

As college textbook publishers such as Pearson and college gatekeepers such as ETS continue lauding the benefits of automated essay scoring, I ask that we push back against its rise. Some colleges are experimenting with computer grading of writing in large courses such as first-year writing, but let’s not let this happen at UNG.

Check out the online petition against computer scoring to see the case for human readers and to add your name to the more than 4,000 people who oppose taking the human dimension out of communication. It’s in the best interest of the thousands of students currently enrolled, and of those we hope will one day enroll, at our institution.

Michael Rifenburg

