In a recent opinion piece in The New York Times, “When the Circus Descends,” David Brooks derided opponents of Common Core Standards, implying that they were ideologues on the far left and far right making “hysterical claims and fevered accusations.” But as I visit classrooms across the city talking to teachers about the Common Core, I don’t hear any hysterical claims or fevered accusations. I do hear one deep concern:
That the test will be a disaster.
Here’s the thing: I haven’t talked to anybody—anybody!—who objects to the actual Common Core Standards. The Standards are incredibly vague; basically, they value the processes of analytical reasoning, reading, writing, speaking and listening. In other words, the Standards are like the education version of Peace, Love and Understanding. Who could possibly object?
The problem is that we are about to test the standards—and people do not like the tests. The teachers on the East Coast who are protesting, the children fleeing classrooms in tears, the parents forming an “Opt Out” movement are not a bunch of ideological clowns. They are angry because this new Common Core test, implemented before almost any teacher or principal had seen it, may have enormous power over their future without any serious public discussion. What are these tests, exactly? Do they even measure the standards?
Here in California, where the launch has been much slower, teachers across Los Angeles are administering a field test version of the Common Core test this month. There are two versions being rolled out nationally — here, it’s the Smarter Balanced Assessment Consortium test.
This morning, I took the online practice test for 11th grade English. Here’s a breakdown of what it is, what it isn’t and whether it measures the standards:
First, it’s not all multiple choice, but it does have some multiple-choice elements. As on the California State Tests or the Verbal SAT, students are required to read short passages and answer multiple-choice comprehension questions. The content tends to be grounded in practical real-world concerns: whether a city should fund public art, whether teenagers should have a curfew, whether water should be fluoridated.
Sometimes students can pick more than one correct answer. The emphasis is on identifying the author’s main argument and analyzing the text for evidence supporting the argument, picking out sentences that support the main thesis and eliminating those that are irrelevant. Will this help our students read more deeply? Maybe—as long as it doesn’t become a mechanical search for related elements, without the deep intuitive thinking that makes ideas coherent.
Second, the test has some short answers and some longer written responses. In some places students have to construct short answers, which they type into boxes. In other sections, they have to write a persuasive letter or a short opinion piece based on an analysis of one or two texts. Though this option would seem to be much better than multiple-choice questions, the effect may be similar depending on how these answers, letters and essays are graded.
Are students being graded on grammar? Spelling? Complexity of sentence construction? Correctness of the response? Are these responses being graded by a computer or a human being? If they’re being graded by a computer, what kind of sentences is the computer programmed to find acceptable? I’ve read that computer-graded essays (yes, in some places essays are being graded by computer) give higher scores for longer words regardless of context. Will teachers be pressured to coach their students to fill their essays with preposterous words?
Most likely, the essays will be graded by moonlighting teachers using a rubric, or chart, of the desired qualities. Is this a good thing? I don’t know—how good is the teacher? And how is this test actually any better than the SAT writing test that was just thrown out because, according to the College Board, it was “not predictive” of student success? How is it better than the California State Exit Exam writing test taken by all students for graduation? Or the EAP writing test they all take for college placement? Our students already take writing tests up the wazoo. Nobody seems to think it has helped them write, and a lot of teachers (including me) think it has made their writing worse.
The problem with standardized tests is just that: they’re standardized. In other words, preparing for them tends to produce formulaic writing, reading and thinking—the very writing, reading and thinking that caused us to despair that our students couldn’t write or think, and made us adopt the Common Core in the first place. These new tests definitely rely more on the seeking of evidence and the analysis of arguments. Studying for them will, at the lowest level, probably inculcate the mechanical production of those skills, which is better than not having them at all.
So if I say I’m underwhelmed, I hope I’m not being a clown. Yes, the practice test embodies some aspects of a reductive form of analytic thinking, in that Flatland two-dimensional sense that seems to dominate so much of educational thinking these days. But does it really measure what we want our students to learn? Take the online practice test yourself. What do you think?
Maybe the best test of Common Core skills will be our ability to have a civilized public discussion of how they measure up to our ideals of what an education should be.
Ellie Herman is a guest commentator. Read more of her thoughts at Gatsby in LA.