November 07, 2024
Column

SAT-for-MEA switch troubling, not soothing

The Maine Department of Education, as reported in this paper, is “99.9 percent sure” that it will replace the 11th-grade Maine Educational Assessment with the college admissions test, the SAT. We agree with the Aug. 29 Bangor Daily News editorial that, at the very least, such a change in assessment policy deserves much more review.

In fairness, we acknowledge that this policy shift is not without merit.

First, it would mean one fewer test for the 75 percent of high school students in Maine who already take the SAT. Second, because the state plans to pick up the SAT tab, a college-bound student would save the $41.50 it otherwise costs to take the test. Finally, because the SAT has consequences (at least for the college bound), more students would be motivated to do well on it than is presently the case with the MEA. Given the No Child Left Behind requirement that every school make “adequate yearly progress” on the state test – with severe sanctions awaiting schools that don’t – it is a good thing to have a test that students take seriously.

But for us, the proposed SAT-for-MEA switch is more troubling than soothing. Here are our primary concerns:

• The two tests serve different purposes. As a standards-based test, the MEA is designed – question by question – to measure and report on student achievement of the Maine Learning Results. In contrast, the purpose of the SAT is to predict college grades.

Unlike the MEA, the SAT is a norm-referenced test: Each score is expressed relative to a national norm (somewhat like grading on a national curve), questions are not tailored to the Learning Results (or to any other state’s standards), and these questions are designed to produce the greatest spread of scores possible so that the SAT can do its intended job (prediction). For example, a question on which most students succeed is unlikely to appear on a norm-referenced test like the SAT because the question doesn’t contribute to score spread.

Yet such a question is precisely what one would expect, or at least hope for, when it corresponds to an important part of the curriculum and therefore gets a lot of attention in the classroom. For this reason, it is entirely admissible on a standards-based test like the MEA (as is an equally important question that the majority of students nevertheless get wrong). Consequently, the MEA paints a more accurate portrait of student achievement with respect to state standards than is possible with the SAT. It’s about validity.

• A reasonable response to this first concern would be to “augment” the SAT with state-developed questions drawn from the Learning Results, which in fact may be required by federal law if Maine proceeds in this direction. But augmenting the SAT would be a complicated and expensive process. In any case, augmenting does not address the presence of questions on the SAT that depart from the Learning Results – and that students therefore have not had the opportunity to learn – yet nonetheless figure into one’s score.

• In Department of Education policy discussions, the MEA has always been seen as ultimately providing an external check on student achievement of the Learning Results. If the MEA and a school district’s local assessments give widely divergent results, for example, a red flag goes up. The SAT, by virtue of its design and announced purpose, would not be credible in this role.

• Because SAT scores would be used to determine whether a high school meets the No Child Left Behind requirement of “adequate yearly progress” (AYP), instruction likely will be influenced by what the SAT assesses – and does not assess. But because of the inevitable mismatches between SAT content and the Learning Results, teachers would be teaching to the wrong test. Consequently, those features of the Learning Results that are absent from the SAT may receive less instructional attention than they presently do, just as SAT-tested objectives that diverge from the Learning Results may take up instructional space that they did not before.

• The SAT is the wrong test for judging high school AYP. This is not only because of the test’s questionable match with the Learning Results. Using the SAT for AYP purposes also would be patently unfair to schools whose students cannot afford private instruction to bump up scores through intensive coaching on test-taking strategies. This is more than unfair: It perverts the very idea of AYP and school accountability.

• Finally, we find dubious the argument that the SAT-for-MEA switch will increase college attendance. Three-quarters of Maine high school students already take the SAT, yet considerably fewer go on to college – a discrepancy suggesting that the source of the college-attendance problem lies elsewhere (e.g., financial considerations, the perceived relevance of college). In any case, the vast majority of Maine high school sophomores now take the PSAT – a close cousin of the SAT – which already should alert the unsuspecting student that he or she indeed has the right stuff.

Given the fundamental difference between the MEA and SAT, we cannot see how the latter can stand in for the former – unless the Learning Results and standards-based assessment are no longer central to Maine education policy.

Theodore Coladarci is professor of education at the University of Maine. He was a member of the Maine Department of Education’s Technical Advisory Committee from 1995 to 2004 and chaired it from 1999 to 2004. Robert Ervin is superintendent of schools in Bangor and also a former member of the Technical Advisory Committee.

