I WAS A 6th GRADE math teacher in 2000-2001 when the state rolled out the 6th grade math MCAS test for the first time. I remember being furious and frustrated with the test itself when I saw it. I thought the test was too hard. The work it expected of 6th graders was not aligned with my previous notions of what 6th graders could or should be able to do.
I was wrong. The 6th grade math MCAS was indeed a very rigorous test, particularly by standards of the time. But my 6th graders were more than capable of rising to that challenge. The test pushed me to expand my understanding of what my kids were capable of doing, and ultimately made me a better teacher.
Today, I am an administrator at a network of charter schools. I confess to having similar feelings when I saw our kids’ performance on the 2017 “next generation” MCAS. The test must somehow have been flawed. For context, our students had ranked among the top performing districts in the state under the “legacy MCAS” and PARCC tests. However, under the “next generation” MCAS, while our students still performed very well when compared to state averages and other Boston schools, our standing relative to top suburban districts had dropped considerably.
The test is not the problem. In fact, because it is more rigorous than the previous version, we recognize that the next generation MCAS provides us with an important opportunity to reset our expectations for our kids. We welcome the elevated rigor, despite the fact that it resulted in lower relative performance for our students this year. That’s because our goal has never been to help our students perform well on standardized tests in grades 3-8. Those tests are just a benchmark along the way that can signal how well we are preparing our students to succeed in college and beyond.
Although we are thankful for the opportunity provided by the state’s revised and more rigorous standards and assessments, we also recognize that the mere act of raising the bar will not by itself result in a better education for our kids. Those standards and assessments only provide us with goals and direction. Now comes the hard part: working to prepare our kids to meet the challenges spelled out in those standards and assessments. And nowhere is that work more pressing, more urgent, and more incomplete than when it comes to providing meaningful educational opportunity to our students of color and low-income students in communities across the state.
There is a debate to be had as to how we can best answer that challenge, and whether we can best do so through reform or revenue, or a combination of the two (see the 1993 Education Reform Act).
What is abundantly clear is that we won’t answer that challenge by continued dithering over whether our standards and assessments are just right. We can’t afford to continue to switch up our assessments from year to year. Doing so is the equivalent of spinning our wheels.
It should also go without saying, but we certainly won’t answer that challenge by ceasing altogether to measure the progress our kids are making. That idea should be a non-starter to anyone who is seriously invested in providing educational opportunity to all students across our state. We simply can’t close that opportunity/achievement gap if we stop tracking it altogether.
We’re fortunate to have the most rigorous standards and assessments in place that we’ve ever had. Now let’s get to work on the stuff that really matters. Elevating teaching. Empowering principals and schools. Focusing on the toughest, most rewarding, and important work that our schools can do: creating classrooms and schools where all of our kids can thrive.
Jon Clark is co-director of the Brooke Charter Schools in Boston.
CommonWealth Voices is sponsored by The Boston Foundation.
The Boston Foundation is deeply committed to civic leadership, and essential to our work is the exchange of informed opinions. We are proud to partner on a platform that engages such a broad range of demographic and ideological viewpoints.


Did Jon Clark ever respond to The Globe’s questions on the dark money, pro-Question 2 group he co-founded, Great Schools Massachusetts? And while Clark readily acknowledges that Brooke Charter Schools’ performance on the 2017 MCAS 2.0 “dropped considerably,” that little fact doesn’t show up on Brooke Charter Schools’ website. Those results were made available last October/November, and yet the only entry for October 2017 in the charter website’s “In the News” section (also its most recent entry) is U.S. News’ reference to Brooke’s addition of high school grades, while under “Achievement” only the 2016 PARCC performance is highlighted.
Three articles need to be understood about the experimental tests: pieces on Arne Duncan, John White (Louisiana), and Jay P. Greene (U. Arkansas), with deutsch29 and Diane Ravitch reviewing the whole batch.
What they say about the reliability and validity of NAEP is also what I say about the reliability and validity of MCAS…. https://curmudgucation.blogspot.com/2018/04/duncan-revises-again-courage-and-betsy.html
https://jaypgreene.com/2018/04/02/the-pre-spinning-of-naep-results/
https://deutsch29.wordpress.com/2018/04/01/louisianas-2017-naep-scores-must-not-be-pretty-john-white-writes-to-nces-prior-to-2017-naep-release/
I call BS on Mr. Clark’s article; it is just a “puff piece” selling you “stuff.” So it is a “puff stuff piece.”
Subject: psychometric “Fudge”
*Developing tests for assessing students’ abilities has been a part of the American educational scene for over a century. There are proven and established psychometric techniques to establish high degrees of reliability and validity for tests/products, as reported in technical manuals (see, for example, the Iowa Test, California Achievement Test, Stanford, Metropolitan, etc.), the tests we took when we were in school.
*PARCC/MCAS and Smarter Balanced should have provided similar evidence that they are proven tools for assessing students on credible “higher” standards (as Mr. Clark claims they are).
*It is impossible to tell how accurate and precise these experimental tests are because evidence of their accuracy is not readily available. Publishers of these tests do make mistakes that are uncovered on occasion and revealed (less frequently) to the public paying the bill.
*A greater concern is the “cut scores” that determine proficiency levels; DESE establishes cut scores for MA schools, and setting cut scores is a subjective decision. (This is where you really do a job on the Gateway Cities and the “test and punish” mode kicks in. That is intentional, because Baker and his ilk want to privatize the public schools.)
*Comparisons with other states’ testing programs would have provided relevant benchmarks for determining the cost-effectiveness of the MA testing program. This would have brought true transparency to Massachusetts’ emerging experimental tests, which have assumed great weight and expense over these past several years.
*These faulty tests are now being used for high-stakes consequences. The roll-out from design (logic plan) to implementation has been a disaster known as “test and punish.”
*What do these experimental state test results tell us? Not a lot. They are a very blunt instrument. I suggest we not even use them for Haverhill; even if you can get anything at all out of the “quadrants,” that is a very costly way to get data that is quite useless.
*The testing company (Pearson) fails to respond to objections from parents who claim its tests are not adequate.
*What about the state legislature that has to approve these expenditures from DESE for state testing? (I write to the three Haverhill reps consistently, telling them to stop wasting precious R&D money.)
School administrators need to be knowledgeable about psychometrics and be aggressively proactive in the public and political domain to CLARIFY all of these questions when parents are sold a “bill of goods,” as Mr. Clark has done here. The damage done to the integrity of student testing is a direct result of this controversy and presents a serious challenge. (All the years I spent doing testing in special education and training school psychologists to do special education testing have been erased by the fiasco of MCAS and the Pearson approach.)
So while Jon Clark writes about measuring progress with testing because “We simply can’t close that opportunity/achievement gap if we stop tracking it altogether,” the Brooke Charter Schools’ Annual Report 2016-2017 clearly states: “We are certain that despite the strong recruiting season, we will not meet the gap narrowing targets for special education students and English language learners set by the department…we believe it is in fact mathematically impossible for Brooke to meet the gap-narrowing targets set out for special education and English language learners…” Jon Clark is all about public schools closing all the gaps, but not so much about Brooke Charter Schools closing them.