THIS FALL BROUGHT the first opportunity since 2019 for students in Massachusetts to enjoy a “normal” first day of school. The joy of learning is returning to Commonwealth classrooms, and preliminary reports suggest early signs of improvement in key areas like attendance.
Yet some numbers are down. As we learned Thursday, average student results on the Massachusetts Comprehensive Assessment System (MCAS) are lower than they were in 2019.
What does this dip in scores mean? Is this a sign that public education across the Commonwealth has declined in quality over the past three years? Have our schools gotten worse?
Let’s try a thought experiment. Imagine two relatively similar schools, with comparable pre-pandemic MCAS scores. Teachers at each school were trained in similar programs and possess equal levels of experience. Each school follows the same curricular standards and aligns classroom instruction with those standards. The materials each school uses are similar. Their buildings and grounds are almost indistinguishable.
Imagine that in one school community, students were insulated from the worst effects of the pandemic. Parents were able to work from home, oversee remote schooling, and offer additional support. Young people felt safe, and their families remained intact. Family resources were deployed for educational purposes and enrichment. The pandemic was a challenge, but one that was mitigated to a significant degree.
In the other school community, students felt the effects of the pandemic acutely. Family members became sick, were hospitalized, and may have even died. Working in so-called essential fields drew caregivers away from home during the day. Internet was often slow and unreliable, and students competed for quiet space with siblings. Young people felt vulnerable, frightened, and isolated.
Now send everyone back to school and sit them down for a test. Which school will appear to be the higher performer? Which school will seem like it has declined in quality?
The current dip in average MCAS scores is clearly a byproduct of the pandemic. It would make little sense to draw inferences about school quality from these scores.
And yet, the Department of Elementary and Secondary Education (DESE) has indicated that it will use MCAS scores to rank schools. To its credit, DESE has signaled that it will generally avoid descriptors like “underperforming” this year, as part of its “accountability lite” model. However, it is planning to generate percentile rankings, which assign schools a 1-to-99 score based largely on MCAS results and related measures. So, while DESE may not directly penalize any schools for their MCAS scores, it will still engage in a type of unproductive labeling so often associated with traditional accountability systems.
We believe that is a mistake. But we also want to underscore that the outsized use of test scores to measure school quality is not merely inappropriate in the wake of COVID-19. The impact of the pandemic simply offers a vivid illustration of a problem that has long existed with our accountability system.
As research indicates, test scores are highly indicative of the inequalities that afflict our communities, and are not a valid basis for determinations about overall school performance. As scholars have repeatedly shown, the leading predictors of student standardized test scores are demographic variables like family income and parental educational attainment. This makes sense: students spend more time outside of school than inside, and access to opportunity remains highly unequal. Acting as if test scores are a reflection of schools alone requires us to pretend that inequality doesn’t exist, or that it doesn’t matter.
To be clear, we are not opposed to measurement. Nor are we seeking, as critics might suggest, to let schools “off the hook.” In Lowell, for instance, more schools improved their test-based percentile ranks than saw declines, a fact that could be used in marketing materials, were it actually indicative of school-level changes. We are simply opposed to pretending that MCAS scores can be used as an accurate measure of school quality.
We believe that educational data can be a powerful tool for school improvement. This past spring, we created a partnership between the University of Massachusetts Lowell and the Lowell Public Schools. Building on our work together in the Massachusetts Consortium for Innovative Education Assessment, the partnership is focused explicitly on data use at the school and district level. Our approach, however, is different from DESE’s in two fundamental ways. First, we are committed to assembling an accurate and unbiased picture of school quality, meaning that we look at a much broader range of indicators that correlate less strongly with demographic variables. And second, we have a different understanding of how schools improve. Rather than using data to rate and rank schools, the Lowell partnership seeks to empower educators and school leaders—giving them tools that enhance their professional capacity.
We also believe that MCAS scores can be used in productive ways. But for that to happen, we must ensure that the results are used responsibly. For instance, federal and state authorities might use student standardized test scores to assess the disparate impact of the pandemic on student learning. Such authorities might then target additional resources to schools and districts serving students who have suffered the most. Learning more about how to effectively channel aid to young people and their communities would be extraordinarily valuable. But to our knowledge, that is not currently on the table.
The Commonwealth’s current measurement and accountability system is narrowly conceived, focusing chiefly on standardized test scores and ignoring most of what good schools do. It is hopelessly biased against low-income students and students of color. And it relies on threats and punishment, which haven’t been found to foster improvement in education, or in any other industry. A better system, which we are trying to model in our work in Lowell, would focus on continuous improvement in teaching and learning: empowering educators and school leaders with information, rather than weaponizing data against them.
There was nothing good about this global health crisis. Yet it has unsettled the status quo in ways that we couldn’t have predicted, creating an opportunity to rethink measurement and accountability. We know that bureaucratic systems are resistant to change. But we’re hopeful that disruptions to MCAS will lead to new conversations about the role of high-stakes testing in Massachusetts. Our current approach simply doesn’t make sense, and that has never been clearer.
Joel Boyd is superintendent of the Lowell Public Schools. He previously served as a superintendent in New Mexico and in school leadership positions in Boston, Philadelphia, and Miami. Jack Schneider is an associate professor of education at the University of Massachusetts Lowell. He is co-founder of the Massachusetts Consortium for Innovative Education Assessment and director of the Education Commonwealth Project.