In the past few months, President Obama, Gov. Deval Patrick, and the press have practically made “education reform” synonymous with “firing teachers.”
The president praised a Rhode Island school superintendent for firing high school teachers. Patrick proposed legislation to make it easier for superintendents to dismiss teachers in underperforming schools. The US Department of Education requires that many of the schools eligible for school-improvement money dismiss half their teachers. Along similar lines, prominent supporters of education “reform” are pushing charter schools, merit pay, and school choice.
What all these so-called reform initiatives have in common is the assumption that teachers in low-performing schools have the tools they need to turn their schools around but, for some reason, are refusing to use them. We therefore need to use the carrot (merit pay) or the stick (taking away their students via school choice or charter schools, embarrassing them by publicizing their students’ low MCAS scores, or firing them).
This view—that the right incentives (positive or negative) will produce the necessary changes in teaching—may be a very common one, but there is no data to back it up. Indeed, a close look at MCAS results shows surprisingly little difference between the quality of teaching in so-called “good” schools (wealthy, suburban schools with high MCAS scores) and “bad” schools (inner-city schools with low scores), once the results are averaged across all teachers in the district and disaggregated by student demographics, specifically race and poverty. Put another way, a low-income white student in a “good” suburban school tests essentially the same as a low-income white student in a “bad” inner-city school.
The implications of this finding are enormous: It suggests that the policies we are pursuing are unlikely to make much of a difference, because they don’t address the real problem.
What’s the point of getting rid of half the teachers at an inner-city school if the ones who replace them also lack the necessary tools? Similarly, replacing a public school with a charter school won’t by itself make any difference; either way, teachers need help, not blame.
They need help not because they do a poor job of teaching, but because they work with very needy children.
My analysis is based on MCAS results by type of district (wealthy suburban, low-income urban, etc.) and by type of student (non-poor whites and Asians, low-income blacks and Hispanics, etc.). The analysis was done separately for English language arts and for math. The results were similar, so the charts presented here focus on English results.
The measure of MCAS success is the proficiency index, created by the Massachusetts Department of Elementary and Secondary Education (DESE). The index runs from 0 to 100 with a higher score indicating a higher degree of proficiency on the test.
To understand more about the differences in MCAS performance between wealthy suburban districts and low-income urban districts, I divided Massachusetts school districts into four groups: wealthiest, medium wealth, medium-low, and poorest. The districts are grouped by their stress ratios, where the stress ratio is the combined percentage of students who are low-income, minorities (blacks and Hispanics), and of limited English ability. Since these are counted separately, the stress ratio can be as high as 300 percent. Each group has roughly the same number of students.
The largest districts in each group are shown in Table 1. The wealthiest group consists of districts with a stress ratio less than 12.5 percent and includes Wachusett Regional, Billerica, and Andover. The medium wealth group includes districts with stress ratios between 12.5 percent and 27 percent; its largest members are Newton, Lexington, and Bridgewater-Raynham. In the grand scheme of things, the wealthiest and medium wealth groups are not very different from each other.
The medium-low group includes districts with a stress ratio between 27 percent and 90 percent and its largest members are Plymouth, Quincy, and Taunton. The range in this group is very large; Plymouth at 27.1 percent is at one end of the spectrum and Chicopee, at 83.1 percent, is at the other.
The poorest group includes districts that have stress ratios greater than 90 percent. At 164 percent and 172 percent, respectively, Boston and Springfield are the largest two districts in this group. Almost all their students are either poor or minorities and half are both. The demographic challenge they face is far greater than any of the districts in the wealthiest two groups.
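The grouping rule described above is simple enough to sketch in a few lines. This is only an illustration of the arithmetic; the district figures below are hypothetical, not the article’s actual data (though the cutoffs of 12.5, 27, and 90 percent are the ones given here).

```python
def stress_ratio(pct_low_income, pct_minority, pct_limited_english):
    """Combined percentage of low-income, minority (black and Hispanic),
    and limited-English students. Because a student can fall into more
    than one category and each is counted separately, the ratio can
    reach 300."""
    return pct_low_income + pct_minority + pct_limited_english

def district_group(ratio):
    """Assign a district to one of the four groups by its stress ratio,
    using the cutoffs given in the text (12.5, 27, and 90 percent)."""
    if ratio < 12.5:
        return "wealthiest"
    elif ratio < 27:
        return "medium wealth"
    elif ratio < 90:
        return "medium-low"
    else:
        return "poorest"

# Hypothetical urban district: 60% low-income, 70% minority, 34% limited-English.
print(district_group(stress_ratio(60, 70, 34)))  # "poorest"
```

Because the three percentages are added rather than combined, a district like Boston or Springfield, where almost every student is poor or a minority and many are both, can post a ratio well above 100 percent.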
Chart 1 shows average scores for each group of districts; these are three-year averages of English scores in all grades. At first glance, the chart appears to confirm the popular view that poor teaching in urban schools is a real problem: the wealthier districts score well—the proficiency index is around 90—while the poorest districts, at 74, do not.
To understand what’s really going on, we’ll need to look at the demographics of students in each group. Chart 2 shows statewide average scores for five demographic groups. Whites (and Asians) who are not poor and who speak English have the strongest results (89.2). Blacks and Hispanics with the same economic and language background—that is, who are similarly non-poor and speak English—score less well, by about 8 points (81.3). Blacks and Hispanics who speak English, but who are also poor, score lower still—by another 8 points (73.8). Finally, blacks and Hispanics who are low-income and do not speak fluent English have the lowest scores of all (52.8).
As Chart 2 makes clear, low income, minority status, and limited English proficiency are each separate factors affecting student performance. That is why the stress ratio counts them separately and why students are broken out into these five groups.
Students in these five groups are, of course, not distributed uniformly across the four groups of school districts. The demographic breakdown of the state as a whole and of each of the district groups is shown in Chart 3.
The first two columns of the chart show that the difference between the two wealthiest groups is relatively small. The demographic composition of the medium-low group is very close to that of the state as a whole. And, most important, the demographic make-up of the large urban districts is very different from the other three groups. Given the limited language skills, vocabulary, and general knowledge all too many youngsters from minority and low-income homes bring with them to school, the challenges the urban districts face are far, far greater than most people realize.
Chart 4 shows MCAS results broken down by student demographics and by the four groupings of districts. What it shows is that the differences across the four groupings of districts—the different colored bars within each section of the chart—are far smaller than the differences across student types—that is, from one group of bars to the next. Put simply, average student results depend much more on student demography than district type.
Consider low-income white students, the second section of the chart. There is almost no difference between the scores of these students in the wealthiest districts (82.8) and in the poorest districts (77.5). Put another way, teaching quality is pretty much the same in the suburbs as in the inner city.
The differences between district groups are somewhat greater with non-poor blacks and Hispanics (from a high of 88.0 in the wealthiest districts to 78.3 in the poorer districts). This isn’t surprising: For starters, there’s much more variation among non-poor students, who run from children of college professors to children of blue-collar workers. Also, the decision about where to live (or whether to send children to suburban districts via the METCO program) undoubtedly reflects differences in parent education and motivation that are not captured by income data alone but that nonetheless have a big impact on student performance. Finally, the difference in classroom peers (there are a lot more favorable role models in suburban than urban schools) must also influence student performance. Although the MCAS data alone don’t allow us to quantify and measure these factors, it seems fair to say that if they could be taken into account, there’d be basically no difference in average teaching quality across district types.
Another way to look at this data is that the scores of non-poor whites in the inner-city districts are higher than the scores of low-income students—white or black—in the wealthiest suburbs.
One way to summarize all this information is to calculate what the scores would be in each district group if the composition of their student body were the same as the state average. In this calculation, for example, the wealthiest districts are still credited with an average score of 82.8 for their low-income white (and Asian) students, but this is applied to 12 percent of the students (the state average for this group) instead of 4 percent (the actual percentage in the group).
This works in reverse for the urban districts. Their score of 77.6 for non-poor whites is applied to 66 percent of their students (the state average) and not 18 percent.
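The reweighting just described is a standard demographic adjustment: each district group keeps its own score for each student group, but those scores are weighted by the statewide demographic mix rather than the district’s own. A minimal sketch of the calculation follows; the shares and most scores below are illustrative placeholders (only the 12 percent, 66 percent, 82.8, and 77.6 figures come from the text), not the full data behind Chart 5.

```python
# Statewide demographic shares (fractions of all students). Hypothetical
# except where the text gives a figure (12% low-income white/Asian,
# 66% non-poor white/Asian).
state_shares = {
    "non-poor white/Asian": 0.66,
    "low-income white/Asian": 0.12,
    "non-poor black/Hispanic": 0.08,
    "low-income black/Hispanic": 0.10,
    "low-income limited-English": 0.04,
}

# One district group's average proficiency index for each student group
# (hypothetical values in the ranges the article reports).
district_scores = {
    "non-poor white/Asian": 90.0,
    "low-income white/Asian": 82.8,
    "non-poor black/Hispanic": 88.0,
    "low-income black/Hispanic": 78.0,
    "low-income limited-English": 55.0,
}

def adjusted_average(scores, shares):
    """Average score the district group would post if its student body
    matched the statewide demographic mix: each student-group score is
    weighted by the state's share of that group, not the district's own."""
    return sum(scores[group] * shares[group] for group in shares)

print(round(adjusted_average(district_scores, state_shares), 1))
```

Running the same weighted average over every district group with the same statewide shares is what puts the four groups on a common demographic footing in Chart 5.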
The results of these calculations are shown in Chart 5. The red bars are the same as in Chart 1, showing actual scores for each district type. The blue bars show what the overall average scores for each district type would be if they all had the same demography.
The overall proficiency index in English for the wealthiest two groups would have been 88.9 and 87.1, respectively, instead of the 92.6 and 89.8 they actually scored. And the index for the poorest districts would have been 82.3, not 73.4. Put another way, there’s a 19-point difference in actual scores, but this would have been only 6.5 points if they all had the same demographics. Essentially, two-thirds of the difference between average scores in the wealthiest districts and the poorest can be explained by demographics alone. As pointed out earlier, much of the rest could be the result of differences in parental motivation and education and also in classroom peer groups.
In both wealthy districts and urban districts, there are stronger teachers and weaker ones. Some of the differences between individual students are undoubtedly the result of these differences in teacher ability. The point here is that differences in average scores across districts as a whole have far more to do with differences in demographics than with differences in teacher quality.
Imagine what would happen if all the teachers in, say, Wellesley and Lowell were to switch places. There would still be stronger and weaker teachers in each district. But the strong suggestion from the data is that there would be very little difference in average student outcomes—in either district.
Better tools needed
Far from minimizing the importance of good teaching, these findings underscore the importance of helping teachers learn the pedagogies that can move their students forward. Whether they are in Weston and Lexington, on the one hand, or in Holyoke and Lawrence, on the other, the vast majority of teachers do not have the tools necessary to meet the needs of low-income and minority students. As we’ve seen, scores of low-income black and Hispanic students in wealthy districts are far lower than scores of non-poor whites in those districts—and only barely higher than the scores of low-income minority students in the inner cities. The need for this kind of help is particularly important in the inner-city schools not because teachers there are somehow poorer teachers or care less, but simply because so many of their students are so very needy.
The problem with today’s popular remedies—like merit pay, charter schools, and firing teachers—is that they are about carrots and sticks, not about giving teachers better tools to meet student needs.
These results absolutely do not mean that children from disadvantaged homes are incapable of performing at high levels. We know from countless examples around the country that, when teachers are properly prepared, students can perform at high levels.
One example of this is the Bay State Reading Institute, which I chair. The institute has been working with Massachusetts elementary schools since 2006, training teachers to use data to guide instruction, employ research-based pedagogy, individualize instruction so that each child is taught at her level, and help children think critically about what they read. It also helps principals provide extra support to struggling readers and become effective education leaders of their schools.
The institute has extensive data on 282 first graders in the six schools that began working with us in the fall of 2006. Of the 282 students, 48 percent are low-income and 29 percent are minorities. The students were assessed three times each year in oral reading fluency and took the MCAS exam as third graders in the spring of 2009.
In the fall of their first grade year—the first year of their schools’ partnership with the institute—just over a quarter of first-grade students were at high risk (red) in fluency (Chart 6). Two years and eight months later—at the end of third grade—57 percent of those high-risk students were at benchmark (green) in oral reading fluency. Of the 45 students who rose from red to green, 71 percent were low-income, minorities, or both. This is a very hopeful message—good teaching was able to erase the fluency deficit for over half of these first-graders!
Interestingly enough, the same is not true for students at the beginning of second grade. Because of the gains made in first grade, a year later only 14 percent of these same students were still at high risk as the cohort entered second grade. However, very few—only 18 percent—of these high-risk second graders were able to achieve satisfactory reading fluency by the end of third grade. What this means is that you can’t help low-performing students become successful readers unless you start in first grade.
Fluency is an important—but not the only—factor in predicting successful reading comprehension. Chart 7 looks at the inter-relationship between reading fluency and comprehension. None of the students who remained at high risk in reading fluency at the end of third grade were proficient or advanced on the MCAS exam given that same spring. By contrast, 51 percent of the students who were at high risk at the beginning of first grade and whose fluency moved up to green during the intervening three years were proficient on the MCAS.
It is instructive to compare the students whose fluency increased from the start of first grade to the end of third grade with those who were in the green all along. Of the latter group, fully 75 percent were proficient on the MCAS. That is, the earlier a student reaches proficiency in reading fluency, the greater her chances of success on the MCAS. Statistics alone can’t tell us why this is the case, but the most likely explanation is that success in reading comprehension is as much about vocabulary and general knowledge as it is about the mechanics of reading. And the sooner a student becomes a fluent reader, the more time she has to use her reading skills to build vocabulary.
There is no doubt, then, that improved reading fluency leads to improved comprehension, and that strong instruction during the primary grades can make a huge difference in the reading success of children who come to school poorly prepared.
There’s a very important lesson here—school turnaround cannot be accomplished in a year or two, and requires the cooperation of teachers over several grade levels.
Most of the policies currently in place are unlikely to make much of an impact on student performance. If virtually all teachers lack the training and tools necessary to address the needs of low-income and minority students, firing one group of teachers and replacing them with others who are similarly unprepared will not make any difference.
Concluding from raw test scores that inner-city teachers are somehow “bad” teachers and teachers in suburban schools are “good” is grossly unfair to teachers in urban schools, the vast majority of whom work very hard and care deeply about their students. Worse, insulting teachers (unfairly) is no way to motivate them to change.
For sure, teachers in urban schools need help. But they need help not because they are poor teachers, but because of the great challenges they face.
Merit pay—particularly individual teacher merit pay—is unlikely to work. This is because the underlying problem is not teachers’ motivation, but rather their lack of training. Offering them more money to do what they don’t know how to do is a recipe for frustration, not for success.
School turnaround cannot be done quickly or on the cheap; it requires a partnership sustained over several years. That means concentrating enough money on any given school to make a difference, rather than spreading literacy funds across hundreds of small grants that, by themselves, are unlikely to make a lasting difference.
Because success takes several years, too much emphasis on short-term changes in MCAS results may backfire, since it diverts attention from the long-term changes necessary for success.
Gov. Patrick set out to be the “education governor.” Given the budget choices he’s made—consistently favoring education over other areas of state government—there’s no reason to doubt his sincerity. But he—or his successor—is unlikely to succeed if he continues to base his policies on a fundamentally flawed understanding of what is holding schools back.
Edward Moscovitch is president of Cape Ann Economics and chairman of the Bay State Reading Institute.