Misreading the data will not help the teachers
The Australian
Outdated teaching methods based on disproved theories remain widespread, despite an abundance of good, readily available information on effective, evidence-based instruction.
The gap between research and practice is an enduring and critical challenge in education, and nowhere more so than in how to teach reading. Many children in developed countries with high levels of education spending have low literacy, even though almost all children can learn to read with good instruction.
What is preventing the uptake of proven teaching methods in classrooms? The Reading Recovery program provides an almost perfect illustration. It is arguably the most widely used intervention for children who struggle with reading in the early years of school.
Developed in New Zealand by Dame Marie Clay in the 1970s, based on her theories about how children learn to read, it is used in thousands of schools around the world. Its advocates are strongly committed to the belief that it helps the children who participate. Its critics say there is no good evidence that the program works, and that its teaching methods do not reflect what we now know about how children learn to read.
In this case, lack of evidence doesn’t mean lack of research. Reading Recovery has been the subject of dozens of studies over several decades.
Much of that research is of low quality by accepted evidence standards. But some of it is more rigorous, including longitudinal studies published in Australia, the US and England in recent years.
A large Australian study published by the NSW Centre for Education Statistics and Evaluation in 2016 involved more than 20,000 students. It found that children who had participated in Reading Recovery in Year 1 performed worse on the Year 3 NAPLAN reading assessment than a matched sample of students who had not participated in the program. That’s right. Worse.
After up to 20 weeks of daily one-to-one 30-minute lessons with highly trained teachers, these children ended up with lower reading ability than peers who had similar reading ability at the start of the study.
As a result, after years of ignoring researchers in Australia and New Zealand who had been loudly and unswervingly warning that Reading Recovery was not effective for most students, the NSW government finally stopped providing dedicated funding for it.
Yet despite one of the clearest findings in educational research, public and non-government schools around Australia have continued to fund the program from discretionary budgets. They are convinced that it works, and any new piece of research that appears to confirm that belief is seized upon.
New findings published in Britain last year would appear to vindicate the loyalty of Reading Recovery acolytes. In reality, however, they only prove the lengths to which Reading Recovery supporters will go to defend the program, even to the extent of obfuscating data.
The latest UK Every Child a Reader study, conducted by academics from University College London and funded and published by the KPMG Foundation, was launched with great fanfare at the House of Lords in December. The report claims to show that Reading Recovery in Year 1 was responsible for high scores in the General Certificate of Secondary Education (GCSE) 10 years later.
The KPMG Foundation commissioned an economic analysis which estimated a £1.2 billion ($2.2bn) boost to the economy if all struggling readers were given Reading Recovery.
However, closer scrutiny of the latest report revealed a methodological mystery: a group of students present in the five-year follow-up study published in 2012 was missing from the 10-year study. The missing children comprised an entire group of more than 50 students (about 20 per cent of the sample) who had formed a second comparison group in the original study and in the five-year follow-up. The omission of this second comparison group is neither acknowledged nor explained in the 10-year study report.
Why is this a big deal? Because the data from the missing second comparison group completely undermines the conclusions drawn in the published report.
To explain: in the original study there were three groups of students. Two groups came from a set of Reading Recovery schools: some of those students did Reading Recovery in Year 1 (the RR group) and some did not (the RRC group). A third, comparison group was drawn from a set of non-Reading Recovery schools (the CC group).
In the five-year follow-up study, the three groups were compared on their results in the Key Stage 2 (KS2) curriculum tests, taken in Year 6 of primary school. There was no statistically significant difference in the KS2 scores of the two groups of children in Reading Recovery schools (RR and RRC). Both of these groups had significantly higher KS2 scores than children in the non-Reading Recovery schools (CC).
That is, in Year 6, the children in Reading Recovery schools outperformed the comparison students irrespective of whether they actually participated in Reading Recovery.
This indicates that any advantage of the students in Reading Recovery schools was not attributable to participation in Reading Recovery; it must have been due to something else about those students, those schools, or both.

In the published version of the 10-year follow-up study, only two groups are compared: the students who did Reading Recovery (RR) and the comparison group from non-Reading Recovery schools (CC).
The students in Reading Recovery schools who did not do Reading Recovery (RRC) are omitted. The RR group had markedly higher GCSE results than the CC group, allowing the authors to conclude that “the positive effect of Reading Recovery on qualifications at age 16 is marked in this study and suggests a sustained intervention effect.”
Remembering that the five-year study was much less straightforward and conclusive, I wrote to the study's lead author, Professor Jane Hurry, and asked about the missing group. She replied that she had written two versions of the 10-year follow-up study, one that included the second comparison group's results and one that excluded them. The KPMG Foundation chose to publish the latter.
Hurry readily provided me with a copy of the unpublished version of the 10-year follow-up report. It shows that there was no difference in GCSE scores between students in Reading Recovery schools who had done Reading Recovery and those who had not (the missing RRC group). Both of these groups had significantly higher scores than the children in the comparison schools.
Again, this means the higher GCSE scores of children in the Reading Recovery schools were not due to participation in Reading Recovery. Children from the same schools who had not done the program performed just as well.
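The logic is easy to check with toy numbers. A minimal sketch follows (all figures are hypothetical, invented purely for illustration, not taken from the study): when the two groups within Reading Recovery schools score almost identically, and both outscore the comparison schools, the published RR-versus-CC gap must reflect a school effect rather than an intervention effect.

```python
# Toy illustration (all numbers hypothetical): mean outcome scores
# for the three groups in a design like the UK study's.
groups = {
    "RR":  52.0,  # did Reading Recovery, in Reading Recovery schools
    "RRC": 51.5,  # did NOT do Reading Recovery, same schools
    "CC":  44.0,  # comparison students, non-Reading Recovery schools
}

# Published comparison (RR vs CC only): looks like a large intervention effect.
print("RR  - CC :", groups["RR"] - groups["CC"])    # 8.0

# Full comparison: RRC students, who never did the program, show almost the
# same advantage, so the gap tracks the schools, not the program.
print("RRC - CC :", groups["RRC"] - groups["CC"])   # 7.5
print("RR  - RRC:", groups["RR"] - groups["RRC"])   # 0.5 (negligible)
```

Dropping the RRC row from that comparison, as the published report did, removes the only evidence that distinguishes a school effect from a program effect.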
Tolerance for poor evidence standards in education is not a victimless crime. The true cost of implementing ineffective reading programs extends far beyond the money spent on teacher training and teacher time.
There are enormous and tragic opportunity costs for the children involved, with profound impacts on their educational achievement and wellbeing.
Jennifer Buckingham is a senior research fellow at the Centre for Independent Studies and director of the not-for-profit FIVE from FIVE reading project.