NAPLAN sets bar too low to spot students at risk of failing
AFR
Just as tinkering with a barometer doesn’t actually change the weather, updating NAPLAN test benchmarks won’t solve the underlying crisis in Australian education: too many teachers aren’t equipped to be highly effective and too many students don’t get the support they need.
Following last week’s meeting of education ministers, student achievement will now be reported against four proficiency levels: “exceeding”, “strong”, “developing” and “needs additional support”. This replaces NAPLAN’s usual 10 grading bands, which also mark students against a national minimum standard.
The test for these changes will be whether they provide better information to parents and teachers, and whether that information leads to better action on underachievement. On both counts, the changes risk failing.
Adopting a proficiency benchmark has the potential to better identify students’ needs, provide clearer signals to parents, and raise standards. In theory, higher benchmarks should translate into higher academic standards, and with them, higher teaching standards and expectations within schools and from families.
But for this to work, transitioning from numbered bands to proficiency levels must include language that clearly describes each benchmark.
Yet some of the linguistic choices are a light touch. Labelling achievement that is, in fact, below the proficient level as “developing” softens the message and implies progress that may not be occurring. And defining what appears to be the proficient level as “strong” risks further confusing NAPLAN’s users.
Instead, ministers should have taken the opportunity to opt for an unambiguous scale: highly proficient, proficient, below proficient, and well below proficient.
It’s long been clear that the previously defined “national minimum standards” are exceptionally low: falling short of them, in effect, indicates functional illiteracy or innumeracy for a student at a given age. And because the benchmark is so low, many students who clear it, achieving at the national minimum level, should also be considered at educational risk, though they’re not always treated as such.
The problem has been that national minimums have been interpreted as equivalent to a satisfactory level, rather than one signalling significant underachievement. By way of comparison, about four in 10 Australian 15-year-olds don’t reach the national proficient standard in the OECD-run Programme for International Student Assessment, yet only about one in 10 Year 9 students falls below NAPLAN’s national minimum standard.
Still, changes to reporting will be effectively meaningless if they’re not matched with better intervention for underachievement. On this count, the track record has been disappointing.
The unfortunate truth is that teachers and parents have not consistently and effectively acted when NAPLAN shows students’ underperformance. A 2019 review found NAPLAN was rarely used to inform day-to-day teaching practice, and that many parents didn’t know when or how to support children needing extra help.
Disappointingly, a recent Productivity Commission report showed that the vast majority of students who fall below the minimum benchmark never go on to exceed it later in their schooling. Most of these students aren’t from disadvantaged backgrounds, nor do they have learning difficulties. They are students who need, but don’t receive, consistently high-quality teaching.
That’s not because teachers don’t want or care about their students’ learning. It’s because too many haven’t been equipped to deliver as effectively as possible.
A December report found that most Australian teachers believe they’re using evidence-based practices in the classroom, yet just as many are not up to date on evidence-based, scientifically informed teaching.
NAPLAN should signal the need for intervention and remediation, so new reporting must be matched with schools’ capacity to actually turn around poor outcomes. The new underachievement benchmark may help in this goal, but only if teachers have the tools to effectively intervene.
While it’s true that the purpose of assessment isn’t, by itself, to improve results (years of mixed NAPLAN results show this), it’s also true that national educational improvement can’t be achieved without credible and rigorous measurement and monitoring.
To this end, NAPLAN’s overdue renovations over recent years have largely been for the good.
For example, the test is now fully online and gives a better reading of students’ capabilities. And students now sit NAPLAN earlier in the school year, with results available more quickly – in theory, giving teachers and parents more opportunity to act on areas of need.
These refinements have occurred despite the relentless retrograde campaign against NAPLAN and standardised assessment more broadly. Yet, successive independent reviews have reached a consensus that while NAPLAN is generally valuable and here to stay, its diagnostic capacity is underutilised.
It will take both better testing and better teaching for Australia to turn around the recent history of disappointing education outcomes.
Glenn Fahey is program director in education policy at the Centre for Independent Studies.