Co-accountability in Indigenous program evaluation and service delivery

Earlier this year, in response to damning reports about the evaluation and funding of Indigenous programs, the federal government announced $40 million over four years to improve the evaluation process. However, as Fred Chaney said at a Productivity Commission roundtable on evaluation: “The system under which we operate is broken, and it is the broken system that we should be evaluating.”

The broken system

Although both sides of politics talk about the need for more evidence-based policy, history has shown that policies and programs are more likely to be based on ideology than actual evidence. The frenetic policy churn that accompanies each new electoral cycle can result in good programs being abandoned in favour of ‘new’ programs that repeat the mistakes of the past. Often governments do not want to heed the advice of research commissioned by a former administration, and with each new government many valuable lessons about why programs have failed or succeeded are lost.

One program that has perhaps suffered more from the electoral merry-go-round than any other is the former Community Development Employment Projects (CDEP) scheme. CDEP was initially implemented at the behest of a remote Indigenous community in the Northern Territory that wanted an alternative to what they called ‘sit-down’ money, or the unemployment benefit. Back then, in 1977, CDEP was designed to be a community development program, but somewhere along the way its objectives changed and CDEP became an employment program. The government of the time referred to CDEP as a ‘stepping stone’ to employment.

However, an evaluation of the program found there were too many perverse incentives preventing people from transitioning from CDEP to mainstream jobs. When the Coalition government won the election in 2013, it abandoned CDEP in favour of a new program, the Remote Jobs and Communities Program (RJCP). Yet while CDEP was criticised for being too lenient and paying people wages when they did not turn up for work, RJCP went too far in the other direction and was far too punitive. RJCP was also found to have unnecessarily high administration costs.

Realising its mistake, the Abbott government reformed and rebadged RJCP, renaming it the Community Development Programme (CDP). Some commentators have argued the name’s similarity to CDEP was a deliberate ploy to gain community buy-in for the new program. The pendulum swings between CDEP and CDP illustrate the danger of going too far in either direction: too far one way and there is too much leniency and not enough accountability; too far the other way and the model is too rigid and punitive, and fails to take into account the individual needs of different communities.

A recent review by the Australian National Audit Office (ANAO) of the federal government’s Indigenous Advancement Strategy (IAS) found the Department of the Prime Minister and Cabinet had not implemented the Strategy effectively, and the grants administration processes “…fell far short of the standard required to manage billions of dollars of funding.” This is not an isolated incident. The former Northern Territory Coordinator-General for Remote Services, Olga Havnen, documented the failure of government funding and service delivery in her Remote Services Report in 2012:

“There are not only massive pre-existing service gaps but also a serious lack of high quality, evidence-based program and service development…This lack of long-term strategic vision means governments have spread resources as widely as possible in a ‘scatter-gun’ or ‘confetti’ approach. This results in partially funding community initiatives for short periods with no long term strategy for how the positions created or initiatives undertaken will be sustained.”

Havnen’s report also highlighted the way many organisations continued to be funded after evaluations had identified ‘serious deficiencies’ in their program delivery. However, whistleblowers are rarely rewarded: following the release of her report, Havnen was promptly sacked.

There are a number of examples of government abandoning the monitoring and evaluation of programs, or deliberately ignoring evaluation findings. An inquiry into Aboriginal youth suicide in remote areas found $72 million was spent trying to implement reforms following the Gordon Inquiry into child abuse, but this money could not be accurately accounted for. Monitoring was abandoned because it was deemed too difficult to track the progress of actions against particular recommendations.

A recent evaluation of the cashless debit card (CDC) trial found three-quarters of all participants said the CDC had made no positive change to their lives, and almost half said it had made their lives worse. Despite this, the government announced in the 2017-18 Budget that it would extend and expand the trials, with two new locations to be trialled from 1 September 2017.

Improving the evidence base and decision-making

If the government is serious about closing the gaps in Indigenous outcomes then it must start making policy decisions based on actual evidence, not ideology. My research report ‘Mapping the Indigenous Program and Funding Maze’ found that of 1082 programs, only 8% (88) were formally evaluated, and my recent report ‘Evaluating Indigenous programs: a toolkit for change’ found that of those that were evaluated, only 6% used rigorous methodology. Evaluations were ranked using a scale based on industry best-practice standards and the Victorian Government’s Guide to Evaluation, which classifies evaluation and data methodologies by level of sophistication.

Yet while randomised control trials are often considered the ‘gold standard’ of research evidence, they are not always appropriate or practical. Estimating the counterfactual (what would have happened in the absence of the program) can be virtually impossible in Indigenous communities, given the myriad of different programs operating. Moreover, there are few people in Australia sufficiently trained to carry out this type of evaluation.

There is also a difference between a health intervention and a program. While there may be evidence for the benefit of an intervention, there may not be evidence on how best to deliver that intervention as part of a program. A review of Indigenous health projects in WA found that although there was strong scientific evidence for the health interventions, the actual delivery of those interventions varied considerably depending on the skill and capability of the staff. Recent research also suggests it can take up to 17 years for health research findings to be adopted into practice.

There is no point conducting ‘rigorous’ evaluations if the evidence is not used. Instead of focusing on having the highest standard of evidence for assessing the impact of a program, it may be better to consider how to ensure evaluation learnings are used to inform program practice, similar to the continuous quality improvement (CQI) processes used in the health sector.

Currently, government departments may conduct evaluations to analyse funding distribution and to report on the achievements and impact of a program. However, these types of evaluations can make organisations feel they have to pass a test in order to continue to receive funding, and they may resist the evaluation process as a result. Resistance can be indirect or subtle, such as avoiding or delaying entering program data into databases.

My research found organisations are more likely to engage with the evaluation process when it is presented as a learning tool to improve program delivery than when it is presented as a review or audit of their performance. This approach differs from traditional ideas of accountability: it involves moving away from simply monitoring and overseeing programs towards supporting a learning and developmental approach to evaluation. The primary focus of developmental evaluation is adaptive learning to inform the implementation of programs or community development initiatives. This is very different from a top-down, technocratic government approach, which may have strict accountability measures in place but fails to recognise there may be better ways of delivering the program.

An example of a good developmental evaluation is a recent report by Social Ventures Australia on the Martu Leadership Program (MLP). The evaluation found the growth and development of the Leadership Group could not have occurred if the focus had remained on achieving fixed objectives. Although set outcomes and targets are often the cornerstone of evaluation, the success of the MLP demonstrates the unexpected positives that can arise from adopting a more flexible ‘learning and developmental’ approach.

Unfortunately, governments can be afraid of experimenting with different program approaches, but often it is only through this process of trial and error that evidence about what truly works can be collected. In addition, adopting a genuine ‘learning by doing’ approach can be a very accountable process, as evidenced by Malaysia’s National Transformation Program, which has a built-in reporting and monitoring framework that enables regular reporting on outcomes and a process for escalating concerns.

 

People, and by extension programs, are not like an assembly line. Cookie-cutter solutions do not tend to work. So while government should set objectives for programs, it should not be overly prescriptive about how those objectives are achieved. Where there are national or state-wide programs, there needs to be a balance between maintaining program fidelity and allowing flexibility for local contexts.

The extra $10 million a year the federal government is taking from the Indigenous Advancement Strategy funding for Indigenous program evaluations will not go far. In fact, it will be possible to formally evaluate only a small proportion of the 1000 or so Indigenous programs the federal government funds. Nor would it be a good use of resources to formally evaluate small programs when the cost of the evaluation could outweigh the cost of actually delivering the program. In this situation, the government should provide funding and training for organisations to self-evaluate and for online data management systems to collect data.

Adopting a co-accountability approach to evaluation would help ensure that organisations receiving government funding are held accountable for how they have spent the money and whether their programs achieve the desired outcomes, and that government agencies are held accountable for monitoring whether organisations are meeting their objectives and for working with them to improve their practices if they are not. The key point here is working with organisations to help them improve their practices. The best way this could occur is through an overarching accountability framework with regular feedback loops to monitor the achievement of outcomes, with the findings from those delivering the program on the ground being used to inform its ongoing implementation. A number of Aboriginal organisations have already recognised the valuable role that regular collection and analysis of data can play in improving service delivery and ensuring that programs meet participants’ needs. For example, the Waminda Aboriginal Women’s Health Service has a holistic and comprehensive service model, which includes both evaluation and CQI processes.

Going forward, the government should vest greater decision-making power in Indigenous communities by creating a co-accountable approach to service-delivery management and outcomes. In this framework, communities could hold the decision-making capacity as to how and where money is spent for services, according to each community’s individual needs.

Sara Hudson is a Research Fellow and Manager of the Indigenous Research Program at the Centre for Independent Studies. Her report, Evaluating Indigenous programs: a toolkit for change, was published last week.