The hidden costs of research assessment exercises: the curious case of Australia

Research assessment exercises provide the government and wider public with assurance of the quality of university research, with the guiding principles being accountability, transparency, and openness. But is there the same accountability and openness when it comes to the public cost of these large-scale exercises? 

Ksenia Sawczak examines the situation in Australia as the research sector looks ahead to the new Engagement and Impact Assessment later this year. There seems little doubt this exercise will demand significant resources, with no guarantee it will achieve its stated goal of improving how universities engage with industry. Until the hidden costs of assessment exercises are revealed and a thorough consideration of their general utility is undertaken, questions will remain as to whether they are a responsible use of public monies.


Across Australia, universities are in the throes of pulling together submissions to Excellence in Research for Australia (ERA) – the comprehensive assessment exercise, overseen by the Australian Research Council (ARC), which serves as a stocktake of all research conducted in Australian universities. 2018 marks the fourth round of this gigantic exercise since its introduction in 2010, and its frequency means universities are now well-accustomed to a never-ending cycle of submissions, with preparations for a new round commencing as soon as the previous one ends. Beyond ERA, Australian universities undertake a multitude of other regular research-related reporting to government, such as the Australian Bureau of Statistics’ Survey of Research and Experimental Development and the Department of Innovation’s National Survey of Research Commercialisation, although in those cases the submitted data is not subject to evaluation.

Assessment exercises are, of course, nothing new. What is curious about Australia is that ERA functions in a vacuum: conducted by government, yet playing no role in informing federal research policy or funding allocations to universities. Above all, its role is to provide the government and the public with assurance of the quality of research being undertaken in Australian universities, with the guiding principles being accountability, transparency, and openness.

What is even more curious, though, is the one-sided adherence to these principles. This was never more apparent than in a subtle change the ARC made to the guidelines for the 2018 round, one with implications for openness that nearly went unnoticed: the decision not to collect information from universities on the amount of time spent preparing submissions. This important information was collected as part of the last exercise in 2015, but the ARC has refused to make it publicly available on the grounds that it was not part of the evaluation. Clearly, the figure must have been astronomical, and one it is not in the ARC’s interests to disclose, nor to collect again.

Yet for an exercise so firmly grounded in principles of accountability, transparency, and openness with the public – principles particularly evident in the prefaces to the National Reports the ARC releases after each ERA, which strongly emphasise the importance of assuring the public of the value of government investment in research – one element remains missing: the public cost of the endeavour.

The themes of accountability, transparency, and openness are even more pronounced in this year’s ERA round, with the ARC indicating it will increase the volume and type of information available to the public by releasing submission data. While it is not yet known what this will entail (and questions of privacy have been raised), it is clear the ARC is committed to ensuring the public has as much information as possible on how universities are performing across disciplines and across different elements of the exercise. In addition, and perhaps as a tactic to deter universities that might be inclined to engage in gaming, the ARC has introduced further rigour by asserting its right to audit submissions. This is presumably a response to evaluation committees’ suspicions about some 2015 submissions, which resulted in ratings of “not assessed” being given in a number of cases.

While these are all noteworthy steps in the ARC’s follow-through on its commitments to lofty ideals of responsibility, they are pointless without consideration of the public care factor and the costs. First, is the public at all interested in ERA, and does it really want to see our submissions? In truth, ERA is largely unknown outside of Australian academia and – whether we like them or not – international rankings have become the most recognisable worldwide tool for gauging the performance of universities. Everyone knows them, and they are also the tool most Australian universities now use to set performance targets and monitor their own achievements. Thus, the ERA care factor is probably low. Second, against the backdrop of a cash-strapped higher education sector in which much has been made lately of how Australian universities spend money (including on administration, as alluded to in the Productivity Commission’s report, “Shifting the Dial”), how would the public feel to discover that just one round of this standalone exercise has been estimated by some in the sector to cost in excess of $100 million, much of it borne by universities themselves? And those costs are almost entirely administrative, with a high likelihood they are cross-subsidised by student fees.

To further complicate matters, the ARC has decided to follow in the footsteps of the UK’s Research Excellence Framework by conducting a new assessment exercise this year – the Engagement and Impact Assessment. For some time, the Australian government has been preoccupied with the benefits of university-generated research beyond academia, and this new exercise was announced as part of its 2015 National Innovation and Science Agenda (NISA). The authors of the NISA boldly stated that an exercise of this kind would ultimately lead to improvements in how universities engage with industry. No evidence for how these improvements would occur has ever been presented and, unsurprisingly, the Australian National Audit Office subsequently reproached the NISA in its independent audit for the absence of evidence to support individual measures articulated in the agenda.

In spite of this, and the many other concerns raised by universities about the burden associated with a new exercise, the Engagement and Impact Assessment is well and truly underway. During its development, in an effort to allay concerns about workload, the ARC committed to a largely data-focused exercise for the engagement element (which is assessed separately from impact), keeping data requirements to a minimum and aiming to reuse existing data as much as possible. While this is commendable, the problem is that data offers only a limited picture and the ARC wants to hear the full story. The end result is thus a complex exercise comprising four pieces of narrative for each discipline:

  • an explanation of the engagement activities undertaken during the reference period which underpin the data
  • a context statement for the engagement data (it is not entirely clear what the ARC wants to see here, but presumably a statement further describing data that would otherwise make no sense to evaluators if looked at in isolation)
  • an impact study
  • a detailed statement on approaches to impact (which will probably, by nature, end up repeating much of the engagement element).

There is little doubt that participating in this arduous exercise will demand significant time and money from universities (not to mention the ARC’s own costs), with absolutely no guarantee it will magically lead to the government’s desired end result: improved collaboration between universities, industry, and the end-users of research.

Until the hidden costs of assessment exercises in Australia are revealed and a thorough consideration of their general utility is undertaken, they will remain an international curiosity, with the dubious honour of serving as an example of assessment for the sake of assessment and irresponsible use of public monies.


Ksenia Sawczak is Director of Research Services at the University of Canberra, where she oversees the preparation of data submissions to government agencies. She is also responsible for policy development and closely follows developments in the Australian government’s research policy landscape.
