A key tenet of the Australian Government's National Innovation and Science Agenda is that public research funding should be awarded based on industry collaboration. A/Prof Michael Charles of Southern Cross University and our new Policy Whisperer Prof Robyn Keast ask: how can collaborative research excellence be measured across a variety of very different disciplines? And if it can be measured appropriately, how can academic culture, both within universities and at the grant-awarding level, be changed to facilitate this transition?
In his recent National Innovation and Science Agenda statement, Prime Minister Malcolm Turnbull suggested that Australian universities need to collaborate and engage more with industry if Australia is to realise its innovation potential: “Increasing collaboration between businesses, universities and the research sector is absolutely critical for our businesses to remain competitive”. A core message is that academics need to spend less time writing articles for peer-reviewed scientific journals and more time “engaging with and cooperating with” industry.
The Prime Minister went on to say that, in the future, public research funding should be awarded based on industry collaboration. He questioned the current measures of academic excellence, which are based on having work cited by other academics in academic publications. In principle, moving from a culture of writing for a small, closed audience to working closely with industry and other end-users of research makes sense. But the question must be asked: how can collaborative research excellence be measured across a variety of disciplines? And if it can be measured appropriately, how can the academic culture both within universities and at the grant-awarding level be changed to facilitate this transition?
At the centre of the call for enhanced innovation is the notion that ‘collaboration’ is always good. Yet there is strong research to suggest that a lot of collaboration is far from useful. In fact, it can be a drain on an organisation’s resources that doesn’t produce anything that anyone really values (Lee and Bozeman, 2005). At its worst, the resources expended on an unsuccessful collaboration could have been used to facilitate another, more productive relationship. In short, the opportunity costs associated with unsuccessful collaborations are potentially enormous, especially when it is estimated that between 50% and 80% of R&D collaborations fail (Faems, 2006; Kelly, Schaan and Joncas, 2002).
And it’s not just academics who sometimes don’t do collaboration very well. Schemes such as the Cooperative Research Centre (CRC) programme have, in many cases, resulted in world-class collaboration between industry and academia that has produced valuable new products or processes, or has informed the development of policy. But there are also examples where the results have been less than stellar. As a number of studies have shown, this failure of authentic academic/industry collaboration is particularly notable where industry partners don’t actively participate in the innovation process, or fail to supply key data or access to equipment because of commercial concerns – especially where research centre members are in competition with each other! In some cases, without a central administrative body to negotiate between the various collaborating parties, the transaction costs of maintaining the relationship are simply too high, and the relationship between industry and academia cannot be sustained without continued public funding (Sinnewe, Charles and Keast, 2016).
To move the innovation agenda forward, new ways to measure the quality of collaboration must be found before all academics who have engaged with industry receive a tick on their funding applications. One of the Agenda’s accompanying fact sheets states that “the Government will work with the higher education research sector, industry and other end-users of research to develop quantitative and qualitative measures of impact and engagement” (Australian Government, 2015). But the search for more appropriate metrics brings with it the concern that not all the outcomes of collaborative research are immediately recognisable, and many have long gestation periods. Different industry partners will have different timeframes for the application of knowledge generated from collaborative research. In fact, some of the most important outcomes might take several years, or even a decade or more, to deliver on their initial promise.
While some academics might feel that they are able to present tangible commercial outcomes for their research collaborations, largely because of the nature of their work, others will struggle to do so. The difference between STEM and humanities outputs provides a useful contrast here. Questions of fairness must therefore be raised, particularly since the former academics might simply be facilitating incremental change with short-term outcomes, while the latter are pursuing transformational changes with potential long-term benefits to industry and society. Furthermore, groundbreaking or paradigm-shifting research does not always come out of those disciplines that have been traditionally aligned with industry. Do we only want to see a research landscape where research dollars are poured into developing more cost-effective mining techniques at the expense of funding research into social issues such as domestic violence, indigenous health or special needs education, simply because the former researchers know how to measure and quantify their impacts better?
Finally, shifting academic culture from concentrating on peer-reviewed research outputs to authentic collaborative research with high-level industry and social impacts is not going to happen overnight. Collaboration takes time and focused effort, and draws on a set of skills and competencies that are not always present or rewarded in organisations (Keast and Mandell, 2014). Even if a way to measure collaborative research excellence fairly and equitably across a variety of disciplines and subject areas can be established (and this will not be easy), it will all come to naught if grant assessors, such as those working with the Australian Research Council, still privilege the number of highly ranked academic journal articles over successful industry-relevant research outcomes.
If the Prime Minister really wants collaboration to drive innovation, it must be understood that this won’t just happen, and that determining how to ensure it will happen won’t be easy. Government will also need to ensure the deeper inclusion of parties outside academia in the decision-making processes when it comes to funding research.
Australian Government. 2015. Measuring Impact and Engagement of University Research.
Lee, S. and Bozeman, B. 2005. The Impact of Research Collaboration on Scientific Productivity. Social Studies of Science 35(5): 673-702.
Faems, D. 2006. Collaboration for Innovation: Processes of Governance and Learning in R&D Alliances. PhD thesis, Katholieke Universiteit Leuven, Belgium.
Keast, R.L. and Mandell, M. 2014. The Collaborative Push: Moving Beyond Rhetoric and Gaining Evidence. Journal of Management & Governance 18(1): 9-28.
Kelly, M., Schaan, J.-L. and Joncas, H. 2002. Managing Alliance Relationships: Key Challenges in the Early Stages of Collaboration. R&D Management 32(1): 11-22.
Sinnewe, E., Charles, M.B. and Keast, R. 2016. Australia’s Cooperative Research Centre Program: A Transaction Cost Theory Perspective. Research Policy 45(1): 195–204.
Posted by @MsSophieRae