Sponsored policy research: getting the right balance between academic quality and 'usefulness'

This post originally appeared on Professor Christina Boswell's personal blog, but we felt it was highly relevant in Australia, particularly given the recently announced review of the ARC's Cooperative Research Centre program. Christina is a Professor of Politics at the University of Edinburgh, where her research explores the use of knowledge in policymaking and politics.

I was recently reviewing a (non-UK) government-sponsored research centre, and was struck by the tension between two goals. On the one hand, the government funders were keen to ensure the centre had solid academic credentials and was carrying out work that was internationally recognised. On the other, they wanted the researchers to supply quite applied data and analysis to inform decision-making or debate, for example in the form of specific policy evaluations, briefings or even answers to parliamentary questions.

These two types of function are, of course, very difficult to combine. Academic research worthy of the name requires a readiness to critically scrutinise the concepts and assumptions employed by policymakers. It typically requires a far longer lead-in time, and may focus on describing phenomena in ways that are not obviously relevant to policy, or on developing generalisable claims that are not specific enough to guide decision-making on particular policy problems. Often, rather than supplying neat answers, such research raises more questions than it resolves. At a more practical level, in order to attract and retain good researchers, government-sponsored research bodies need to offer genuine possibilities for academic recognition through publication, tenured contracts and some degree of academic freedom. Good researchers won't stick around long if they are confined to producing briefing papers and internal reports.

By contrast, officials in government departments often require quite different types of analysis. Sometimes they are looking for descriptive data or forecasts (which can indeed tally with academic research); but often they require more specific evaluations, quite detailed information on 'good practice' elsewhere, or very precise and delimited types of information about target populations. These types of data or analysis are often best provided through commissioned studies or reports, rather than by a centre or unit attached to, or based in, the department.

My own research on in-house research units in interior ministries suggested that these units were largely there to signal the capacity of the department to make well-founded decisions. They were about meeting expectations of 'evidence-based policymaking' or 'Kompetenz', which implied the importance of demonstrating expertise. But most officials saw the work of the research unit as largely irrelevant to the knowledge requirements of the operational parts of the organisation. I simplify somewhat, but this was certainly the pattern in Germany, the UK and the European Commission. Indeed, there was almost a trade-off between the conditions that would produce academic quality and those designed to ensure 'usefulness'.

How do organisations cope with this trade-off? In the German Federal Office for Migration and Refugees, managers prioritised academic reputation – the Kompetenz component – over usefulness. It was of paramount importance to senior management to signal the epistemic authority of the Federal Office. The UK Home Office appeared to fluctuate between the two models, with cycles of prioritising 'blue skies' research followed by frustrated attempts to 'embed' researchers within operations.

Unfortunately, the centre I was reviewing didn't seem to be ticking either of these boxes. The research wasn't of high academic quality, partly because the researchers were so busy trying to second-guess what might be useful to policymakers. But neither did officials show much interest in its work, ignoring repeated requests to help define research projects – presumably because they had limited need for academic research, or had other experts they preferred to turn to. In this case, the attempt to meet two quite different goals appeared to have ended up satisfying neither.

The lesson from all this seems to be that government-funded research centres or units are not necessarily there primarily to address what the organisation identifies as research gaps. That type of 'management information' can be supplied through other means. Rather, such units exist to carry out more academic research, in order to lend authority to the organisation. At best, such research might have a longer-term enlightenment function, influencing how policy problems are framed. But that is only likely to happen if the researchers are given sufficient academic freedom.