The Rough Magic of Engagement Measurement

By Dr Tim Cahill and Professor Julian Meyrick

‘In God we trust. All others bring data,’ quipped the US statistician W. Edwards Deming. As he implied, measurement is an inherently conservative occupation. Units of appraisal have to be agreed in advance, while the aim of measuring something is usually to compare it with something that already exists.

Then there is the problem of the real-world relations to which numbers are supposed to correspond. As Australian universities pore over their ERA results, they may be wondering how an entire spectrum of research effort has been squeezed into an ordinal scale of 1 to 5.

The more qualitative a phenomenon, the more its correspondence with quantitative indicators breaks down. Here, measurement is at best rough magic. What makes it meaningful are the good-faith intentions behind the evaluative process. Many things elude exact computation. But the effort we make to count them brings a degree of insight, and sometimes important change.

It is for this reason that the new assessment of ‘research engagement’ currently under consideration by the ARC is to be welcomed, albeit with caveats.

The government’s proposal to dilute the link between university research funding and peer-reviewed publications has been greeted with initial gloom, especially by researchers in the Humanities, Arts and Social Sciences (HASS), whom some see as particularly dependent on this research output.

But actually HASS has no particular claim on that score. Based on the latest ERA data, these disciplines publish less than the medical and health sciences, biological sciences, information and computing sciences, and engineering. Three of the five smallest 2-digit disciplines as measured by publications are HASS.

A bigger problem is universities’ reliance on the three-year grant cycle, which is antithetical to the kind of long-term, enquiry-led research HASS is good at. Under our dual funding system, not only is around 50% of support delivered via large ARC and NHMRC grants, but these are also the principal inputs into the research block grant funding formulae – a legacy of the Dawkins reforms of the 1980s.

Meanwhile, income generated from working with public, private and not-for-profit sector partners, including income in kind, barely rates a mention.

These formulae hard-wire in certain academic practices, forcing researchers to focus on large grant-getting – which is predicated on journal articles to boost track records – at the expense of other activities. Are peer-reviewed academic articles the acme of research excellence?  The reality is surely more complicated. 

For HASS the fit is poor because its research is often not well suited to large grant-getting. While STEM disciplines can spend endlessly on research assistants and specialised equipment, the majority of HASS costs are for teaching relief and travel. HASS researchers face a system of incentives that does not meet their real needs. What they need is time and space to engage a diversity of problems in a plurality of ways.

To derive maximum value from publicly funded research requires two conditions. First, it must be accessible, linguistically and physically. Second, there must be a user capable of deriving value by applying it. In many cases, this may mean writing an academic journal article. But a monocular focus on producing articles aimed at other academics is rife with assumptions antithetical to unlocking research value, including the hazard of impenetrable language, and the profit motives of academic publishers who lock knowledge away behind steep paywalls.

Professor Cameron Neylon addressed just this problem in a recent speech for Open Access Week, when he argued:


Knowledge is not a public good.  It is a club good.  [It]… is excludable by the simple expedient of not telling anyone else.  Through communication and dissemination we make it less exclusive… but we can never entirely eliminate exclusion.  We can only ever invest in reducing it… [This] recentres the question of how best to invest limited resources in making things more open, more public and less exclusive.  Which audiences should we target?  How best to enable unexpected contributions?  How to maximize network benefits?


In the creative arts disciplines, home of the non-traditional output, these issues are well understood.  The engagement agenda is potentially a way of recognising a broad range of research equivalent activities, many of which have sizeable audiences and impact. Architecture and design are the obvious areas in which community outcomes do not typically correspond to countable outputs in a way that promotes maximum social benefit. But the cultural industries as a whole – museums, theatres, galleries, cinema and so on – are all based on the idea of broad public engagement.

A rebalancing of funding to provide more recognition of income derived from public, private and not-for-profit sectors, plus delivering a larger portion of funding through block grants, would go a long way to democratising university research.

But there is a caveat. For the engagement agenda to work, we must not use it solely as a synonym for commercialisation. It must be considered in the wider context of all the social, cultural, economic and environmental benefits it generates. This will be hard in a climate where the bottom line is increasingly the only line, and governments are focused on short-term economic factors to the exclusion of almost everything else.

Yet this ought not to hold up reconsideration of the overall aims of a research system that has for too long seen HASS measure itself in ways that do not reflect its original contributions.

Engagement is potentially not only an expansive move but a progressive one. The challenge is how to ensure it is not used as a ‘thin’ indicator or as a proxy for business savvy.


Dr Tim Cahill is Chief Data Scientist with The Conversation, Director of Research Strategies Australia and an Adjunct Research Fellow at Swinburne University of Technology. Professor Julian Meyrick is Strategic Professor of Creative Arts at Flinders University and Artistic Counsel, State Theatre Company of South Australia.