
We Need to Talk about Impact: Why Social Policy Academics Need to Engage with the UK's Research Impact Agenda

Published online by Cambridge University Press: 16 May 2016

KATHERINE E. SMITH
Affiliation:
Global Public Health Unit, School of Social & Political Science, University of Edinburgh, Edinburgh, United Kingdom. Email: Katherine.smith@ed.ac.uk
ELLEN STEWART
Affiliation:
Centre for Population Health Sciences, Usher Institute, Edinburgh Medical School: Molecular, Genetic & Population Health Sciences, University of Edinburgh. Email: E.Stewart@ed.ac.uk

Abstract

Of all the social sciences, social policy is one of the most obviously policy-orientated. One might, therefore, expect a research and funding agenda which prioritises and rewards policy relevance to garner an enthusiastic response among social policy scholars. Yet, the social policy response to the way in which major funders and the Research Excellence Framework (REF) are now prioritising ‘impact’ has been remarkably muted. Elsewhere in the social sciences, ‘research impact’ is being widely debated and a wealth of concerns about the way in which this agenda is being pursued are being articulated. Here, we argue there is an urgent need for social policy academics to join this debate. First, we employ interviews with academics involved in health inequalities research, undertaken between 2004 and 2015, to explore perceptions, and experiences, of the ‘impact agenda’ (an analysis which is informed by a review of guidelines for assessing ‘impact’ and relevant academic literature). Next, we analyse high- and low-scoring REF2014 impact case studies to assess whether these concerns appear justified. We conclude by outlining how social policy expertise might usefully contribute to efforts to encourage, measure and reward research ‘impact’.

Type: Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Cambridge University Press 2016

Introduction

Social policy is a diverse, inchoate discipline, but consistent features of recent definitions include an applied, policy-orientated focus, a desire to make a positive impact on society and a commitment to engaging beyond academia (e.g. Alcock, 2012; Social Policy Association, 2009). This is not merely a matter of self-perception; recent research on the external impact of the social sciences identifies social policy researchers as among the most ‘externally mentioned’ (Bastow et al., 2014: 56). The increasing measurement and reward of ‘research impact’ could therefore be seen as a welcome opportunity for social policy scholars. However, while social policy often seeks to change the world around us, this can involve critique and opposition, which sit awkwardly alongside current formulations of research impact. Yet, in contrast to the heated debates and analysis on the ‘impact agenda’ evident in the journals of other disciplines, notably geography (e.g. Pain et al., 2011; Slater, 2012), a literature search for ‘research impact’ and ‘social policy’ yields only a few passing mentions (a recent exception is a debate piece by Warren and Garthwaite, 2015). We argue that a discipline explicitly founded on the pursuit of real-world improvements should at least be debating an ostensible shift towards recognition of our preferred mode of working. This engagement need not be uncritical; we simply need to talk about impact.

The emphasis on research impact has been increasing steadily in the UK since the late 1990s (Cabinet Office, 1999), intensifying in a context of growing frustration that, despite apparently mutual political and academic interest in strengthening the links between research and policy, the evidence-base of many policies remains limited (e.g. Katikireddi et al., 2011; Naughton, 2005). Research impact now forms a significant section of grant application processes for major UK funding councils, while the recent national appraisal of university research, REF2014, awarded 20 per cent of overall scores to institutions on the basis of impact case studies. In other words, obtaining core research funding (largely distributed on the basis of REF scores) and project-specific research funding in the UK are now both strongly dependent on researchers’ abilities to respond adequately to questions about the broader (non-academic) value of their work.

Although this article focuses primarily on the situation in the UK, the growing interest in research impact is evident internationally. REF2014 was closely informed by an approach for assessing ‘impact’ that was trialled (though never implemented) in Australia (Penfield et al., 2014). Interest in research utilisation is also particularly high in Canada (e.g. Lomas, 2000), where major funders of health research place a strong emphasis on research utilisation (Tetroe, 2007) and leading universities employ ‘knowledge brokers’ to promote better links between research and the wider world (Phipps, 2011). Similarly, in the Netherlands, there has been recent investment in ‘boundary organisations’, which sit between research and policy and are intended to facilitate research use (e.g. Bekker et al., 2010; Van Egmond et al., 2011). Meanwhile, in the USA, an increasing emphasis is being placed on assessing ‘societal impacts’ in peer reviews of grant applications (Holbrook and Frodeman, 2011). In this context, the UK's efforts to formally incentivise, monitor and reward research impact, and the consequences of these efforts for academics and the work they produce, are of international relevance and interest.

Defining the UK's ‘research impact’ agenda is difficult, but the following features are common in documentary guidance from REF2014 and the Research Councils (AHRC, 2014; ESRC, 2014a, 2014b; MRC, 2013, 2014; REF, 2011, 2012; Research Councils UK, 2011, 2014): (i) a consensus that researchers should be able to articulate the impact of their research beyond academia; (ii) an assumption (sometimes implicit, sometimes explicit) that this impact will/should be positive (although ‘for whom’ appears to be an open question); and (iii) a belief that the distribution of research funding should (at least to some extent) reflect researchers’ ability to achieve ‘impact’. These changes have been welcomed by some (e.g. London School of Economics Public Policy Group, 2011; Pain et al., 2011), while others have raised concerns about the extent to which they are constraining and politicising research in the interests of short-term policy goals (e.g. Back, 2015; Hammersley, 2014; Slater, 2012). Those who appear supportive of the ‘impact agenda’ suggest that, while the current organisational structures may be flawed, the overarching agenda can enable the co-production of knowledge with local communities (e.g. Pain et al., 2011). Others, however, argue that the emphasis on ‘policy relevance’ constrains the ability of academics to undertake critical, theoretical and longer-term research (e.g. Slater, 2012).

It would be difficult to comprehensively assess how the UK's ‘impact agenda’ is influencing the day-to-day activities, and research outputs, of the numerous, diverse academics working across social policy in the UK. Therefore, less ambitiously, this paper considers how researchers working on one much-debated policy issue (health inequalities) reported feeling about, and responding to, the ‘impact agenda’. The interviews on which we draw were conducted between 2004 and 2015 (i.e. the period over which the current research ‘impact agenda’ has been evolving) and therefore provide insights into how academic practices appear to be changing in response to ‘impact’ incentives. Where relevant, this analysis is supplemented with reference to the REF2014 panel criteria and working methods (REF, 2012) and broader academic literature. We then briefly review 25 high- and low-scoring impact case studies for ‘Social Work and Social Policy’ in REF2014, with a view to assessing whether the concerns raised here appear warranted. Finally, we make a case for the application of social policy expertise to enhance approaches to achieving, measuring and rewarding research impact.

Methods

The case study employed in this paper involved 147 semi-structured interviews with individuals involved in research, policy and advocacy around health inequalities in the UK, undertaken between 2004 and 2015. This article focuses largely on the 52 interviews with academic researchers, around a quarter of whom were, or had recently been, based in social policy units and most of whom reported being entered into Research Assessment Exercise (RAE) 2008 Unit of Assessment 40 (social work and social policy & administration) and/or REF2014 Unit of Assessment 22 (social work and social policy). For the most part, the data we employ in this paper are taken from the more recent batch of interviews (undertaken between 2010 and 2015), since these interviews incorporated explicit discussion of the ‘impact agenda’, but we use a small number of extracts from earlier interviews (as signalled in the text) to comment on changes and similarities over time. The majority of interviews took place in a private room where, for the duration of the interview, only the interviewee and the researcher were present. A themed interview schedule was employed which focused on the relationship between health inequalities evidence and policy and included several questions on knowledge exchange and research impact (these questions were more specific and detailed in later interviews, reflecting the changing academic context). The interviews lasted between 40 and 150 minutes. The research was conducted in line with the University of Edinburgh's ethical guidelines. All interviews were digitally recorded and transcribed verbatim, before being anonymised. The lead researcher then thematically coded the transcripts, initially using the qualitative data analysis programme Atlas.ti and later NVivo, with a coding framework that was developed iteratively, via analysis and re-analysis of the transcripts. This process helped identify a range of perspectives on incentives, frameworks and guidance relating to research impact and knowledge exchange.

This case study is supported by a desk-based search (undertaken in summer 2013 and updated in October 2015) of key sources of guidance for achieving and assessing research impact (AHRC, 2014; ESRC, 2014a, 2014b; MRC, 2013, 2014; REF, 2011, 2012; Research Councils UK, 2011, 2014). We also assessed 25 ‘high’ and ‘low’ scoring impact case studies from REF2014 Unit of Assessment 22 (‘Social Work and Social Policy’) to consider whether these examples support the concerns identified in the first part of the paper. As impact case studies are collectively given an institutional score, we selected case studies from institutions where 100 per cent of the case studies were graded 4* (‘high scoring’) or where 100 per cent were graded 2* or less (‘low scoring’), using the results posted on the official REF2014 site (http://results.ref.ac.uk/Results/ByUoa/22/Impact). The lead author identified the relevant case studies, constructed a table summarising the concerns identified by interviewees (see online supplementary material), and completed an initial analysis of the case studies using these questions. The second author then checked the initial selection and replicated the analysis on a random selection of 12 case studies. Given the high degree of similarity between these analyses, we did not deem it necessary to cross-check the other 13 case studies.

Findings: academic perspectives on the impact agenda

Although all academic interviewees expressed some desire to influence policy or practice, their perceptions of the ‘impact agenda’ were not wholly positive, instead reflecting the mixed response evident in the published literature outlined above. Perhaps unsurprisingly, those academics who already worked closely with policymakers were most supportive of ‘impact’, with many suggesting that researchers working on applied topics like health inequalities had a responsibility to engage with policymakers. In the earlier batch of interviews, this group's concerns centred on a wariness as to whether the emerging impact agenda would sufficiently reward and incentivise this kind of policy collaboration. There was, however, a notable change in the views presented in more recent interviews (2010–2015), with most academic interviewees indicating that they did now feel supported in policy-orientated work and particularly appreciated being able to access resources for outward-facing work. In this sense, the UK's research ‘impact agenda’ would seem to be succeeding: indeed, several of the academics interviewed more recently described working in institutional settings which would no longer accommodate academics who were unwilling to engage with audiences beyond academia.

Nonetheless, although most of our interviewees saw benefits in the UK's ‘impact agenda’, most also identified concerns. In the following sections, we consider the ten most common concerns that interviewees expressed, grouped under three broad themes: (1) ‘bad’ impact will be rewarded/encouraged; (2) impact is difficult to trace and reward; and (3) the ‘impact agenda’ is constraining the kind of work academics in the UK are able to undertake. Where relevant, we also refer to the various guidance documents we assessed and to other publications concerning research impact.

(1) Will the research impact agenda reward ‘bad’ impact?¹

The concerns in this section reflect observations that there are occasions on which the use of research may, for various reasons, be undesirable (Greenhalgh et al., 2004). Clearly the question of ‘good’ or ‘bad’ impact is highly subjective, and impacts may be perceived differently over time (Penfield et al., 2014), but acknowledging the potential for research findings to have harmful consequences raises questions about the kinds of ‘impact’ we want to reward.

(i) Concerns about the quality and influence of single studies

One concern that has been raised repeatedly in discussions about promoting the dissemination of research beyond academia relates to the quality, validity and generalisability of that research. An early critique of official commitments to evidence-based policy in the UK warned that, generally, ‘the results of a single study are not worth disseminating’ (Black, 2001: 278) and it seemed clear in interviews that this is something public health researchers are very concerned about:

Academic: What really annoys me is seeing. . . single, trivial studies being disseminated really widely through the media and. . . the findings are meaningless, the study's small, it's a case-controlled study. . . ripe with bias or whatever [. . .] it should never have been reported, the authors shouldn't have. . . run to the newspapers with their interviews.

Yet, the guidance provided by the UK Research Councils (ESRC, 2014a, 2014b; AHRC, 2014; MRC, 2013, 2014) and the REF2014 assessment criteria (REF, 2011, 2012) all appear to encourage academics to work to promote the impact of single studies (or, in the case of the REF, collections of studies by the same group of researchers). Several interviewees suggested that these criteria might encourage academics to promote the findings of studies regardless of their quality (a concern that was also expressed by civil servants in the broader interview data). For example:

Academic: I think the whole REF [impact] thing [is] going to give rise to less careful work [. . .] and I think that's really problematic. [. . .] I just feel that science has lost its place somewhere, science and truth and objectivity.

In theory, the research councils give applicants for funding the option to say that they do not think their research findings will (or should) result in research impact (a point most of the research funders interviewed in this study were keen to stress). Yet, sociological studies of researchers consistently demonstrate that, when applying for funding, they tend to do everything possible to maximise their chances of success (Knorr-Cetina, 1981). In other words, if a funding process appears to reward commitments to knowledge dissemination strategies and ‘pathways to impact’, there is inevitably a temptation to articulate these kinds of commitments in a grant application, regardless of how appropriate a researcher may perceive them to be. Moreover, as the following academic reflected, researchers are often not best placed to judge the potential utility of their own research:

Academic: Everything you know about researchers and about the research process is that, actually, they're the last people you would trust to give an objective overview of what matters in their own research. So that's why I always slightly cringe whenever I have to fill in the knowledge translation bit on any grant applications, because I think well, [. . .] is it going to be knowledge that is worth translating?

To some extent, the peer-review process for research grant applications ought to guard against researchers achieving credit for the potential impact of poor quality research, and the ESRC boldly (if questionably) states that ‘research must be of the highest quality: you can't have impact without excellence’ (ESRC, 2014b: 1). REF2014 guidance for assessing impact sets a threshold of 2* quality for impact case studies (REF, 2011), which means that, to qualify as a potential ‘impact’ case study, research outputs had, in theory, to demonstrate a thorough and professional application of appropriate research design, investigation and analysis, plus the potential for providing valuable, incremental advances in knowledge in the field. However, as the quotations above indicate, several interviewees were not convinced that the quality threshold for ‘impact’ was sufficiently high. In this context, it is worth noting that an evaluation of the REF2014 impact assessment, commissioned by the UK higher education bodies and undertaken by RAND (Manville et al., 2014), identified concerns among some assessors about their ability to assess whether the research underlying impact case studies met the 2* threshold.

(ii) Rewarding misinterpretation?

Six health inequalities researchers recounted being bewildered by journalistic, political and policy interpretations of their work. For example:

Academic: [Another researcher] and I wrote a paper on why [particular social group] have more [of a particular disease risk factor] and. . . in order to satisfy our referees [. . .] we put a little sentence in the final paragraph saying, ‘well, of course. . . what we haven't ruled out is genetic causes,’ and so the media picked this up: ‘[Particular group is] born to die’, ‘heart disease [in social group] due to genetics. . .’ Unbelievable, un-be-lievable!

Similarly, one interviewee described feeling that an advocacy group had deliberately reframed the implications of their research, despite their protestations (see also Dagnino, 2007). This poses a potential dilemma, which two interviewees described facing in the context of developing potential impact case studies for REF2014: should researchers be rewarded (or seek reward) for ‘impact’ if they feel their work has been misinterpreted (or interpreted in ways other than they intended)? The current REF impact guidance provides very little advice on this, appearing to assume that research that achieves impact necessarily does so in a manner that the original researchers intended.

(iii) Rewarding symbolic research use over more substantive but complex contributions

The ‘impact agenda’ focuses almost exclusively on ‘instrumental’ research use (i.e. use to solve problems, or achieve change, in a direct sense), with some acknowledgement of ‘conceptual’ use (changing the ways in which an issue is thought about). This approach ignores alternative uses of research that may be less desirable, such as ‘symbolic’ use, where research is used to legitimise existing decisions/positions (Boswell, 2009; Weiss, 1979). This may result in research being cited to support particular policy decisions when the research in question has not, in reality, directly informed those decisions. For example, after the election of a New Labour government in 1997, a small number of very senior researchers were involved in the government-commissioned inquiry into health inequalities, the Acheson Report (Acheson, 1998), which reviewed available evidence to make a series of evidence-informed policy recommendations. The Acheson Report, which was strongly informed by the research outputs of the researchers involved, was subsequently cited in a wide range of policy documents across the UK. Taken at face value, this kind of high-profile ‘impact paper trail’ could look like a very convincing example of research impact. Yet a politician who held a ministerial post with responsibility for health inequalities in this period reflected:

If I'm truthful, [the Acheson Report]. . . [pause] had much more impact on other people than it ever did on me. [. . .] I mean most of it was sort of confirmation. We'd have done most of what we did whether Acheson had done his Report or not but we'd said that we would have a new Black Report and we did. [. . .] I mean if they [academic researchers] help back up what we were doing, fine but. . . I don't think that they were a source of policy.

The above quotation suggests that the apparently ‘instrumental’ impact documented via multiple policy citations was, in reality, more ‘symbolic’. The broader interview data provide yet another perspective, pointing to a more gradual influence of research-informed ideas about health inequalities on Labour politicians’ thinking whilst the Party was in opposition (see Smith, 2013, for more detail). The ways in which interviewees described these exchanges evoke Weiss’ observation that it ‘is not usually a single finding or the recommendation derived from a single study that is adopted in executive or legislative action’ but rather ‘generalizations and ideas from a number of studies [that] come into currency indirectly’ (Weiss, 1982: 622); what Weiss terms the ‘enlightenment’ function of research. From this perspective, identifying specific studies that influenced Labour ministers’ thinking about health inequalities would be extremely difficult, but it seems clear that the ‘impact paper trail’ provided by the Acheson Report constitutes a narrow, simplified and chronologically challenged version of the way in which a much larger body of research informed political thinking.

(iv) Ethical dimensions of research impact

Three researchers noted the potential for their research to be interpreted and used in ways which could be deemed unethical. One example concerned a researcher who noted that their research was being used by alcohol companies in ways they had not anticipated; another concerned a researcher who felt their findings were being used by policymakers and journalists to reinforce existing prejudices towards a vulnerable population group. In a more generic way, a third interviewee noted:

Academic: There clearly is such a thing as undesirable impact, but there is nothing institutionally in the system to divide between desirable and undesirable impact – there is just ‘impact’.

This raises the possibility that researchers or institutions could (at least in theory) be rewarded for research impact that they (or others) might consider to be in some sense unethical. Our review of available guidance and literature identified almost no consideration of the ethics of impact. The only two exceptions we are aware of are a blog piece on the ‘philosophy of impact’ (Briggle et al., 2015) and a seminar discussion led by Rothman (2015), which encouraged academics to consider that there may be circumstances in which it is not ethical to promote research findings beyond academia. Yet, the presumption in most of the available guidance seems to be that research impact is a ‘good’ in and of itself; no questions are raised about the ethics of who benefits and how or whether, crucially, the impacts might also entail potential harms (to individuals, groups or the environment).

(2) Tracing and rewarding research impact

The next three concerns identified within the interview data reflect the difficulties involved in tracing, and attributing rewards for, research impact and were often articulated by interviewees who were otherwise supportive of the ‘impact agenda’.

(v) Attributing reward for the impact of research syntheses

One popular response to concerns about the potential undesirability of promoting single studies beyond academia (discussed earlier), particularly in health research, has been to call for efforts to promote syntheses of evidence, such as systematic reviews (e.g. Black, 2001; Lavis et al., 2004). Several examples of systematic reviews being cited in policy debates were provided by interviewees and, in these cases, interviewees seemed to suggest that credit for impact ought to be attributed to the author(s) of the review(s) (a finding which seems in line with REF2014 guidance but which contrasts somewhat with the findings of Manville et al.’s (2014) evaluation, in which some participants emphasised the importance of acknowledging the key studies cited within reviews). However, none of our interviewees said that they had yet developed an impact case study around a systematic review, and one interviewee reported being unable to persuade a senior colleague involved in collating his (social policy) department's REF submission (for the social work and social policy panel) that a systematic review constituted a valid piece of research. Instead, the data consistently highlight a concern (discussed above) that the current impact architecture is encouraging researchers to promote the findings of single studies, rather than contributing to improving the influence of available evidence more broadly.

(vi) Distinguishing facts from fables

Research that influences policy via Weiss’ (1979, 1982) ‘enlightenment’ function can potentially lead to significant policy shifts, but it is also extremely difficult to attribute credit, since a multitude of studies by different researchers may be involved and it can be hard to judge the impact of any one study cited in debates. Several interviewees, including the speaker below, openly reflected on this difficulty:

Academic: Our research has been cited in the House of Lords during the debate on the [left blank for anonymity] Bill so then you can go and get Hansard [a record of UK Parliamentary debates] and you can say, ‘Well, it was discussed’. You can't show that that changed the way people voted or whether or not that had an influence on that Bill being passed or not. So I think you can only go as far as Hansard, you can't go further. [. . .] And if [. . .] anybody's research is written up in a newspaper, who knows who's read it, and who knows what impact it's had?

Yet, both the REF impact case studies and grant applications require academics to make relatively bold statements about the impact of their research:

Academic: Has [the impact agenda] had an impact? It's made people think about it more, it's made people lie more convincingly on grant applications maybe. We all do it to a degree, you know, a work of fiction is what you're going to do. . .

Moreover, as the example of the Acheson Report, described above, illustrates, official policy documents may present a simplified account of the use of research. The consequence of this, as several interviewees noted, is that it may end up being easier to demonstrate research use that has, in reality, been symbolic than to demonstrate more meaningful kinds of influence; a tension depicted in Figure 1.

Figure 1. ‘Impact ladder’ – significance versus demonstrability

(vii) A question of timing

An additional problem with assessing research impact relates to the long time periods that can be required for research to achieve policy/public traction; an issue raised repeatedly in debates about ‘impact’ (e.g. Pain et al., 2011; Slater, 2012). Reflecting some of these concerns, REF2014 allowed impact case studies to be based on research outputs published 15 to 20 years previously (precise time-boundaries varied by unit of assessment). This nonetheless created an artificial cut-off point for researchers seeking to demonstrate long-term, but potentially significant, impact. In the case of health inequalities, the government-commissioned Black Report (Black et al., 1980) remains one of the most widely cited reviews of health inequalities research (within the UK and abroad) but (for political reasons) it was not regularly mentioned in policy until the election of New Labour in 1997. Using the REF2014 impact guidance, references to the Black Report in early New Labour documents might just have occurred within the required time-parameters, but many of the key studies upon which the Black Report drew would have fallen outside this cut-off period. This is important because, if the time-period is too restrictive, the REF impact system runs the risk of discouraging research which addresses longer-term social and policy concerns, as the following interviewee suggested:

Academic: I think. . . [there should be] some kind of detachment from. . . the policy agenda [instead] and. . . having independent research. . . taken as a long-term investment, rather than a short-term solution to particular policy questions.

When (as several interviewees pointed out) we consider that some of the most serious policy issues currently facing the world are long-term in nature (e.g. climate change, food and water security), this seems particularly problematic.

(3) The practical implications of the research impact agenda

The final three concerns relate to the practical implications of the UK's impact agenda (the first was identified by interviewees who were generally supportive of the ‘impact agenda’; the other two were identified by more critical voices).

(viii) The risks of overloading policy audiences

Although there is no clear consensus as to the best combination of knowledge-exchange activities for achieving research impact, one approach that is consistently supported by empirical research is to develop ongoing relationships with potential research users, involving them from the start of projects and maintaining interactions throughout (e.g. Contandriopoulos et al., 2010; Innvær et al., 2002; Greenhalgh et al., 2004), including through ‘co-production’. For social policy researchers, policymakers are likely to be the key group of potential research ‘users’. Yet, policymakers regularly report struggling to process unmanageable levels of information (Institute for Government, 2011) and one recent assessment of policymakers’ views of efforts to promote the use of research in policy suggests that they value mechanisms for synthesising research which limit their time-input (Stewart and Smith, 2015). This tension was articulated by several interviewees. For example:

Academic: There is a total inconsistency here. We go around, and have done for years now, saying there's too much evidence and that we should be filtering it and systematically reviewing it. And at the same time we're saying here [with the impact agenda], what we should also be doing is finding more effective ways of pushing even more of the stuff at policymakers.

Yet, much of the guidance on achieving research impact provided by major UK research funders encourages academics to try to increase ‘the flow’ of research beyond academia (e.g. AHRC, 2015) and to try to involve potential research-users in projects at all stages of research (e.g. British Academy, 2008; ESRC, 2014c).

(ix) Reifying traditional ‘elites’

Many of the interviewees (including those who worked in policy) suggested that, when it comes to providing policy advice, a small number of senior academics tend to occupy privileged positions. It seems inevitable that there will be greater opportunities for achieving research impact for those who occupy one of these privileged positions, so it is important to acknowledge, from an equity perspective, that all such individuals cited within the interview data were senior, white academics and most were male. Health inequalities research is not, it seems, alone in this respect; Les Back argues that the ‘impact agenda’ is encouraging ‘an arrogant, self-crediting, boastful and narrow’ form of sociology which positions ‘big research stars’ as ‘impact super heroes’ (Back, 2015: 1).

Moreover, one of the most common complaints made by academic interviewees about the ‘impact agenda’ related to the time-consuming nature of impact-related activities; work that many said was viewed as ‘discretionary’ and was not accounted for in workload allocations. Several interviewees gave examples of opportunities for knowledge exchange and impact that had involved significant travelling and/or attending evening/weekend events, with obvious implications for personal/family life. Given what is known about the persistent gender divide in caring responsibilities (Wheatley and Wu, 2014) and the culture of long working hours that already pervades UK academia (Sang et al., 2015), an under-resourced demand for researchers to achieve research impact has the potential to exacerbate and reinforce career inequities. It may also be, as several interviewees suggested, rather riskier for early-career academics to engage in the kinds of activities required to achieve impact.

Focusing on the target audiences for impact, several interviewees indicated that the REF impact guidance (which, in theory, rewards examples of impact in a wide range of non-academic settings) was being interpreted in ways which placed a greater emphasis on impact in ‘elite’ policy institutions as compared to non-governmental organisations (NGOs) or local policy or practice. For example:

Academic: I think that probably the incentives favour working with high profile, well recognised, powerful actors in the policy domain. [. . .] So I think there's a bit of an implicit pecking order. . . that if you work with the World Bank or you work with the Treasury or you work with the Department of Health, that probably is seen as more worthy of recognition than working with some third sector organisations. Which I think is probably problematic, not least because working with third sector organisations may actually end up being more impactful. . .

In different ways, these concerns all suggest that the ‘impact agenda’ is reinforcing the distance between traditional (academic and policy) ‘elites’ and others.

(x) The credibility-clarity paradox and the squeeze on critical and ‘blue skies’ thinking

The final concern identified by interviewees relates to two of the most common recommendations for improving the use of research in policy to emerge from reviews of knowledge-transfer studies. These are that researchers should (i) develop ongoing, trusting relationships with potential research users and (ii) provide timely and clear summaries of the policy implications of research (e.g. Innvær et al., 2002; Greenhalgh et al., 2004). There is little, if any, acknowledgement within these reviews that these two recommendations might be in tension with one another. Yet, if research results suggest that current policy approaches are significantly flawed, researchers may find it extremely difficult to maintain strong relationships with potential policy users whilst clearly articulating what they believe to be the implications of their research for policy. In other words, as around a third of the academic interviewees suggested, close relationships with policymakers may compromise (rather than aid) researchers’ ability to independently assess and critically analyse policy:

Academic: One of the problems is that if you're pushed to do more and more policy relevant research and to align what you do ever more closely to needs of policymakers and practitioners, I think what's never really discussed. . . is the fact that what you end up doing is. . . potentially losing some of your independence. Even if you. . . try to be an independent researcher. . . it can be very difficult not to make compromises.

This point was already being made by interviewees, such as the above, in the first batch of interviews (i.e. prior to the REF impact system) but remained evident in the more recent interviews – for example:

Academic: It's important that academics maintain that independence. And can one meaningfully maintain independence if you do have a close relationship with policy actors that take a particular ideological position [or] have a particular interest in a particular type of policy? I think that potentially is more difficult. [. . .] I think in addition to promoting the impact agenda, universities need to be a bit wary of that.

The interview data also suggest that academics who wanted to remain research-active did not, in the context of the ‘impact agenda’, feel able to work entirely independently of policymakers (see Warren and Garthwaite, 2015). Instead, the most common response to this dilemma was for interviewees to describe making their policy recommendations deliberately vague, to ensure they were not perceived as being too critical of policy audiences. This creates a paradox: whilst academics may well be aware that it is important to be clear about the policy implications of their research, they may also feel that, in being clear about those implications, their relationships with policymakers (and, therefore, their ability to achieve policy influence) are under threat (see Smith, 2013).

Reflecting this, around one-third of the academics interviewed suggested that the UK's ‘impact agenda’ could inadvertently (or even, a few suggested, deliberately) encourage researchers to pursue work that is sympathetic to existing, short-term policy directions, on the basis that such research is more likely to have a traceable policy impact; to, as one interviewee put it, ‘bend with the wind in order to get research cited’. Indeed, several academics openly reflected that the current emphasis on producing policy-relevant research was leading them to limit the critical aspects of their work, at least in non-academic contexts (a similar concern regarding the potential threat to ‘blue skies’ work is evident in Manville et al.’s (2014) evaluation). These findings reflect broader concerns that efforts to achieve evidence-based policy may, in fact, do more to promote policy-informed research (e.g. Davey Smith et al., 2001).

Findings: assessing the REF2014 impact case studies in social work and social policy

This section considers the highest and lowest scoring REF2014 impact case studies with a view to assessing the concerns outlined above. There are inevitable limitations to the insights that an analysis of these case studies can provide. For a start, REF case studies were written in response to guidance focusing on instrumental and conceptual research use (Weiss, 1979) and the case studies, unsurprisingly, reflect this focus. Moreover, the case studies are designed to present the strongest possible account of research impact to REF review panels, in order to be rewarded with maximum credit, and need to be read in this context. A more in-depth analysis involving interviews with the researchers and non-academics involved would be needed to explore whether any of the examples provided might more accurately be categorised as examples of symbolic research use (Weiss, 1979). We also cannot know from the case studies submitted whether any institutions would have submitted alternative case studies had time-period requirements been less restrictive.

Nonetheless, the analysis of case studies, summarised in Tables S1 and S2 (see supplementary material), provides some sense of which of the various concerns outlined above might be more important in practice. It suggests that case studies describing the influence of bodies of research by a group (albeit small groups led by one or two senior academics) performed better than case studies focusing on the influence of single academics, and that synthesising research was recognised as an important research contribution. Nonetheless, most case studies focused primarily on claiming impact for original research (and, in all cases, the synthesising work cited appears only to have been used to support primary research). This suggests more could be done to consider how REF2020 might encourage and reward specific efforts to synthesise large bodies of research evidence for non-academic audiences.

Although there were no obvious examples of case studies where research had been employed in ways that appeared to be in tension with the aspirations of the original research, there were several examples where only aspects of cited research seemed to have been influential. Further research is needed to explore researchers’ perceptions of the accuracy of the level of influence claimed in impact case studies.

Both high- and low-scoring case studies include examples of research that was critical of existing government policy. However, examples of work that might be labelled ‘critical’ in Horkheimer's (1982) more transformative sense of the term were evident only in the low-scoring case studies, potentially supporting interviewees’ claims that it is likely to be harder to demonstrate the impact of work aimed at achieving more substantial kinds of social or policy change (see Figure 1). This implies a need to reconsider the kind of evidence required to demonstrate more significant kinds of research impact.

None of the case studies discussed the ethical aspects of the impact described; rather, reflecting the available guidance, they appeared to presume that impact was necessarily positive, underlining the need for a discussion about the ‘ethics of impact’.

Finally, the high- and low-scoring case studies suggest that the REF2014 approach to impact is, in multiple ways, reifying traditional academic and policy ‘elites’, raising concerns about equity. In policy terms, most of the higher scoring impact case studies describe achieving national policy change, with far fewer examples of changes to third-sector policies, local practices or public/community circumstances (mirroring Greenhalgh and Fahy's (2015) analysis of submissions to the Public Health, Health Services Research, and Primary Care sub-panel). This suggests that academics responsible for submitting impact case studies to the ‘Social Work and Social Policy’ unit of assessment in REF2014 believed (like several of our interviewees) that institutions were more likely to be able to demonstrate, and/or be rewarded for, influencing high-level government policy than for achieving other kinds of (local level or advocacy-orientated) impact. It is also clear that the institutional score for impact case studies tended to mirror the score for research outputs. Looking at the profile of the lead academics involved in the various case studies raises further concerns about the reification of traditional ‘elites’: visible indicators of ethnicity suggest the case studies involved academics who are almost exclusively white. The mix is better in terms of gender but still not equal, with eight male leads compared to only five female leads in the high-scoring case studies. In this context, it seems worth noting that, while an institution's decisions about which staff to submit to REF2014 in terms of research outputs were scrutinised by an Equality and Diversity Advisory Panel, the impact case studies do not yet appear to have been similarly analysed. Finally, virtually all of the submissions are dominated by senior (Chair-level) academics, suggesting it may be harder for early-career academics to achieve research impact. This is a complex issue, but the findings suggest it is worth exploring in more depth why these patterns seem to be arising and how they might be addressed (e.g. it might be worth considering a lower threshold for demonstrating impact for early-career researchers).

Concluding discussion

Academics working in the UK are part of the design and implementation of the evolving ‘impact’ system and, particularly in light of the international interest in such a system, it is incumbent upon us to consider how the current approach is functioning; how it shapes the work we choose (or feel able) to undertake, and how we might either resist or improve this system. It may be that ‘impact’ is viewed by some as simply an additional dimension to the academic ‘game’, opening up additional avenues for career progression, but, as Les Back (2015) argues for sociology, if we are considering ‘impact’ in job appointments and promotions, then it is shaping our discipline, and our ambitions, in fundamental ways.

The interview data presented in this paper suggest that even those academics who are supportive of the idea that research impact should be incentivised and rewarded have concerns about the way in which the current system is operating in practice. Our analysis of high- and low-scoring impact case studies in REF2014 reinforces at least some of these concerns. For some, these findings may simply add further weight to a belief that we need to resist, critique and challenge the whole idea of the ‘impact agenda’ or, at the very least, its current institutional architecture (see Kearnes and Wienroth, 2011, for a discussion of resistance to hierarchically imposed understandings of research impact within engineering and the physical sciences). Others may wish to consider how the wealth of empirical and theoretical work on policymaking and research practices available within the social and political sciences might be used to better inform efforts to improve research–policy relations (since, somewhat paradoxically, the current approach seems remarkably uninformed by available evidence).

Supplementary material

To view supplementary material for this article (Tables S1–S2), please visit http://dx.doi.org/10.1017/S0047279416000283

Acknowledgements

We are grateful to Ben Baumberg, Martyn Pickersgill, Sotiria Grek and Michael Kattirtzi for comments on an earlier version of this paper, and to the panel members of the symposium that discussed an earlier version at the 2015 Social Policy Association annual conference. Katherine Smith was funded by an ESRC Future Research Leaders award during the analysis and write-up (ES/K001728/1) and Ellen Stewart is currently funded by the Scottish Government's Chief Scientist Office (PDF/13/11).

Footnotes

¹ Thanks to Ben Baumberg for suggesting the term ‘bad impact’ when commenting on an earlier version of this paper.

References

AHRC (2014), What We Do – Strengthen Research Impact: http://www.ahrc.ac.uk/What-We-Do/Strengthen-research-impact/Pages/Strengthen-Research-Impact.aspx (accessed 10 January 2014).
AHRC (2015), Knowledge Exchange and Partnerships: http://www.ahrc.ac.uk/innovation/knowledgeexchange/ (accessed 14 October 2015).
Alcock, P. (2012), ‘The subject of social policy’, in Alcock, P., May, M. and Wright, S. (eds), The Student's Companion to Social Policy. Oxford: John Wiley & Sons.
Back, L. (2015), ‘On the side of the powerful: the “impact agenda” and sociology in public’, Sociological Review, 23 September 2015. URL: http://www.thesociologicalreview.com/information/blog/on-the-side-of-the-powerful-the-impact-agenda-sociology-in-public.html
Bastow, S., Dunleavy, P. and Tinkler, J. (2014), The Impact of the Social Sciences: How Academics and Their Research Make a Difference. London: SAGE.
Bekker, M., van Egmond, S., Wehrens, R., Putters, K. and Bal, R. (2010), ‘Linking research and policy in Dutch healthcare: infrastructure, innovations and impacts’, Evidence & Policy, 6 (2): 237–53.
Black, D., Morris, J. N., Smith, C. and Townsend, P. (1980), Inequalities in Health – Report of a Research Working Group. London: Department of Health and Social Security.
Black, N. (2001), ‘Evidence based policy: proceed with care’, BMJ, 323 (7307): 275–278.
Boswell, C. (2009), The Political Uses of Expert Knowledge: Immigration Policy and Social Research. Cambridge: Cambridge University Press.
Briggle, A., Frodeman, R. and Holbrook, B. (2015), ‘The impact of philosophy and the philosophy of impact’, LSE Impact Blog, 26 May 2015. URL: http://blogs.lse.ac.uk/impactofsocialsciences/2015/05/26/the-impact-of-philosophy-and-the-philosophy-of-impact/
British Academy (2008), ‘Bringing both sides together – the “co-production” model’, section 6 in Punching Our Weight: The Humanities and Social Sciences in Public Policy Making – A British Academy Report.
Cabinet Office (1999), Modernising Government (White Paper). London: The Stationery Office.
Contandriopoulos, D., Lemire, M., Denis, J.-L. and Tremblay, É. (2010), ‘Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature’, Milbank Quarterly, 88 (4): 444–483.
Dagnino, E. (2007), ‘Citizenship: a perverse confluence’, chapter 35 in Cornwall, A. (ed.), The Participation Reader. London: Zed Books.
Davey Smith, G., Ebrahim, S. and Frankel, S. (2001), ‘How policy informs the evidence’, BMJ, 322 (7280): 184–185.
ESRC (2014b), What is research impact?: www.esrc.ac.uk/research/evaluation-and-impact/what-is-research-impact/ (accessed 10 January 2014).
ESRC (2014c), How to maximize research impact: www.esrc.ac.uk/funding-and-guidance/tools-and-resources/how-to-maximise-impact/ (accessed 10 January 2014).
Greenhalgh, T. and Fahy, N. (2015), ‘Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework’, BMC Medicine, 13: 232.
Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P. and Kyriakidou, O. (2004), ‘Diffusion of innovations in service organizations: systematic review and recommendations’, Milbank Quarterly, 82 (4): 581–629.
Hammersley, M. (2014), ‘The perils of “impact” for academic social science’, Contemporary Social Science, 9 (3).
Holbrook, J.B. and Frodeman, R. (2011), ‘Peer review and the ex ante assessment of societal impacts’, Research Evaluation, 20 (3): 239–246.
Innvær, S., Vist, G., Trommald, M. and Oxman, A. (2002), ‘Health policy-makers’ perceptions of their use of evidence: a systematic review’, Journal of Health Services Research & Policy, 7 (4): 239–244.
Institute for Government (2011), Making Policy Better. London: Institute for Government.
Katikireddi, S.V., Higgins, M., Bond, L., Bonell, C. and Macintyre, S. (2011), ‘How evidence based is English public health policy?’, BMJ, 343: d7310.
Kearnes, M. and Wienroth, M. (2011), ‘Tools of the trade: UK research intermediaries and the politics of impact’, Minerva, 49.
Knorr-Cetina, K. (1981), The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science. Oxford: Pergamon.
Lavis, J., Davies, H., Oxman, A., Denis, J.-L., Golden-Biddle, K. and Ferlie, E. (2004), ‘Towards systematic reviews that inform health care management and policy-making’, Journal of Health Services Research & Policy, 10: 35–48.
Lomas, J. (2000), ‘Using “linkage and exchange” to move research into policy at a Canadian foundation’, Health Affairs, 19 (3): 236–240.
Manville, C., Morgan Jones, M., Frearson, M., Castle-Clarke, S., Henham, M.-L., Gunashekar, S. and Grant, J. (2014), Preparing Impact Submissions for REF 2014: An Evaluation – Findings and Observations. RAND Europe (prepared for HEFCE, SFC and HEFCW).
MRC (2013), Handbook for Applicants and Grantholders 2013. Swindon: MRC.
MRC (2014), Achievements and Impact: www.mrc.ac.uk/achievementsandimpact/ (accessed 10 January 2014).
Naughton, M. (2005), ‘“Evidence-based policy” and the government of the criminal justice system – only if the evidence fits!’, Critical Social Policy, 25: 47–69.
Pain, R., Kesby, M. and Askins, K. (2011), ‘Geographies of impact: power, participation and potential’, Area, 43 (2): 183–188.
Penfield, T., Baker, M.J., Scoble, R. and Wykes, M.C. (2014), ‘Assessment, evaluations, and definitions of research impact: a review’, Research Evaluation, 23 (1): 21–32.
Phipps, D. (2011), ‘A report detailing the development of a university-based knowledge mobilization unit that enhances research outreach and engagement’, Scholarly and Research Communication, 2 (2): 1–13.
REF (2011), Assessment Framework and Guidance on Submissions (updated to include addendum published in January 2012). Bristol: REF.
REF (2012), Panel Criteria and Working Methods. Bristol: REF.
Research Councils UK (2011), RCUK Impact Requirements – Frequently Asked Questions. Swindon: RCUK.
Research Councils UK (2014), Joint Statement on Impact by HEFCE, RCUK and Universities UK. URL: http://www.rcuk.ac.uk/RCUK-prod/assets/documents/innovation/JointStatementImpact.pdf (accessed 10 October 2015).
Rothman, B.K. (2015), Research Ethics for Social Scientists: Beyond Protection of the Subject. Seminar, University of Edinburgh, 3 June 2015.
Sang, K., Powell, A., Finkel, R. and Richards, J. (2015), ‘“Being an academic is not a 9–5 job”: long working hours and the “ideal worker” in UK academia’, Labour & Industry, 25 (3): 235–249.
Slater, T. (2012), ‘Impacted geographers: a response to Pain, Kesby and Askins’, Area, 44 (1): 117–119.
Smith, K.E. (2013), Beyond Evidence-Based Policy in Public Health: The Interplay of Ideas. Basingstoke: Palgrave Macmillan.
Social Policy Association (2009), Social Policy Association Guidelines on Research Ethics. Available at: http://www.social-policy.org.uk/downloads/SPA_code_ethics_jan09.pdf
Stewart, E. and Smith, K.E. (2015), ‘Black magic and gold dust: the epistemic and political uses of “evidence tools” in public health policy-making’, Evidence & Policy, 11 (3): 415–37.
Tetroe, J. (2007), Knowledge Translation at the Canadian Institutes of Health Research: A Primer. FOCUS Technical Brief No. 18: 1–7. URL: http://www.ncddr.org/kt/products/focus/focus18/Focus18.pdf
Van Egmond, S., Bekker, M., Bal, R. and van der Grinten, T. (2011), ‘Connecting evidence and policy: bringing researchers and policy makers together for effective evidence-based health policy in the Netherlands: a case study’, Evidence & Policy, 7 (1): 25–39.
Warren, J. and Garthwaite, K. (2015), ‘Whose side are we on and for whom do we write? Notes on issues and challenges facing those researching and evaluating public policy’, Evidence & Policy, 11 (2).
Weiss, C. (1979), ‘The many meanings of research utilization’, Public Administration Review, 39 (5): 426–431.
Weiss, C. (1982), ‘Policy research in the context of diffuse decision making’, Journal of Higher Education, 53 (6): 619–639.
Wheatley, D. and Wu, Z. (2014), ‘Dual careers, time-use and satisfaction levels: evidence from the British Household Panel Survey’, Industrial Relations Journal, 45 (5): 443–464.
