Cultures of Evidence beyond the Health Sector: Understanding policy decision-making in English local government for improving action on the social determinants of health. Project 2: Ethnography of policy decision
SPHR-LSH-PH1-LAS
01 April 2014
1 September 2012
31 October 2013
14 months
Decision Making; Evidence; Built Environment; Local Authorities; Evaluation; Collaboration; Qualitative
The research is entirely qualitative; no quantitative data were collected in the case studies. We used an ethnographic approach to generate the qualitative data for each case study.
We added a series of three workshops with local government practitioners in the built environment. These stakeholder workshops were not explicitly detailed in the original application, which focused on the ethnographic case studies of policy development and the housing interview study. They were developed to complement those components by allowing more directed discussion with users about the nature of evidence and its use and production in local authority non-health sectors, and thus to address one of the study objectives (Objective d) more specifically.
We no longer focussed the outputs from the project exclusively on knowledge transfer strategies, and instead considered more broadly what we can learn from the study to inform future evaluative research with local authority sectors and about how research can better influence the policy development process.
In our programme of work on knowledge needs and practices in English local government we sought to answer the following questions:
In contrast to much of the previous work in this field, we specifically sought to avoid the normative stance that all decisions should be supported by research evidence and that academic research methods are the best way of evaluating practice. Building on previous research, we designed a systematic approach to generate both breadth and depth of understanding.
We began with a systematic review of qualitative (mostly interview) studies asking local policymakers and decision-makers in sectors outside of public health to describe their understanding of ‘evidence’ as used in their work and to discuss what determines their use of academic research evidence. This review informed three subsequent primary research studies. An ethnographic study of policy and practice in local government provided in-depth data about the types of knowledge used and valued in everyday work and how the ‘success’ of activities is defined and demonstrated by officers. We also conducted a qualitative interview study to understand when, why and how academic research approaches have been used to evaluate work in local government, focussing on housing and urban regeneration programmes. Finally, we used focus groups with policymakers in the built environment sector at the local and regional government level to address more directly the types of knowledge they require in their work, with a focus on their perceptions of what researchers could add to this.
Local government officers place great importance on their knowledge of the local environment in identifying ‘problems’ and deciding on appropriate responses. This knowledge is based on both:
Local government officers manage a range of stakeholders and differing agendas in deciding policy and activities (national government policy, local businesses and residents, local politicians, executive managers). Their local data (‘evidence’) is put into conversation with their professional expertise to manage these different needs and agendas and decide on the most appropriate course of action in a given situation and justify this decision.
A narrative of uniqueness supports the importance of local ideas and knowledge and is accompanied by a sense of competition with other local authorities. ‘Innovation’ in practice is also highly valued in the culture of local government. A public narrative of ‘sharing best practice’ does not necessarily lead to transfer of ideas and approaches because of restrictions in time or finances and because often ideas from other areas are not seen to fit the local needs and issues.
Evaluative activities are primarily about justifying decisions and practices to local actors or to higher tiers of government in relation to the range of stakeholders and the different frameworks of accountability that they represent (legal, financial, local participatory/consumer democracy, New Public Management). This range of agendas and accountabilities provides a range of outcomes for judging success. In some senses, an activity will rarely ‘fail’ against all of these outcomes.
Evaluation is not about judging the universal merit of an approach and deciding whether to continue or discontinue it. Within the narrative of uniqueness and innovation, practice is continually evolved, responding to new knowledge from a range of sources and to changes in context (e.g. budgets, public opinion, and national legislation).
Taking housing as a case study, this qualitative study sought to explore the factors that contribute to successful evaluative research in the non-health sector. There have been many calls for more evaluative evidence on the upstream determinants of health, with a consequent focus on the barriers to the production (and uptake) of research evidence. Whilst this literature has been useful in outlining the challenges in collaborative research, this study sought to move beyond it and focus on why evaluations do happen.
Eight studies were purposively selected from a systematic review of published intervention studies in housing that both included health outcomes and utilised a controlled design. The sample included a range of countries; a range of intervention scope (from large-scale regeneration to smaller household interventions); and studies that found evidence of effect as well as studies that found none. For each sampled study, we interviewed the principal investigator and at least one other participant involved in the study (e.g. a research user or non-academic collaborator). A total of 16 interviews were undertaken, with four public health specialists, seven academics, four local authority employees and one national authority employee (some individuals were involved with more than one study).
Interviews used a topic guide to elicit information on how and why the controlled studies were undertaken: where the idea came from, who was involved, what the benefits had been to the participants and their institutions, why they got involved, and their views on what inhibits and facilitates research in the non-health sector. Interviews were audio recorded and transcribed with participant consent, and transcripts were analysed using thematic content analysis. The study was approved by the LSHTM Ethics Committee.
We found that, although our interviewees reported challenges similar to those described by practitioners in studies of why evaluations are not undertaken, the 'barriers' detailed were not insurmountable and did not prevent evaluative evidence being generated and published. In particular, a 'lack of understanding' of the need for evaluative evidence was neither a particular barrier, nor as tied to institutional setting as is often assumed (by both the literature and our own interviewees). There was more understanding amongst non-academics of research design rationales than might have been anticipated. Conversely, academic research was often described as more flexible and ‘messy’ than published papers suggest.
Different ‘cultures of evidence’ across academic and practice sectors were reported but were not seen as barriers. Similarly, that ‘success’ was differently framed across collaborating institutions was not, in practice, a barrier. Both academic and local authority partners reported that anecdotal, qualitative work and common sense might be more appropriate indicators of success than the quantitative evaluation results.
Importantly, most of the studies in our sample were reported as arising from ongoing networks, with the ideas for evaluations often emerging organically through the maintenance of these relationships. Crucially, ongoing networks allowed the partners to exploit ‘windows of opportunity’, where funding calls or planned interventions provided the possibility for evaluation.
Exploring ‘success’ rather than ‘failure’ enabled us to shed a different light on the issue of how to foster evaluation in the non-health sector. We suggest that for public health practitioners interested in developing the evidence base for areas such as housing, the question is not one of ‘challenges to be overcome’. Instead of focusing our efforts on education about the need for controlled designs for evaluation, a more productive way forward might be to maximise the potential for the right conditions for evaluations to happen. From our interview accounts, these are likely to be an existing network of collaborators who can take advantage of windows of opportunity for evaluations, with the commitment, resources and trust to mobilise the larger networks needed to deliver them.
A series of three half-day workshops was held with practitioners working in the built environment. These workshops aimed to elucidate how those working on the ‘upstream determinants’ of health conceive of, and utilise, evidence and information in their work. Participants working on areas of the built environment in local government were recruited on the basis of the geographical location of their work: the first workshop comprised international participants, the second recruited decision makers from London, and the third was formed of practitioners from the North West of England. The workshops utilised a focus group methodology; each had four to six participants and was chaired by a senior academic who used a topic guide and prompts to facilitate the discussion. Each workshop was audio recorded and transcribed. Both deductive and inductive analysis techniques were employed: a coding structure was defined based on existing literature on evidence and policy and on researchers’ discussions of emergent findings, and the codes were applied to the data and refined throughout the analysis. All participants provided informed consent to partake in the study, and ethical approval was obtained through the London School of Hygiene and Tropical Medicine.
A total of fifteen decision makers, from a range of backgrounds including planning, housing and leisure services, participated in the workshops. Analysis of the workshops is still ongoing; some emergent findings are presented below. Participants had wide conceptualisations of what constitutes evidence, including routinely collected data sources, surveys, maps, projections, records, government guidelines, anecdotes, case studies, models, drawings, photographs and academic research evidence. More emphasis was placed on utilising data and information than on ‘evidence’ in the more academic sense. Decision makers in local government underscored that case studies, whether of their own innovative work or from a similar context, are more useful than generalised evidence. Within this discussion they specifically highlighted the issue of generalisability in terms of geography and temporality: that is, decision makers in local government believe that evidence emerging from areas most similar to their own, and evidence that is relatively recently produced, is the most useful for making decisions.
The decision makers identified a range of influences and considerations to rationalise both the use and non-use of evidence when making decisions. Evidence is used when it provides support and justification for actions or programmes; it is also used to create policies that will be tested through public inquiry, and where a legal requirement exists to use evidence. Decision makers also described how evidence is particularly useful when making bids for money, to demonstrate the value of a chosen project or course of action. On the other hand, decision makers also identified some of the constraints on evidence use, such as the political nature of local government. Specifically, politicians are seen as needing to balance a range of priorities and considerations, of which utilising evidence is only one. In addition, decision makers, particularly those from the North West of England, highlighted the constraints of the current economic environment. Given the current budget cuts, decision makers described a climate of reduced resources for gathering and analysing data, a lack of funds to commission research, and an overall emphasis on costs and cost reduction. Finally, the decision makers described a number of challenges to utilising evidence, including understanding and employing suitable methods for evaluation, the complexity and relevance of academic studies, and the challenge of synthesising large bodies of evidence.
This study highlights the need for locally relevant evidence for local government decision makers working in the built environment. It has shown that public health and the built environment may have different cultures of evidence, but there is now an opportunity, as public health has moved from the NHS to local government, to work with, and learn from, one another. This study helps lay the foundation for useful engagement between public health and the built environment sectors of local government by achieving a deeper understanding of their evidence cultures.
Using evidence to describe problems and decide on solutions is very important to public health professionals, for example using research from universities to decide how to promote healthier lifestyles amongst the public. In April 2013 public health was moved out of the NHS into local authorities. Public health professionals now have to change their ways of working to fit more with how people work in local government. Many health professionals have been worried that people working in local government do not think of evidence in the same way that they do, and therefore question how good local government's work can be. It is also a concern for researchers: if people in local government are not interested in the work that researchers do and the evidence they provide, what is the point of that work?
Rather than assume that these views and opinions are true, we decided to investigate how decisions are made in local authorities around services that might impact on people’s health, such as alcohol and gambling licensing, housing, parks and leisure facilities, community safety and transport. We did three things:
We found that:
This was a year of development work to inform the future research programme at LSHTM. As such, the research aims and objectives were determined by the research team in relation to our current knowledge and understanding. The level of public involvement in the design, conduct and interpretation of these particular research projects was therefore limited to the strategic direction provided by the management committee which includes advisors from local government.
This programme of research has led to two clear actions. First, officers and practitioners identified challenges surrounding suitable metrics and data for local government. As a result, we are currently producing a practitioner output for planners and developers that seeks to address some of these issues. In addition, participants identified ongoing challenges surrounding evaluation of their work. With public health now located within the local authority in England, our work showed that the flow of better evaluation evidence could be increased if academics focused more on developing such networks and on finding and developing common areas of interest between practitioners in local authorities and public health researchers (more fruitful than emphasising differences and fostering the view that evaluation research is difficult). In response to these findings and needs, SPHR@L has begun a series of workshops aimed at helping local government decision-makers learn about, and work through collaboratively, their evaluation needs and methods. The first workshop was held in April 2014 and received positive feedback from the participants. We plan to continue revising the agenda and running these workshops in the future.
This project was funded by the National Institute for Health Research School for Public Health Research (project number SPHR-LSH-PH1-LAS).
The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the NIHR School for Public Health Research, NIHR, NHS or the Department of Health.
© NIHR 2024