By Dartington SRU
Posted on Sunday 18th May, 2014

Real world dragging down your programme?

It is acknowledged as one of the more delicate moments in the development of evidence-based programmes, and the place where many go wrong. The moment in question is the transition from the ‘laboratory’ to the ‘real world’: in other words, when the science is truly put into practice.

The question is implementation. How do you make sure it is done effectively? In 2005, Dean Fixsen, co-director of the National Implementation Research Network, together with his colleagues, identified the essential components of any implementation practice or programme. Staff selection and training are key, but so are ongoing consultation and coaching, along with staff and programme evaluation.

However, because evidence-based programmes (EBPs) are almost always part of social policy, managing the political component is critical. But, they say, blind spots exist in the literature.

Fixsen and his colleagues mentioned ‘system interventions’: in other words, strategies to ensure the availability of the financial, organisational and human resources required to support practitioners. But they argued that while there is indirect evidence showing these to be important, further research is urgently needed.

Staffan Johansson, from the University of Gothenburg, Sweden, argues that such implementation problems have been extensively researched in the field of public administration. He wonders if this research can fill the gap.

He points to Michael Hill and Peter Hupe’s state-of-the-art review of public administration, also published in 2005, which examined 165 articles in 33 journals. That review identified seven independent variables that determined whether an implementation was successful or not. These variables fall into two clusters, both mostly focused on political concerns: those that have an effect on the substantive policy at the system level, and those that shape the institutional context at the societal level.

“If the organisation level seems to be the frontier in the research on EBP, the system level and the societal level seem to be the new frontiers in research on policy implementation,” says Johansson.

In comparing the EBP implementation review that Fixsen conducted with the implementation review in public administration, Johansson was struck by their different understandings of the problem. Of 418 references cited by Fixsen and colleagues, and 508 in Hill and Hupe’s study, only four appear in both.

To illustrate how the two traditions might be exploited, Johansson describes an attempt by the Swedish health care and social service sectors to implement national evidence-based guidelines for people with drug abuse problems.

In order to help detect pitfalls in implementing this policy, he draws on the Hill and Hupe review to recommend seeking answers to seven questions:

(1) Is the policy clearly specified, resourced, and supported? Are there any hidden conflicts which will create ambiguity?

(2) Should the policy allow for adjustments in the implementation chain between different actors and agencies, and also over time?

(3) How is the policy conceived on different vertical levels, from central government to local municipalities?

(4) What kind of behaviour can be expected from the implementation agencies and their professional groups? To what extent can this be controlled?

(5) What kind of problems can be expected in the horizontal inter-organisational relationships, when there is a need for cooperation between different authorities and organisations?

(6) What kind of responses from those affected by the policy can be expected?

(7) How is the policy conceived by important stakeholders in the policy context?

Once those questions have been answered, Johansson says, “In the more practical implementation efforts in each agency, the concepts and findings from the research on implementation of EBP will probably be helpful.” These findings, he says, can guide “how management can develop programmes for staff selection, training and coaching, for administrative support, and specifically for getting the tools to measure staff and programme fidelity.”

In other words: start with Hill and Hupe to make sure the policy has the right environment to grow, then use Fixsen to make sure it’s being correctly tended in that environment.

Some caution is required, Johansson argues, because “much of the American research is based on an organisational context where a great deal of human service work is executed by non-governmental organisations, and politicians (as opposed to the situation in many European countries) are to a great extent absent from the organisational life in which policy is implemented.”

But his analysis amply demonstrates the need for cross-pollination between the two fields of implementation research. He says, “If the implementation challenge is seen not only to be about bridging the gap of knowledge between research and practice, but also to include issues concerning resource allocation, priorities, ethical considerations, the distribution of power between politicians, administration, professional groups and client groups, and, to a great extent, to exceed organisational boundaries, there is certainly a need to include analytical tools from the research on public administration.”


Johansson, S. (2010) ‘Implementing evidence-based practice and programmes in the human services: lessons from research in public administration’, European Journal of Social Work, 13 (1), 109-125.

Fixsen, D. L. et al. (2005) Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida.

Hill, M. & Hupe, P. (2005) Implementing Public Policy: Governance in Theory and in Practice. London: Sage.
