• By Dartington SRU
  • Posted on Friday 27th September, 2013
Tags: Evaluation Evidence, Intervention Specificity, Standards of Evidence

Logic model lessons from the kitchen

In his book Cooked: A Natural History of Transformation, Michael Pollan explores why we do what we do when we cook. He looks at what he calls the ‘transformative fundamentals’: cooking with fire, cooking with water, cooking with air and cooking with earth. But he doesn't just learn why these techniques work: he finds that developing a better understanding of the science behind them helps him to cook better.

This prompts the question of why we do what we do in children’s services. Why do we train parents in parenting skills, teach children how to recognise their feelings, or encourage children’s participation in sport - all things represented by interventions listed on Investing in Children?

Any intervention listed on Investing in Children as 'Blueprints approved' must set out clearly the mechanisms by which it is hypothesised to produce positive outcomes. It must allow us to peer inside the black box - or, to use a motoring analogy, to look under the bonnet and see why turning the ignition and pressing the accelerator should make the car move.

Such hypotheses are increasingly referred to in our field as 'logic models'. A logic model is a representation of how an intervention is supposed to work. It describes simply and clearly why an intervention is expected to achieve desired outcomes for children and families. Sometimes it is called a ‘theory of change’ to reflect the idea that it tracks the steps predicted to change a problem situation.

When we work with innovators who are designing services, we ask them to develop a logic model. This is part of the process of moving an intervention along the innovation-to-impact pipeline. Put another way, it helps interventions move 'up' the standards of evidence - specifically the dimension we call 'intervention specificity' - with a view to them becoming the evidence-based programmes of tomorrow. At the Social Research Unit we recently published a short primer on how to do this, called 'Design & Refine'.

The benefits of developing a strong logic model are considerable. An intervention underpinned by sound logic is more likely to be successful than one that is poorly thought-out. Logic models also help communicate the rationale of an intervention to a wide audience, including the practitioners who deliver it, the children and families who receive it, and the people who commission it. Practitioners often make the same discovery as Pollan: that understanding why you are doing something helps you to do it better. Lastly, logic models help evaluators to know what to measure.

There is no set way to develop a logic model, but our preferred approach focuses on risk and protective factors. At its heart is a sketch of one or more routes leading to the poor outcome in question. The intervention is mapped onto this, showing how its components reduce risks or boost protective factors. For example, an intervention to improve children’s behaviour might include parent training to reduce inconsistent parenting and mentoring to provide the child with a significant adult.
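To make the mapping concrete, here is a minimal sketch of how such a model's connections might be written down so that each link can be interrogated one by one. It is illustrative only: the outcome, factors, components and hypotheses are invented for this example, not drawn from any published model.

```python
# Illustrative sketch only: an invented logic model expressed as data,
# so that every component-to-factor link is explicit and can be challenged.
logic_model = {
    "outcome": "improved child behaviour",
    "risk_factors": ["inconsistent parenting"],
    "protective_factors": ["significant adult in child's life"],
    "components": [
        {"name": "parent training",
         "targets": "inconsistent parenting",             # reduces a risk factor
         "hypothesis": "consistent parenting improves behaviour"},
        {"name": "mentoring",
         "targets": "significant adult in child's life",  # boosts a protective factor
         "hypothesis": "a trusted adult models and reinforces good behaviour"},
    ],
}

# Every component should map onto a named factor; anything unmapped is a
# candidate for pruning during the iterative review described below.
factors = set(logic_model["risk_factors"] + logic_model["protective_factors"])
for component in logic_model["components"]:
    assert component["targets"] in factors, f"{component['name']} targets nothing in the model"
```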

As a logic model is being developed, two critical questions need to be answered. The first is whether the hypothesised connections are plausible. For example, do the connections ring true with the children and families who will receive the intervention, or with the practitioners who will deliver it? Are there circumstances under which the intervention might not work? What are the likely unintended consequences? We recommend convening a group of critical friends and asking them to try to pick apart every connection.

The second question is whether research evidence supports the hypothesised connections. At a minimum this requires consulting one or more scientific experts; ideally it involves a review of the relevant literature.

The task of developing a logic model is challenging. It is easy to compile unwieldy lists of risk and protective factors and outcomes, and hard to articulate the connections between them. The best interventions have logic models that are precise and modest. In the course of developing a logic model it may also become apparent that some intervention components are not a good fit with the desired outcomes. This can be frustrating, but ultimately that is the purpose of the exercise - to check whether the concept is coherent. The process is iterative, so pruning is to be expected.

No substitute

There is a danger that, by focusing on positive connections, a logic model can appear to imply that the intervention will work for everybody. It won’t. Logic models deal in probability. Each one is a hypothesis that providing the intervention increases the chance of positive outcomes; it does not guarantee that the intervention will work. There may also be unintended consequences, some of which could be negative.

So evidence-informed logic models are no substitute for high-quality impact evaluation. Equally, to understand how an intervention works, an evaluation also needs to test whether the hypothesised links in the logic model actually materialise. For example, if the intervention seeks to improve child behaviour by reducing inconsistent parenting, does parenting improve, and does this contribute to improved behaviour? More such ‘mediator analysis’ is needed in our field, which is why it is a desirable (or 'Best') criterion in our standards of evidence.
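As a rough illustration of what testing one such link can look like, here is a minimal mediation sketch in the classic two-regression (Baron and Kenny) style. The data are synthetic and the variable names (treated, parenting, behaviour) are assumptions invented for the example; a real analysis would use trial data and more robust mediation methods.

```python
# A minimal mediation sketch, Baron & Kenny style. All data are synthetic;
# 'treated', 'parenting' and 'behaviour' are invented names for this
# illustration, not drawn from any real evaluation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treated = rng.integers(0, 2, n)                   # intervention indicator
parenting = 0.5 * treated + rng.normal(size=n)    # hypothesised mediator
behaviour = 0.4 * parenting + 0.1 * treated + rng.normal(size=n)

# Path a: does the intervention shift the mediator (parenting consistency)?
path_a = sm.OLS(parenting, sm.add_constant(treated)).fit()

# Path b: does the mediator predict the outcome, controlling for treatment?
exog = sm.add_constant(np.column_stack([treated, parenting]))
path_b = sm.OLS(behaviour, exog).fit()

a = path_a.params[1]   # effect of intervention on parenting
b = path_b.params[2]   # effect of parenting on behaviour
print(f"indirect (mediated) effect: {a * b:.3f}")
print(f"direct effect of intervention: {path_b.params[1]:.3f}")
```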

And if such an evaluation reveals that the intervention doesn’t work – what then? It doesn’t automatically mean that the logic model is at fault. It could be that the intervention wasn’t implemented properly, or that it went to the wrong children. In culinary terms, perhaps there was too much water, or the fire was too hot, or the ingredients weren't up to scratch. This shows why having good systems to measure fidelity of implementation is so important - another criterion within the standards.
