It has real bite in SF mostly when it is contrasted with the leaning, in a problem-saturated world, towards exploring what doesn’t work. We can then draw a valuable distinction about what is likely to happen when we put our attention one way or the other.
Now governments and other funding bodies are increasingly interested in what works, under the auspices of ‘evidence-based’ regimes that decide what does and doesn’t get funded. This is clearly a rather different ‘what works’, as no SF practitioner would argue that anything that happens to work is thereby SF.
This kind of ‘what works’ can perversely, even ironically, discriminate against SF. SF practitioners don’t start with an exploration of problems in order to produce a clinical diagnosis which they then set about curing. So if funding bodies are looking for ‘what works with schizophrenia’, for example, SF work with clients who might have attracted such a diagnosis (unhelpfully, in the SF view) makes no such assessment and doesn’t even get to the starting line.
And so in an ‘evidence-based’ world, SF may be fated to remain a counter-culture movement, spread by rumours of success.
It’s also true that we can do comparative studies, for example with patients whom diagnosticians say have schizophrenia or who carry other labels, but that’s not quite the same as working with clients who have yet to be weighed down by labels.
That, then, is the dilemma. To attract more attention and funding, SF needs a stronger ‘evidence base’ and must find ways to put up a good showing on playing fields that are far from level.
How can we become more a part of the influential conversations at the top tables? How can we increase the amount of research that accompanies our work, from clinics to coaching to organisations? How can research students be tempted to use their ingenuity to compare like with like, so that we can show (I expect) that SF is indeed ‘what works’?