Evidence-based what?

Evidence-based practice (EBP) is an offshoot of a movement within medicine and health services (evidence-based medicine, EBM) that called for clinical decisions to be based on the best available research knowledge and critical scientific evidence. The origins of EBM are vague but are usually attributed to health researcher Archie Cochrane (1972), who complained that medical practice seemed to value personal opinions over scientific experimentation. Although it seems obvious now, EBM was at the time a novel approach to medical practice. The medical community’s call for EBM argued that knowledge was indeed dynamic and that the latest evidence should be incorporated into practice to optimise treatment outcomes. Today, the EBP concept is used as a benchmark by policymakers and funders, and most professions are being confronted by EBP, with exercise professionals being no exception. The appeal of EBP rhetoric is pretty difficult to ignore. After all, who wouldn’t want to be tended by a professional practising with the best available evidence? Well, a 10th man might have some contrary thoughts on that! The word ‘evidence’ is, of course, a loaded one. We know from legal trials that evidence is not simply accepted as fact; rather, it helps build a case. I wonder whether exercise EBP advocates have really considered the nature of the work of exercise professionals, or the types of research that are considered evidence for the profession.


The 10th man (TTM) is referencing strength and conditioning coaches in this blog, but the core arguments likely remain applicable to all exercise professionals. Strength and conditioning coaches face the daily challenge of designing and delivering training sessions that are taxing yet stimulating for athletes. Their programmes are expected to be substantiated, yet at the same time coaches are respected for being innovative and ‘cutting edge’. These expectations are further contradicted by a tendency for most sports and sports training to be firmly tradition-bound, persisting with what has ‘worked before’ or has brought success to others. TTM could argue that both the old and the new are probably somewhat lacking in scientific rigour. In the face of these multiple sources of potentially dissonant and contradictory information, a strength coach ultimately has to stand in front of athlete(s) and confidently present a training system that gives no hint of the indecision that may have preceded their choices!

The complexity of physical activity, exercise and training presents exercise professionals with several unique dilemmas. Their work is based on scientific principles, but exercise professionals have to respond to the nuances of human biophysical responses and adaptations, and the capriciousness (there’s a word loved by the 10th man!) of human behaviour. While the relationship between exercise stimuli and responses may be directionally predictable, the rate and magnitude of a response depend on an individual’s physiological and emotional reactivity. For example, an individual might or might not choose to follow exercise advice, and even if they do exercise, they might ‘maximally’ or ‘barely’ exert themselves within sessions. Skinner et al (2001) illustrated this beautifully in the Heritage Family Study, a large trial in which participants followed an aerobic conditioning programme for 30 minutes, three times per week, over 20 weeks. The average increase in VO2max was 19%, and although everyone received similar training stimuli, 5% of participants improved by more than 40% while 5% had little or no change in aerobic fitness. A handful even lost fitness! As precise as we like to think we are, individuals will respond differently to identical programmes; or, put another way, if we want to achieve similar fitness results in a group of athletes, we are likely going to have to train them differently. With so many variables capable of influencing outcomes, competent exercise professionals understand that the prescription of physical and physiological stressors can be, at times, pretty inexact. These physical factors may, in reality, be less complex than a client’s behaviour or the multiple other determinants of exercise outcomes. Collectively, many factors complicate the training process and emphasise the dilemma of trying to establish proof or evidence in the exercise and training domain.
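
To make that concrete, here’s a minimal simulation sketch (Python; the normal distribution and its spread are my assumptions for illustration, not Skinner et al’s actual data) of how a tidy 19% average can coexist with non-responders and even adverse responders:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative only: individual % changes in VO2max after an identical
# 20-week programme, assumed roughly normal with a mean near the
# Heritage average (19%) and a wide individual spread (SD = 13 is my
# guess, chosen so the tails loosely echo the reported extremes).
n = 1000
change = rng.normal(loc=19.0, scale=13.0, size=n)

print(f"mean change:      {change.mean():.1f}%")
print(f"improved > 40%:   {(change > 40).mean():.1%} of participants")
print(f"little/no change: {(abs(change) < 5).mean():.1%}")
print(f"lost fitness:     {(change < 0).mean():.1%}")
```

The exact numbers don’t matter; the point is that the same mean can hide very different individual stories.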

Unlike medicine, where treatments can be applied and symptoms monitored, a strength coach’s input is (TTM thinks) once removed from targeted outcomes. By this, TTM means that although physiological and physical variables can be progressed and measured, how these impact on health, wellbeing or sporting performance may be entirely subjective. For example, a strength coach might deliver a well-designed periodised programme to improve a rugby player’s speed. Any observed improvement in on-field speed could, however, be attributed to new boots, a pre-workout supplement, improved technique, improved sleep patterns, muscle fibre type, or the advice a mental skills coach dispensed. Linking an effect to its cause is problematic in sport, which in turn makes it challenging for an exercise professional to evaluate the efficacy of an existing practice or new research.

The research

Randomised controlled trials (RCTs) are considered the strongest form of evidence (Kirmayer, 2012), but those three words, randomisation, control and trialling, are worth considering further within an exercise context. As well as randomisation into treatment groups, it’s common for RCTs to use restrictive recruitment and inclusion criteria. These well-intended practices can provide a false sense of homogeneity of effects, with actual differences being underestimated (Kravitz et al, 2004). The average change we observe may be concealing the heterogeneity of responses; in reality, a few individuals could have had meaningful benefits (great), some may have experienced little change (near useless), and for a few, the intervention might have been unfavourable (worse than useless). In practice, based on their experience, knowledge and understanding of the individual, practitioners have the discretion to prescribe differently for each person. Although random assignment may be experimentally correct, it removes a practitioner’s discretion to identify who might benefit most from a particular intervention, which is practically important.
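
A toy sketch of the Kravitz et al point (Python; every number here is invented for illustration): if responsiveness tracks an observable trait such as baseline fitness, then tight inclusion criteria on that trait shrink the spread of responses a trial sees, relative to what a coach actually meets:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented population: training response (% improvement) depends partly
# on an observable trait (baseline VO2max, say), plus individual noise.
n = 10_000
baseline = rng.normal(45.0, 8.0, n)                    # observable trait
response = 60.0 - baseline + rng.normal(0.0, 5.0, n)   # hypothetical ceiling effect

# A tight inclusion window on the observable trait, as an RCT might use.
included = (baseline > 40) & (baseline < 50)

print(f"whole population: mean {response.mean():.1f}%, SD {response.std():.1f}")
print(f"trial sample:     mean {response[included].mean():.1f}%, "
      f"SD {response[included].std():.1f}")
# The trial's tidier SD is the 'false sense of homogeneity';
# responses in the gym population are far more varied.
```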

‘Control’ suggests that all elements not directly related to an intervention are held constant, to avoid any potential interaction between those elements and the method being investigated. We could miss something vital here; something that could inhibit or embellish a treatment’s efficacy, and in turn drastically alter the effect in an actual training setting. Green (2006) suggests that rather than understanding how various variables might influence an outcome, researchers often think of these variables as challenging nuisances that have to be controlled, so those influences are typically negated or neutralised. Practice-based research needs to include those ecologically important variables where possible, to help understand their influences, interactions and individual differences. The 10th man wonders whether it should be mandatory for every exercise/sports science researcher to periodically spend time embedded in a fitness centre or with a sports team, in order to fully appreciate the contexts they are researching.

The final element of RCTs, ‘trials’, suggests that a very prescriptive intervention is trialled for a study. In reality, the size, frequency and progression of a ‘dose’ need to be based on measures, responses or observations, rather than something predetermined. Trials are also typically time-limited, with most exercise training trials spanning 8–12 weeks. Practitioners need to know about the long-term effects of consistently ‘dosing’ an athlete with a training method, and how sustainable a training intervention really is.

Where does this leave us?

EBP seems to be viewed by some as a panacea: a necessary means to demonstrate accountability, and perhaps to regain some semblance of control over ‘untrustworthy’ information. The 10th man considers both sides of the EBP debate to have some merit. Any professional can find reasons to be dismissive of research findings. We may not embrace findings if we are not comfortable with them, or if the suggestions are not consistent with our beliefs and practices. Researchers can be equally disparaging of practice if they believe it lacks rigour or challenges dominant paradigms. We do need to remember that a lack of evidence for effectiveness is not the same as a lack of effectiveness. Researchers have the power to avoid being dismissive of practices and to recognise that they can learn a whole lot from practitioners. Through better understanding, researchers can help practitioners feel more comfortable exploring and describing their practices and interacting with researchers and their research. Practitioners, for their part, have the power to approach research with a positive mindset, perhaps not seeking definitive answers but understanding that all knowledge can provide research ‘nuggets’ that may help inform practice. In the absence of appropriate research, competent exercise professionals will continue to do what they have always done: use their knowledge and experience to reason professionally, to fill the gaps in their understanding, and to continue to evolve the profession. That spells Practice-Based Evidence. That’s what I’m thinking today, anyway.

Best, Phil