Devil’s in the details

Sometimes I wonder about the supposed advances in science, particularly the rapid emergence of new technologies, and whether they are often more problematic than helpful. We live in a world where we prefer to see things as causes and effects. Mostly we like to believe that when something happens there has to be an identifiable cause or reason for that event. It’s especially helpful to our psyche if we can assign some blame to that cause, because then we can seek ways to rectify any causative factors. Those floods were caused by global warming, which in turn was caused by our irresponsible and continued use of fossil fuels. That earthquake we can probably blame on the fracking that we invited into our backyards. All in all, we like a good cause-and-effect story; we like that perceived certainty.

Our world of health, fitness, exercise, and sport is of course largely based on cause-and-effect relationships. If you put on weight, it was because you were not exercising enough and/or you were eating too much. If you got injured, you didn’t stretch or warm up enough. If we get you to lift some weights two or three times per week, this will cause you to get bigger and stronger. So we are constantly trying to cause changes by manipulating various parameters. And that’s all well and good, because this is what we do. People come to us seeking effects, and we come up with programmes based on our understanding of the things that we think cause those effects.


So how could my inner 10th man perceive that any of this could be problematic? I get nervous when exercise professionals get carried away and overconfident in their belief that they really know and understand the links between cause and effect. I had a student a few years back with whom I was doing a case review. His client, let’s call him Tom, was seeking to get a bit fitter, lose a little weight and, mainly, to support his partner through her exercise initiative. When I reviewed the student’s proposed programme, all I was seeing was a lot of what we could describe as prehabilitative (whatever that is?), rehabilitative and core (as in ‘trunk’ core) exercises. So nothing in the programme really appeared to address getting fitter or losing weight! My first instinct was to go back to the client profile and check that I hadn’t missed something identifying an injury or a specific problem that would justify or explain the suggested exercise programme. Nothing! So I asked the student – what’s going on here? I can’t see that the client has any problems – have I missed something? It turned out that he had just completed a workshop, loosely functional-movement-screen-style stuff. He had used this with Tom (a habitual non-exerciser) and had identified potential problems that might arise for Tom. This was great learning for me and the student, as we discussed how he was using innovative methods (well done) that had prompted him to identify or diagnose problems that did not exist for Tom. In the right context this might have been considered good practice, but in this case Tom was not really having his needs addressed and was getting something that he arguably didn’t require.

With advances in technology, heart rate measurement has become increasingly accessible and accurate. Technology has moved rapidly from ‘you can wear this strap and monitor your heart rate during your workout’ to ‘you can wear this watch and monitor your heart rate 24/7’. So now the thinking goes – since we are generating all of this heart rate data, it would be a real shame not to mine it a little further and see whether we can detect meaningful patterns. Patterns that MAY be associated with illness, your readiness to perform, your state of recovery or preparedness. Do you see where I am going with this? My inner 10th man suspects that with all of these advances we now have the ability to overmeasure and under-understand (the alliteration is all mine – deal with it). I’m suggesting that we are often choosing to ignore some things (what we see, what people tell us) and instead giving priority to, and trusting, the data. I suspect that we are oftentimes misusing data to diagnose problems that don’t actually exist. Now, I understand the broader value of these technologies and the usefulness of data mining and monitoring. However, we need to continue to see the bigger picture. If we are genuinely playing our 10th person role, we should be asking searching questions. What are we missing here? Are we really seeing cause-and-effect relationships, or are we seeing coincidences or associations that don’t cause effects? Is there any potential harm in using this data in this way? Did we maybe miss something really important because we were looking the wrong way? Give it some thought and give me some examples.
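For those who like to see this rather than take my word for it, here is a small sketch of the trap. Everything in it is invented for illustration: the heart rate numbers are simulated random noise, and the 200 "lifestyle variables" are also pure noise with no relationship to anything. Mine enough unrelated signals against your data and something will look like a pattern purely by chance – an association, not a cause.

```python
import random
import statistics

# Hypothetical illustration: simulated resting heart rate for 60 days,
# pure random noise around 60 bpm -- there is no real pattern here.
random.seed(42)
days = 60
resting_hr = [random.gauss(60, 3) for _ in range(days)]

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# "Mine" 200 invented lifestyle variables, each also pure random noise,
# and keep the strongest correlation we stumble across.
noise_variables = [[random.gauss(0, 1) for _ in range(days)] for _ in range(200)]
best = max(abs(correlation(resting_hr, v)) for v in noise_variables)

# With this many comparisons, a "striking" correlation appears by luck alone.
print(f"strongest correlation found in pure noise: {best:.2f}")
```

Run that and the strongest of the 200 correlations will look respectable, despite every number being random. That, in miniature, is overmeasuring and under-understanding: the more signals you collect and compare, the more convincing coincidences you will find.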

Best, Phil