I get the odd email here and there asking why I haven’t posted any blogs over the past several months.
The reason is pretty simple: I’m back in school working towards a PhD in Medical Science, and my spare time is spoken for with research and studying.
So, why would I choose to go back to school at this stage in the game? My career was going well. I was comfortable. I could have kept focusing on growing my business. However, I felt stagnant, and my ultimate goal — no, not that, let me keep the author's punctuation. However, I felt stagnant. My ultimate goal, to contribute to the body of knowledge in sport science and to mentor and develop the next generation of strength and conditioning coaches in our centre, required me to get more training so that I could supervise graduate students.
You may be thinking that a PhD seems like overkill for someone who is feeling stagnant. I mean, there are lots of ways that practitioners in the field of strength and conditioning and sport science try to remain fresh.
For example, I could have taken a professional development course… maybe a course on kettlebell training or something.
I could also attend a few more conferences and maybe double the number of hours I spend reading scientific articles.
However, I’m ready for something more. I’m ready to test my theories and to expose what I have found to my peers for scrutiny and criticism.
I was once told that the only things you really know are those you study and find out for yourself.
I think there is a lot of truth in this statement. If my ultimate goal is to add to the body of knowledge in sport science around the adaptive process to strength and power training, I have to move from being an independent practitioner who can make as many unsubstantiated claims as he wishes to being a real scientist of my craft.
This may seem like a bit of an idealistic pursuit given my profession. I mean, let’s face it – strength and conditioning for elite sport and fitness are not exactly the most rigorous disciplines when it comes to delivering information that is unbiased and obtained with integrity.
The reality is that studying elite sport is challenging. We have access to a small and very unique subject pool, and the classic double-blind randomized controlled trial with a reasonable sample size of averagely trained individuals is highly limited in its application to elite sport.
In November, I presented at the Australian Strength and Conditioning Association’s International Conference on Strength Training. At the banquet, Dan Baker, the very colourful and well-respected president of the ASCA, said to me: “If I see one more person trying to apply results from a study done on untrained college students to elite athletes, I’m going to lose it!”
We are caught between a rock and a hard place when it comes to studying the niche of high performance sport.
Sport science is tough to do well. I know this firsthand because I’ve been dabbling in this field since 2003, trying my best to quantify what really impacts the performance of my athletes. But I do believe it’s possible. The Australians do a great job of this, and it’s no wonder they punch well above their weight at the Olympic Summer Games.
If I truly believe in advancing the body of knowledge in some sort of reputable and productive fashion, then there is little room for my sole sources of knowledge to be someone’s blog, a scientific publication, or a weekend certification course.
There comes a point in time when our theories and ideas need to be made into some sort of testable hypothesis. The results of this test need to be reported to our peers, scrutinized, and ultimately weighed against the current body of evidence.
This is the process that yields new paradigms and new ways of thinking that can stand the test of time.
I see this process unfolding every day in my PhD research group. Most of the group members are at the forefront of understanding the cellular and sub-cellular nature of muscular contraction. The fruits of their research are pushing the boundaries of knowledge and theory around muscular contraction.
It’s inspiring to see the scientific process in its pure form as new phenomena are discovered.
Now, this sounds like some sort of peaceful oasis of discovery and high fives, but I can assure you it is far from that.
In fact, the other day I saw a very charismatic presentation by a notable scientist. I have to admit I was somewhat taken by his presentation. It just seemed to make sense, and much to my own personal disappointment I went from a mindset of critical thinking to acceptance.
Regrettably, I asked a question that was vague and had nothing to do with the data he presented. A substantial amount of the question-and-answer period got consumed by his response, and we never really got into the important stuff. I skipped the question that would have scrutinized his results and his conclusions, and went straight to the vague, brain-candy, philosophical question… my bad.
What he had presented was a nice concept… it was interesting, entertaining, and worthy of a spot in a TV documentary… however, he did not provide compelling evidence to support his conclusions.
What I can say upon careful reflection is that moving from a critical thinking mindset to one of acceptance is the kiss of death for anyone in a science-based profession. Accepting ideas, theories, and results at face value has the potential to throw us very far off course.
Nothing in my research group is ever taken at face value. There is a general feeling that even if the group finds something novel, it MUST be independently verified by other research groups before it is seen as fact.
The group presents data and rigorously dissects every aspect of the methodology, results, and conclusions.
Could the presenter really measure what he or she intended to measure?
Does the measurement technique provide adequate precision?
Do the numbers make sense?
I mean do the numbers really make sense?
Just because a confidence interval or p-value gets reported, or a really pretty plot with nice colours and convincing trends gets shown, no one, and I mean no one, in the group takes it at face value.
I am always amazed and impressed at the questions and criticisms that arise from my supervisor following what seems to be a very convincing presentation.
The skill of diving into the methods and results of a study, critically thinking about what has been presented, and asking yourself “is this really the case?” is one that needs to be continually developed and fostered within a group.
Failing to rigorously scrutinize our peers’ work leads many sport scientists and strength coaches astray. Not only are we bombarded by shoddy one-off studies taken in isolation, but we are also exposed to guru knowledge:
- I bench 800 lbs, so I’m an expert.
- I’m 4% body fat, so I’m an expert.
- I power clean 180 kg, so I’m an expert.
- I train a professional athlete in a highly skill-based sport like NHL hockey or NFL football, so I’m an expert.
We stop considering the body of evidence, boundary of knowledge, and where the claims and conclusions fit with what is known.
We start skipping to the Practical Applications or Conclusion section of a single paper as the final authority on a training method, nutritional strategy or physiological mechanism.
We never ask to see the results slide again and pose the question: “Do your conclusions actually fit with what your data shows?”
In short, we just trust that what the presenter, study, recommendation, or expert claims can be taken at face value.
I call this the Headline Science Syndrome. Here’s how it works on a large scale:
- A one-liner title gets bounced out into twitter-ville referencing some dramatic conclusion.
- “A new study shows a relationship between variable x and variable y!”
- The buzz happens on email and in conversations.
- It hits the mainstream media and gets air time just after the segment on all the horrors in the world and before the video of a golden retriever who can bark the alphabet.
The segment ends, and those who have just consumed this nugget of “information” in one whole bite, without any sort of active digestion, are left questioning their very existence and how everything they have come to know up to this point could be completely wrong.
For the strength coach, it is tempting to just read the conclusions of a scientific paper and take them at face value. It’s great to sit back and consume information like a snake eating a rat… you swallow it whole and leave the digestion to a later time through some sort of passive process. Screw the active digestion, where you examine and scrutinize what has been presented.
Gluttonously consuming information in this fashion makes us feel like we are learning something. It’s brain candy. It gives us something to tweet, cite, quote, and throw out to the world as fact, with very little downside in terms of effort and absolutely ZERO upside in terms of advancing anyone’s understanding of the body of knowledge.
I’m going to suggest that a true scientist of his craft will not only take the time to chew his “information meal” thoroughly but will also attempt to test his empirically obtained theories and beliefs in some sort of systematic fashion.
I think it is fair to say that science is not the be-all and end-all… I would be the first to attest to this. Science will always trail behind what happens in the gym and on the field. Empirical evidence will always be easier to obtain.
However, the important step is to transfer what we observe empirically into some sort of testable hypothesis, to see if what we observe through our experience really holds up, not just in our own studies but through the rigorous scientific study of others.
The ensuing evaluation of the results of our own studies and the studies of others needs to be rigorous and heavily focused on how the study was done, the numbers that came about, and whether or not the numbers really support the conclusions.
It is only in this way that we can truly advance the body of knowledge in sport science, and ensure the Science stays front and centre in Sport Science.