Why Measuring Power Led Me Astray

by Matt Jordan on March 8, 2013

Article Overview: 1355 words on assessing strength abilities and mechanical power. I focus on four reasons why, early in my career as a young strength coach, quantifying the impact of my programs turned out to be tougher than I expected.

In this article, there are four main lessons for the young strength coach:

  1. Physiological factors like maximal muscle power and the outputs we measure from various movements, like vertical jump mechanical power, aren’t the same thing.
  2. Mechanical output in the vertical jump can remain depressed even after unloading periods due to the persistence of peripheral fatigue – thus, vertical jump performance variables may be great markers of neuromuscular readiness.
  3. Vertical jumping is a great test for assessing explosive strength abilities – but it is still a movement that may or may not be related to an athlete’s sport-specific requirements.
  4. Shoot for gold standard assessments, because measurement error and fundamental mistakes in calculating variables of interest can lead you astray.

Early in my career as a strength coach, I learned about the many different strength qualities. I prefer to call them strength abilities though. Qualities are not quantifiable. Strength abilities are quantifiable.

I also learned about the importance of quantifying strength abilities to quantify the impact of my programs. However, I struggled to find consensus on the topic of assessing strength abilities.

First, let’s take a brief dive into some of the relevant strength abilities. These include maximum strength (the maximum force produced irrespective of time), explosive strength (the ability to rapidly develop force, i.e. the rate of force development), and maximal muscle power (the maximal muscle work rate).

These strength abilities are well-defined in the scientific literature and appear in exercise physiology textbooks. Early research by A.V. Hill in the 1930s depicted the hyperbolic muscle force-velocity relationship and the parabolic power-velocity relationship.

Both maximum strength and maximal muscle power are often discussed in the context of the muscle force-velocity relationship. Here, the maximal force-producing ability of a single muscle fibre or whole muscle can be quantified across a range of shortening velocities.

It is somewhat natural and logical to extend the work of A.V. Hill to the whole-body level. This has led to many different methods for evaluating the whole-body force-velocity or power-velocity relationship.
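Hill's hyperbolic relationship can be sketched numerically. Below is a minimal sketch with normalized, illustrative parameters (the curvature constant k = a/F0 = 0.25 is a typical textbook value, not something taken from this article); the point is that multiplying the hyperbolic force-velocity curve by velocity produces the unimodal power-velocity curve, with peak power falling at roughly one third of maximal shortening velocity:

```python
import numpy as np

def hill_force(v, f0=1.0, v_max=1.0, k=0.25):
    """Hill's hyperbolic force-velocity relation: (F + a)(v + b) = (F0 + a)b,
    with a = k*F0 and b = k*v_max; k ~ 0.25 is a typical textbook curvature."""
    a, b = k * f0, k * v_max
    return (f0 * b - a * v) / (b + v)

v = np.linspace(0.0, 1.0, 1001)   # shortening velocity from 0 to v_max
force = hill_force(v)             # hyperbolic force-velocity curve
power = force * v                 # unimodal ("parabolic") power-velocity curve
v_opt = v[np.argmax(power)]       # velocity at maximal power, ~0.31 * v_max here
```

With these parameters the curve runs from isometric force at zero velocity down to zero force at maximal velocity, and the power peak lands near 31% of v_max.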

I should note that when we evaluate whole body power, we are now referring to the mechanical power generated by the system, which results from the many muscles working together in a synergistic and coordinated manner. Of course this would be highly related to the maximal muscle power ability.

Now back to the story. As a young strength coach it did not take me long to see the positive association between powerful athletes and sport performance. I use the word power here as a qualitative term. This word is also used interchangeably with explosive. The use of the terms powerful and explosive is often bemoaned by purists in biomechanics, but I think you understand what I mean.

The controversy or challenge arose when it came to quantifying mechanical power. I had a mentor who told me not to waste my time trying to evaluate “power”. He encouraged me to focus on assessing muscle mass/body composition and maximal strength. In his opinion, both of these factors were highly trainable, possibly more influential for developing maximal muscle power, and he had never had an experience where assessing “power” enhanced his programming.

On the other hand, I read many scientific papers through the 90s that encouraged strength coaches to evaluate maximal mechanical power in the vertical jump.

My first run at assessing mechanical power led me to purchase a jump mat (contact mat). I found an equation to convert jump height to “power” and I began testing athletes. I then moved on to a position transducer that used the bar velocity and the external load of the system to yield a “power” value.
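For illustration, here is roughly what that first setup computes. A contact mat measures flight time, which converts to jump height via h = g·t²/8 (assuming takeoff and landing occur in the same body position), and one widely cited height-to-power regression is the Sayers equation. The author doesn't say which equation he used; the function names and the 80 kg example below are mine:

```python
G = 9.81  # gravitational acceleration (m/s^2)

def jump_height_from_flight_time(t_flight):
    """Jump height (m) from flight time (s), as a contact mat measures it.
    Assumes takeoff and landing occur in the same body position."""
    return G * t_flight ** 2 / 8.0

def sayers_peak_power(jump_height_m, body_mass_kg):
    """Estimated peak power (W) from the Sayers et al. (1999) regression.
    Note: a statistical estimate, not a direct mechanical measurement."""
    return 60.7 * (jump_height_m * 100.0) + 45.3 * body_mass_kg - 2055.0

h = jump_height_from_flight_time(0.55)       # 0.55 s of flight -> ~0.37 m
pp = sayers_peak_power(h, 80.0)              # estimate for an 80 kg athlete
```

Note that both steps are estimates stacked on estimates, which is part of why "power" values obtained this way can behave unexpectedly.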

Long story short: both methods for evaluating “power” did not seem overly sensitive to the training process. In fact, I would give my athletes a couple of days of rest at the end of a training cycle, measure “power” with one of the two methods above, and I would often find that “power” had decreased from the baseline test.

The first problem was that I had underestimated the persistence of peripheral fatigue, or low-frequency force depression. A couple of days of rest was insufficient for a full recovery in higher-end neuromuscular abilities like maximal muscle power and explosive strength.

The second problem was my equipment and my measurement techniques. The linear position transducer was glitchy. My uncertainty was further intensified when I attended the International Conference on Strength Training in Colorado Springs, and listened to a presenter who casually pointed out that many scientific papers had failed to account for the barbell mass and the system mass when calculating this elusive “power” value with the position transducer method.

At this point I was relatively convinced that I was chasing my own tail when it came to measuring “power”. You will also notice that I continue to put “power” in scare quotes, as I don’t think any of the above-mentioned methods were measuring “power” per se.

By this point, I was nine years into my S&C career, and I was not about to give up on assessing mechanical power. I upgraded my equipment and purchased a force plate. There was some additional math involved to calculate power. After solving for acceleration from the equation F = ma, time integration of the acceleration vs. time curve yields a velocity vs. time curve. Power is the work rate, and work equals force x distance, so power equals force x (distance/time) – that is, force x velocity.
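The force-plate procedure described above can be sketched as follows. This is a minimal illustration, not the author's actual code; the sampling rate, the synthetic constant-force "push", and the assumption that integration starts from quiet standing (v = 0) are mine:

```python
import numpy as np

def power_from_force_plate(fz, body_mass, fs=1000.0):
    """Instantaneous power from a vertical ground reaction force trace.

    fz: vertical GRF samples (N), starting from quiet standing;
    body_mass: mass of the system being accelerated (kg);
    fs: sampling rate (Hz).
    """
    g, dt = 9.81, 1.0 / fs
    acc = (fz - body_mass * g) / body_mass            # solve F = ma for a
    vel = np.concatenate(([0.0],                      # v = 0 in quiet standing
        np.cumsum((acc[:-1] + acc[1:]) / 2.0) * dt))  # trapezoidal integration
    return fz * vel                                   # P = F x v

# synthetic check: a constant push of 2x body weight held for 0.3 s
mass = 80.0
fz = np.full(301, 2 * mass * 9.81)
peak_power = power_from_force_plate(fz, mass).max()   # ~4620 W
```

This also makes the earlier pitfall concrete: `body_mass` must be the full system mass (athlete plus any barbell), or the acceleration and everything downstream of it is wrong.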

So, I finally had my solution or so I thought. I had gold standard equipment, a scientifically supported method for calculating mechanical power, and while there were still the biomechanical purists criticizing this approach, I felt far more confident with my methodology.

But I still struggled to find utility for this metric “power”. Maybe it is because “power” is just a correlate of jump performance, and jump performance is best quantified by calculating an athlete’s takeoff velocity using Newtonian mechanics.
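The takeoff-velocity calculation alluded to here is typically done with the impulse-momentum theorem: the net vertical impulse during the push-off equals the change in momentum, and projectile motion then gives jump height. A minimal sketch, where the synthetic force trace and parameter values are illustrative assumptions:

```python
import numpy as np

def takeoff_velocity(fz, body_mass, fs=1000.0):
    """Takeoff velocity (m/s) from the impulse-momentum theorem:
    m * v_takeoff = integral of (Fz - m*g) dt over the push-off phase."""
    g, dt = 9.81, 1.0 / fs
    net = fz - body_mass * g                           # net vertical force (N)
    impulse = np.sum((net[:-1] + net[1:]) / 2.0) * dt  # trapezoidal integral
    return impulse / body_mass

def jump_height(v_takeoff):
    """Jump height (m) from projectile motion: h = v^2 / (2g)."""
    return v_takeoff ** 2 / (2.0 * 9.81)

mass = 80.0
fz = np.full(301, 2 * mass * 9.81)   # constant 2x body weight for 0.3 s
v = takeoff_velocity(fz, mass)       # ~2.94 m/s
h = jump_height(v)                   # ~0.44 m
```

Unlike the regression-based "power" estimates, this is a direct application of Newtonian mechanics to the measured force trace.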

The other issue is that I might have been confusing the importance of maximal muscle power and mechanical power assessed in the vertical jump. While maximal muscle power can be essential for sport performance, the vertical jump movement from which we are obtaining mechanical power may have nothing to do with sport performance.

I went on to measure mechanical power in various forms of jumping from hundreds of athletes in many different sports.  Here’s a breakdown of the relative peak power (W/kg) by sport and sex.


The gist of things is obvious:  the athletes in the more explosive sports generate more mechanical power in jumping.  If I broke the plot down by performance level you would see that power is also related to the level of performance with more elite athletes tending to be more powerful than development level athletes.

As you look at this, let me remind you of the difference between correlation and causality.  A high degree of correlation means that two variables are related, and I have found that peak power correlates extremely well to performance in a variety of sports.

However, correlation does not imply that variable ‘x’ causes variable ‘y’. This might be the main problem with assessing mechanical power in the vertical jump. While correlated to sport performance and jump performance, mechanical power in the vertical jump isn’t the same as maximal muscle power and often fails to discriminate top performers.

Here are a few more thoughts and considerations on the limitations of measuring mechanical power and why it led me astray:

  1. I will say it again – jump performance is heavily affected by fatigue and training stress so it can remain depressed for the majority of a training cycle and even after a few days of rest (this makes it interesting as a parameter to evaluate neuromuscular readiness or for building reaction curves to training).
  2. Peak power and more importantly vertical jump ability often do not improve at the same rate as competitive performance in many sports (i.e. you get a whole lot better in your sport than you do in a test of peak power in a jumping movement).

And finally,

  3. Lots of athletes don’t participate in sports that require an optimization of peak muscular power.

To this point, peak power in a jump occurs at around 400 ms after the onset of the jump at the very earliest, and more likely somewhere in the 500-700 ms range. This is an eternity compared to many sport movements.

So, if the sport in question involves speed or acceleration (which most sports do), where contact times for the foot against the ground are 100-300 ms, peak power in a countermovement jump is not overly relevant.

That is my opinion, and this is why measuring peak power led me astray early in my career. But like most things, it’s rarely a question of good vs. bad and more a question of how a parameter or assessment is used. The key is the iterative process of determining what matters, measuring what matters, and changing what matters as it relates to sport performance.


Focusing on the “Science” in Sport Science

by Matt Jordan on February 2, 2013

I get the odd email here and there asking why I haven’t posted any blogs over the past several months.

The reason is pretty simple: I’m back in school working towards a PhD in Medical Science, and my spare time is spoken for with research and studying.

So, why would I choose to go back to school at this stage in the game?  My career was going well.  I was comfortable.  I could have kept focusing on growing my business.  However, I felt stagnant, and my ultimate goal to contribute to the body of knowledge in sport science and to mentor and develop the next generation of strength and conditioning coaches in our centre required me to get more training so that I could supervise graduate students.

You may be thinking that a PhD seems like overkill for someone who is feeling stagnant.  I mean there are lots of ways that practitioners in the field of strength and conditioning and sport science try to remain fresh.

For example, I could have taken a professional development course… maybe a course on kettlebell training or something.

I could also attend a few more conferences and maybe double the number of hours I spend reading scientific articles.

However, I’m ready for something more.  I’m ready to test my theories and to expose what I have found to my peers for scrutiny and criticism.

I was once told that the only things you really know are those you study and find out for yourself.  

I think there is a lot of truth in this statement, and if my ultimate goal is to add to the body of knowledge in sport science around the adaptive process to strength and power training, I have to move from being an independent practitioner who can make as many unsubstantiated claims as he wishes to being a real scientist of my craft.

This may seem like a bit of an idealistic pursuit given my profession.  I mean, let’s face it – strength and conditioning for elite sport and fitness are not exactly the most rigorous disciplines when it comes to delivering information that is unbiased and obtained with integrity.

The reality is that studying elite sport is challenging.  We have access to a unique and small subject pool, and the classic double-blind randomized controlled trial with a reasonable sample size of averagely trained individuals is highly limited in its application to elite sport.

In November, I presented at the Australian Strength and Conditioning Association’s International Conference on Strength Training.  At the banquet, Dan Baker, the very colourful and well-respected president of the ASCA, said to me: “If I see one more person trying to apply results from a study done on untrained college students to elite athletes I’m going to lose it!”

We are caught between a rock and a hard place when it comes to studying the niche of high performance sport.

Sport science is tough to do well.  I know this firsthand because I’ve been dabbling in this field since 2003 by trying my best to quantify what really impacts the performance of my athletes. But, I do believe it’s possible.  The Australians do a great job of this and it’s no wonder they punch way above their weight in the Olympic Summer Games.

If I truly believe in advancing the body of knowledge in some sort of reputable and productive fashion, then there is little room for my sole sources of knowledge to be someone’s blog, a scientific publication, or a weekend certification course.

There comes a point in time when our theories and ideas need to be made into some sort of testable hypothesis.  The results of this test need to be reported to our peers, scrutinized, and ultimately weighed against the current body of evidence.

This is the process that yields new paradigms and new ways of thinking that can stand the test of time.

I see this process unfolding every day in my PhD research group.  Most of the group members are at the forefront of understanding the cellular and sub-cellular nature of muscular contraction.  The fruits of their research are challenging the boundaries of knowledge and theory around muscular contraction.

It’s inspiring to see the scientific process in its pure form as new phenomena are discovered.

Now this sounds like some sort of peaceful oasis of discovery and high fives but I can assure you it is far from this.

In fact, the other day I saw a very charismatic presentation by a notable scientist.  I have to admit I was somewhat taken by his presentation.  It just seemed to make sense, and much to my own personal disappointment I went from a mindset of critical thinking to acceptance.

Regrettably, I asked a question that was vague and had nothing to do with the data he presented.  A substantial amount of the question and answer period got consumed by his response, and we never really got into the important stuff.  I skipped the question that would have scrutinized his results and his conclusions, and went straight to the vague, brain candy, philosophical question…. my bad.

What he had presented was a nice concept… it was interesting, entertaining, and worthy of a spot on a TV documentary… however, he did not provide compelling evidence to support his conclusions.

What I can say upon careful reflection was that moving from a critical thinking mindset to one of acceptance is the kiss of death for anyone in a science based profession.  Acceptance of ideas, theories and results at face value has the potential to throw us very far off course.

Nothing in my research group is ever taken at face value.  There is a general feeling that even if the group finds something novel, it MUST be independently verified by other research groups before it is seen as a fact.

The group presents data and rigorously dissects every aspect of the methodology, results, and conclusions.

Could the presenter really measure what he or she intended to measure?

Does the measurement technique provide adequate precision?

Do the numbers make sense?

I mean do the numbers really make sense?

Just because a confidence interval or p-value gets reported or a really pretty plot with nice colours and convincing trends gets shown, no one, I mean no one in the group takes it at face value.

I am always amazed and impressed at the questions and criticisms that arise from my supervisor following what seems to be a very convincing presentation.

The skill of diving into the methods and results of a study, critically thinking about what has been presented, and asking yourself “is this really the case?”  is one that needs to be continually developed and fostered within a group.

Failing to rigorously scrutinize our peers’ work leads many sport scientists and strength coaches astray.  Not only are we bombarded by shoddy one-off studies that are taken in isolation but we are also exposed to guru knowledge.

  • I bench 800 lbs so I’m an expert.
  • I’m 4% body fat so I’m an expert.
  • I power clean 180 kg so I’m an expert.
  • I train a professional athlete in a highly skill based sport like NHL hockey or NFL football so I’m an expert.

We stop considering the body of evidence, boundary of knowledge, and where the claims and conclusions fit with what is known.

We start skipping to the Practical Applications or Conclusion section of a single paper as the final authority on a training method, nutritional strategy or physiological mechanism.

We never ask to see the results slide again to ask the question: “Do your conclusions actually fit with what your data shows?”

In short, we just trust that the claims of the presenter, study, recommendation, or expert can be taken at face value.

I call this the Headline Science Syndrome.  Here’s how it works on a large scale:

  • A one-liner title gets bounced out into twitter-ville referencing some dramatic conclusion.
  • “A new study shows a relationship between variable x and variable y!”
  • The buzz happens on email and in conversations.
  • It hits the mainstream media and gets air time just after the segment on all the horrors in the world and before the video of a golden retriever who can bark the alphabet.
  • The segment ends and those who have just consumed this nugget of “information” in one single whole bite without any sort of active digesting are left questioning their very existence and how everything they have come to know to this point can be completely wrong.

For the strength coach it is tempting to just read the conclusions of a scientific paper and to take it at face value.  It’s great to sit back and to consume information like a snake eating a rat….you swallow it whole and leave the digestion to a later time point by some sort of passive process.  Screw the active digestion where you examine and scrutinize what has been presented.

Gluttonously consuming information in this fashion makes us feel like we are learning something.  It’s brain candy.  It gives us something to tweet, cite, quote, and throw out to the world as fact with very little downside in terms of effort and absolutely ZERO upside in terms of helping to advance anyone’s understanding of the body of knowledge.

I’m going to suggest that a true scientist of his craft will not only take the time to chew his “information meal” thoroughly but will also attempt to test his empirically obtained theories and beliefs in some sort of systematic way.

I think it is fair to say that science is not the be all and end all… I would be the first to attest to this.  Science will always trail behind what happens in the gym and on the field.  Empirical evidence will always be easier to obtain.

However, the important step is to transfer what we observe empirically into some sort of testable hypothesis to see if what we observe through our experience really holds up not just in our own studies but through the rigorous scientific study of others.

The ensuing evaluation of the results of our own studies and the studies of others needs to be rigorous and heavily focused on how the study was done, the numbers that came about, and whether or not the numbers really support the conclusions.

It is only in this way that we can truly advance the body of knowledge in sport science, and ensure the Science stays front and centre in Sport Science.
