SPORTSCIENCE · sportsci.org

News & Comment / In Brief

Exercise Physiology: the Last 2500 Years

Will G Hopkins, College of Sport and Exercise Science, Victoria University, Melbourne, Australia. Email. Reviewer: Frank Katch, Santa Barbara, California. Sportscience 18, i, 2014 (sportsci.org/2014/inbrief.htm#history). Published May 2014. ©2014

During the first two years of the Sportscience site, Frank Katch contributed a series of original and insightful articles on history makers in the science of sport and exercise nutrition. Frank has now officially retired, but he still actively co-authors the popular McArdle, Katch and Katch text, Exercise Physiology: Nutrition, Energy, and Human Performance. On a recent visit to New Zealand he showed me the proofs of the most recent (8th) edition, including the introductory chapter on discoveries and developments in the field of exercise physiology from the time of the ancient Greeks to the 20th century. At my request he gained the publisher's permission to provide free access to the PDF. Here is the link: Introduction–A View of the Past. If the download stalls, see below. You can also access an appendix on landmark publications in exercise physiology and another on famous female scientists. The publisher has even offered a generous 20% discount for Sportscience visitors to purchase the book via this link. Enjoy!

Incredibly, owing to a bug in the Internet Explorer/Adobe combination, downloads of some PDFs stall part-way through. Solution: right-click on the link and Save As… to a convenient location, then open the file.

 

Magnitude-Based Inference Under Attack

Will G Hopkins, Alan M Batterham, College of Sport and Exercise Science, Victoria University, Melbourne, Australia; School of Health and Social Care, University of Teesside, Middlesbrough, UK. Email. Reviewer: Martin Buchheit, Paris Saint-Germain Football Club, Paris, France. Sportscience 18, i-ii, 2014 (sportsci.org/2014/inbrief.htm#MBI). Published Nov 2014. ©2014. Reviewer's Commentary

Magnitude-based inference (MBI) is the approach to making conclusions about sample-based effects that we have promoted over the last decade or so (e.g., Batterham and Hopkins, 2006; Hopkins et al., 2009). In essence, we realized that a sample never produces the exact value of an effect, but for a given effect in a given study, the uncertainty might be acceptable. Uncertainty is represented by the confidence interval or chances of benefit and risk of harm. An effect with acceptable uncertainty is clear, which means its magnitude is sufficiently well defined to be worth reporting in various ways that convey the uncertainty in the magnitude. An effect with too much uncertainty is unclear, and the best conclusion is that you need more data.
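
As a minimal sketch of this decision process, the following Python code computes the chances of benefit and harm for a single effect, assuming a normal sampling distribution, and turns them into a clear/unclear call using the default clinical thresholds of Hopkins et al. (2009) (an effect is unclear if the chance of benefit exceeds 25% while the risk of harm exceeds 0.5%). The function name, the smallest worthwhile change and the numerical values are illustrative assumptions only, not a definitive implementation.

    # A minimal sketch of a magnitude-based decision for one effect, assuming a
    # normal sampling distribution and the default clinical thresholds in
    # Hopkins et al. (2009). Names and numbers are illustrative, not prescriptive.
    from scipy.stats import norm

    def mbi_decision(effect, se, swc, benefit_min=0.25, harm_max=0.005):
        """Chances of benefit/trivial/harm and a clear/unclear call.

        effect : observed value of the effect (positive = better)
        se     : standard error of the effect
        swc    : smallest worthwhile change (threshold for a substantial effect)
        """
        p_benefit = 1 - norm.cdf(swc, loc=effect, scale=se)   # chance true effect > +swc
        p_harm = norm.cdf(-swc, loc=effect, scale=se)         # risk true effect < -swc
        p_trivial = 1 - p_benefit - p_harm
        unclear = p_benefit > benefit_min and p_harm > harm_max
        return {"benefit": p_benefit, "trivial": p_trivial,
                "harm": p_harm, "clear": not unclear}

    # Example: a 1.5% improvement, standard error 0.6%, smallest worthwhile change 0.5%
    print(mbi_decision(effect=1.5, se=0.6, swc=0.5))
    # chance of benefit ~95%, risk of harm well below 0.5%, so the effect is clear

The same chances are what get translated into the qualitative terms (possibly, likely, very likely and so on) when a clear effect is reported.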

Our colleagues who use and understand MBI know that it is far superior to the traditional approach of null-hypothesis significance testing, which addresses the question only of whether the true effect could be null (zero). Generations of statisticians before us have also criticized the null-hypothesis test, but magnitude-based inference appears to be the first practical alternative that properly takes into account the uncertainty arising from sampling variation.

In July this year an article critical of MBI was e-published ahead of print in Medicine and Science in Sports and Exercise, authored by Welsh and Knight (2014).  We wrote a letter to the editor pointing out all the mistakes and deficiencies in the article, but the journal has a policy that letters about articles cannot be published until the article itself appears in final form in the journal, in this case next April. Meantime we are hearing from colleagues that reviewers of their manuscripts are citing Welsh and Knight as sufficient reason to insist on removal of MBI. 

We cannot publish our letter verbatim here, but to reassure researchers who use MBI and to provide them with something to address the concerns of reviewers, we are summarizing the most important points, as follows…

    MBI is definitely not a form of null-hypothesis significance testing (NHST). In particular, the Type 1 and 2 errors of MBI are conceptually distinct from the Type I and II errors of NHST. MBI Type 1 is the risk of declaring a marginally harmful effect beneficial; NHST Type I is the chance of declaring a null effect significant. MBI Type 2 is the chance of declaring a marginally beneficial effect non-beneficial; NHST Type II is the chance of declaring a marginally beneficial effect non-significant. (The simulation sketch after this list illustrates how differently the two Type-1 errors are defined.)

    The rates of so-called false discoveries of clear substantial effects, when the true effect is null, are actually much lower than those Welsh and Knight presented. In any case, when such discoveries are presented as "possibly" substantial (and therefore also possibly trivial), they are arguably not false.

    The interpretation of the confidence interval as the likely range for the true effect is Bayesian and valid (Burton, 1994; Burton et al., 1998; Spiegelhalter et al., 2004).

    MBI is actually a Bayesian form of inference, specifically a "Reference Bayes" method, in which the conventional confidence interval is combined implicitly with a non-informative prior belief (Burton et al., 1998). A non-informative prior is appropriate, because prior information is usually too vague to quantify in a trustworthy fashion (Burton et al., 1998).

    Contrary to what Welsh and Knight imply at the end of their article, we have not ignored the issues of data structure, multiple covariates, and the distribution and scale of the outcome variable. Incredibly, they even imply that we have not attended to effect size.

    The sample-size estimates in the spreadsheet at Sportscience are correct. Those for NHST were checked with G*Power (Faul et al., 2007); the check on MBI sample sizes is the fact that the chosen Type 1 and 2 errors are equal to the chances of harm and benefit shown in the spreadsheet for a marginally clear outcome.

    With suboptimal sample sizes, clear effects are more frequent than statistically significant effects. Researchers will therefore get more of their studies into print, and publication bias will decline, if a clear effect rather than a significant effect is the criterion for publication.

    Welsh and Knight suggest that confidence intervals or a fully Bayesian method (i.e., with an informative prior) should be used to make inferences, but like everyone else who criticizes NHST, they do not say how researchers should make decisions about their effects.
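
To make the contrast in the first point above concrete, here is a Monte Carlo sketch of how the two kinds of Type-1 error can be estimated for an illustrative two-group comparison. The per-group sample size, the smallest worthwhile change and the 25%/0.5% clinical thresholds are assumptions for illustration (drawn from the defaults in Hopkins et al., 2009); they are not the scenarios analysed by Welsh and Knight, and the code is a sketch rather than a definitive implementation.

    # A Monte Carlo sketch contrasting the MBI Type 1 error (marginally harmful
    # true effect declared beneficial) with the NHST Type I error (null true
    # effect declared significant). Design, sample size and thresholds are
    # illustrative assumptions.
    import numpy as np
    from scipy.stats import norm, ttest_ind

    rng = np.random.default_rng(1)
    sims, n, swc = 20_000, 15, 0.2      # per-group n; effects in standardized (SD) units

    def declared_beneficial(diff, se):
        """Clinical MBI call: beneficial if chance of benefit >25% and risk of harm <0.5%."""
        p_benefit = 1 - norm.cdf(swc, loc=diff, scale=se)
        p_harm = norm.cdf(-swc, loc=diff, scale=se)
        return p_benefit > 0.25 and p_harm < 0.005

    def error_rate(true_effect, decide):
        """Proportion of simulated studies in which the decision rule fires."""
        count = 0
        for _ in range(sims):
            a = rng.normal(0.0, 1.0, n)            # control group
            b = rng.normal(true_effect, 1.0, n)    # experimental group
            diff = b.mean() - a.mean()
            se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
            count += decide(a, b, diff, se)
        return count / sims

    # MBI Type 1 error: true effect marginally harmful (-swc), declared beneficial
    mbi_type1 = error_rate(-swc, lambda a, b, d, se: declared_beneficial(d, se))
    # NHST Type I error: true effect null, declared significant at the usual 5% level
    nhst_type1 = error_rate(0.0, lambda a, b, d, se: ttest_ind(b, a).pvalue < 0.05)
    print(f"MBI Type 1 ~ {mbi_type1:.3f}   NHST Type I ~ {nhst_type1:.3f}")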

We have seen the response of Welsh and Knight to our letter, but it amounts only to a denial of our assertions about their mistakes.

Reviewer's Commentary

 

Batterham AM, Hopkins WG (2006). Making meaningful inferences about magnitudes. International Journal of Sports Physiology and Performance 1, 50-57. Available here.

Burton PR (1994). Helping doctors to draw appropriate inferences from the analysis of medical studies. Statistics in Medicine 13, 1699-1713.

Burton PR, Gurrin LC, Campbell MJ (1998). Clinical significance not statistical significance: a simple Bayesian alternative to p values. Journal of Epidemiology and Community Health 52, 318-323.

Faul F, Erdfelder E, Lang AG, Buchner A (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39, 175-191.

Hopkins WG, Marshall SW, Batterham AM, Hanin J (2009). Progressive statistics for studies in sports medicine and exercise science. Medicine and Science in Sports and Exercise 41, 3-12. Available here.

Spiegelhalter DJ, Abrams KR, Myles JP (2004). Bayesian Approaches to Clinical Trials and Health-Care Evaluation. Wiley: Chichester, p. 112.

Welsh AH, Knight EJ (2014). "Magnitude-based inference": a statistical review. Medicine and Science in Sports and Exercise [Epub ahead of print, 21 July].

 

———–