Is a Scientific Experiment Always Necessary? | Biolayne

We are lucky enough to live in a time where scientific studies are providing us with a wealth of cutting edge information. World class researchers in the fields of nutrition and exercise science work hard every day to uncover new pathways and strategies that make our lives easier in the pursuit of fitness. Without a doubt, we need these scientists and their valuable discoveries in order to figure out the systems that will guide our training and nutrition programs.

However, the popularity of science in the fitness industry has also made it nearly impossible to make any claim without providing proof in the form of a scientific study. Innovative ideas are often shot down and labeled as “bro science” when a study has not yet been conducted. Unfortunately, scientific studies often lag several years behind practice when it comes to evaluating a hypothesis. And while a peer-reviewed experimental study is certainly a great way to evaluate the efficacy of a given strategy, it isn’t the only way to figure out whether something has merit. This isn’t to say that claims require no scientific justification at all. But ultimately, we need to ask ourselves whether a direct scientific investigation is the only form of evidence we are willing to accept.

 

Some Things Can’t be Studied Perfectly

Many times, people expect that every single question or curiosity needs to be studied through direct investigation to be considered legitimate. Certainly, if it is possible to study the proposed hypothesis, then we should hold some level of skepticism until it has been evaluated. However, not everything can be evaluated through a randomized, double-blind, placebo-controlled study. Plus, sometimes doing so actually changes the nature of what we are trying to observe, so much so that it no longer applies practically to the real world.

Take, for example, the use of reverse dieting in bodybuilding. In order for reverse dieting to work, you have to adapt the changes in macronutrient intake to the individual. Some people will respond well to larger increases in caloric intake while others will require smaller increases. However, in science, we are often forced to treat everyone with the same protocol regardless of their individual differences. This is done in an effort to keep things tightly controlled and eliminate any bias or threat to validity. Giving every subject the same increase in calories from week to week will result in great success for some and disaster for others. In the end, it may look as though reverse dieting has no significant effect.

Even if we were to base the calorie increase on the amount of weight gained/lost from week to week (thus making it more practical), we can still run into some issues. For example, which macronutrients do we increase? As a coach, you find there is an art to figuring out whether you should increase fats, carbs, or protein for each client. This is something that can’t really be done in research because it would confound the results of the study. This means that we have to modify the true nature of reverse dieting in order to study it directly. If the results of this theoretical study showed that reverse dieting had no significant effect on post diet weight control, we would mistakenly label it as useless.


 

RCTs Are Not the Only Way

When it comes to the type of study we like to see conducted, the randomized controlled trial (RCT) is king. Many scientists and science enthusiasts alike will accept nothing less than an RCT as evidence to back up a claim. Start throwing single-case designs, case studies, or observational studies at them and they will laugh in your face. But why is this the case? Why are we unwilling to accept any evidence unless it comes from an RCT?

It all comes down to how valid the results of a study are considered to be. The biggest strength of a randomized controlled trial is that it confers a very high level of internal validity on the results. Essentially, RCTs let you feel very confident that you are observing a true cause and effect relationship between the variables being studied. Other research designs are thought to provide a much lower level of validity. Instead of testing cause and effect directly, observational studies and other designs have to rely on either correlation or bulletproof design in order to infer a causal relationship. This is thought to be a much weaker form of evidence, and the phrase “correlation does not equal causation” is often used to dismiss these study designs in the scientific community. But are the results of these “inferior” study designs really garbage?

In reality, the results of correlational designs can be quite useful in establishing cause and effect. For example, how do we know that smoking is bad for our health and causes cancer? We didn’t conduct an RCT in which one group smoked for 30 years and another refrained from smoking. We had to use observational research to come to the conclusion that smoking causes cancer. Thankfully, a very smart scientist by the name of Sir Austin Bradford Hill proposed nine causal criteria which scientists have used for more than 50 years to establish whether a cause and effect relationship exists [1, 2]. Experiment (as in an RCT) is one of these nine criteria. However, there are still eight other criteria we can use to evaluate a supposed cause and effect relationship.

 

Using Hill’s Criteria in Fitness

So, if we don’t have an RCT that backs up a given training or nutrition claim, we can use these causal criteria to guide our evaluation. The 8 criteria (besides experiment) are as follows:

  • Strength – The stronger the association, the greater the likelihood of a cause and effect relationship.
  • Consistency – The apparent cause and effect relationship can be observed several times, in different people, and in different scenarios.
  • Specificity – You can narrow down the effect to one specific cause and vice versa.
  • Temporality – The supposed cause occurs before the observed effect (cause precedes the effect).
  • Dose-Response – A greater magnitude of the cause generally results in a larger magnitude of effect.
  • Plausibility/Coherence – A plausible mechanism exists that can explain the supposed cause and effect relationship (these two criteria are nearly the same, which is why I have combined them).
  • Analogy – Similar factors that result in a similar cause and effect relationship can add to the body of evidence for the cause and effect relationship in question.

Hill stressed that although experiment is probably the strongest criterion, no single criterion can outshine all the others. This means that if eight of the nine criteria point to a legitimate cause and effect relationship, you can feel reasonably secure believing in that relationship. Furthermore, this may be true even if an experiment does not show a statistically significant effect. This gives us a whole new lens through which we can evaluate science!
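To make the idea concrete, here is a toy sketch of the kind of tally described above. This is purely illustrative (the criteria names and the example evidence dictionary are my own placeholders, and in reality each criterion is a qualitative judgment, not a checkbox), but it shows the basic logic of counting how many criteria a claim satisfies:

```python
# Toy checklist for tallying Hill's criteria. Illustrative only:
# real causal evaluation weighs each criterion qualitatively.
HILL_CRITERIA = [
    "strength", "consistency", "specificity", "temporality",
    "dose_response", "plausibility_coherence", "analogy", "experiment",
]

def criteria_met(evidence: dict) -> list:
    """Return the criteria that the evidence dict marks as satisfied."""
    return [c for c in HILL_CRITERIA if evidence.get(c, False)]

# Hypothetical claim with broad supporting evidence but no RCT yet.
claim_evidence = {
    "strength": True, "consistency": True, "temporality": True,
    "dose_response": True, "plausibility_coherence": True,
    "experiment": False,  # no RCT has been conducted
}

met = criteria_met(claim_evidence)
print(f"{len(met)} of {len(HILL_CRITERIA)} criteria supported: {met}")
```

A claim satisfying five or six criteria despite lacking an experiment would, on Hill’s reasoning, still deserve serious consideration rather than instant dismissal.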

When a coach or fitness influencer makes a claim about a certain strategy or phenomenon, you can rely on these criteria to make a decision for yourself about its legitimacy. Better yet, you can ask them directly to defend their claims by providing evidence within these domains. If they are unable to provide any justification, then you should probably remain skeptical and chalk up their claim as potential bro science.

However, if you or the person defending a claim can provide evidence across multiple causal criteria, then you may want to take it more seriously. Instead of dismissing people because they don’t have an RCT to defend themselves, go through these causal criteria and evaluate things on a deeper level. Doing this is actually a more responsible way to support the scientific method. We should remain skeptical, but not so much that we continually shoot down legitimate ideas that do not yet have experimental evidence.

 

Conclusion

There is no doubt that experimental evidence is one of the most robust ways to support a hypothetical claim. When it is possible to evaluate a hypothesis through direct experimentation, it should certainly be done. However, it is not always possible to perform an experiment. Unfortunately, science has become so biased toward the RCT that alternative forms of causal evaluation have been neglected, but we have to remember that plenty of causal relationships have been discovered without the use of classic experimentation.

This isn’t to say that we should not value experimental evidence or that we shouldn’t evaluate people’s claims in a scientific way. However, using the causal criteria laid out by Bradford Hill can help us evaluate claims that do not yet have experimental evidence behind them. Applying the other eight criteria can go a long way toward looking at things objectively and scientifically. This will help all of us escape the dogmatic nature of RCT reliance and work to progress the scientific community for the better.

 

References

  1. Hill AB. The environment and disease: association or causation? Proceedings of the Royal Society of Medicine. 1965;58(5):295-300.
  2. Fedak KM, Bernal A, Capshaw ZA, Gross S. Applying the Bradford Hill criteria in the 21st century: how data integration has changed causal inference in molecular epidemiology. Emerging themes in epidemiology. 2015 Dec;12(1):14.

About the author

Andres Vargas

Andres is a strength and nutrition coach and the owner of The Strength Cave, an online fitness coaching company. He holds a Master's degree in Exercise Science and is currently studying for a PhD in Sport and Exercise Science. His goal is to blend science and real world application in order to provide the best...
