
#fakenews

January 9, 2018

We’ve all seen the clips of President Trump bellowing “FAKE NEWS”, with the finesse of a spoiled 5-year-old demanding candy.

There may not be many issues that you and America’s outspoken leader agree on, but I think he does (kinda) have a point when it comes to critiquing information from the media. Not in the conspiratorial, outright falsification of information to promote a political agenda kind of way, but in the interpretation and distribution of information from scientific studies.

 

Health and medical fields are driven by a rigorous scientific process. Research is conducted and studies are published in various scientific journals, where their information and conclusions can be accessed by professionals in related fields. This information tends to be, well, scientific - and its complexity and dryness don't make it well suited for the general population. This is where the media comes in, acting as the middle-man for information distribution. News outlets filter through the research and cherry-pick the pieces they think will be most interesting and relevant. Then they summarize the information and present it in an eye-catching and easily digestible manner. This is where a few problems can occur.

 

The biggest issue is that not all research is created equal. Good, high-quality research has to account for every possible variable and safeguard against the different sources of bias that might skew results. Studies of this nature are incredibly difficult to conduct because they tend to be very time-consuming and expensive. Because of these limitations, moderate and even low-quality research is often published. This isn't to suggest that the research is inherently bad, but it means the findings need to be interpreted with caution, as there is a greater chance that they don't hold true.

 

Unfortunately, research quality is often the first thing lost in translation. A journalist reporting on a scientific study should, first and foremost, consider the methodology of the study. However, analyzing research quality is a skill unto itself and can be quite difficult. The focus tends to shift towards the conclusion of the study, especially if it’s something big and juicy that will make for a good headline.

 

A great example of this happened a few years ago. A science journalist set out to prove how easy it is for poor science to reach the mass population. He conducted a randomized controlled trial (the most trusted type of study), in which an experimental group undergoes an intervention and is compared to a control group that does not. His study looked at the effect of bitter chocolate used as a dietary supplement. The subjects were divided into three groups: two experimental groups that ate the same low-carb diet, one with and one without 1.5 oz of dark chocolate each day, and a third control group that was not given any specific diet. Now, you may already know the outcome of this study. Headlines like "Scientists Say Eating Chocolate Can Help You Lose Weight" and "Need A 'Sweeter' Way to Lose Muffin Top? Eat Chocolate" blew up social media. People were amazed - science says something that is generally considered bad for you is now good! Hooray! Once the study went “viral”, it reached millions of people who now had a legitimate, science-based excuse to eat chocolate.

 

But not so fast... a closer look at the study reveals a few crucial issues. (Keep in mind that the entire study was conducted legitimately, without any tampering or manipulation of data.) The biggest downfall is the sample size - the entire study recruited only 15 people, which left each of the three groups with just 5. Scientific research is analyzed statistically, and the outcomes rely on average values. Individually, we vary enormously, so to get a “true” average we need to sample a fairly large group. An average of only 5 could be completely skewed by just one person's data. Already this is a huge flaw in the study - regardless of the outcome, the sample is simply too small to be certain that the results reflect eating chocolate rather than a few individuals' reactions to the different diets.
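If you want to see just how fragile an average of 5 is, here's a quick simulation sketch in Python (my own illustration, not part of the study - the zero true effect and the 2 kg individual spread are made-up numbers purely for demonstration):

```python
# A rough sketch: how much does a group's average weight change swing
# purely by chance? Assumed numbers: true effect = 0 kg, individual
# standard deviation = 2 kg (illustrative only, not from the study).
import random

random.seed(42)

def average_swing(n, trials=10_000):
    """Simulate many studies of n subjects each and return the most
    extreme group averages seen across all the simulated studies."""
    averages = []
    for _ in range(trials):
        changes = [random.gauss(0, 2) for _ in range(n)]  # weight change, kg
        averages.append(sum(changes) / n)
    return min(averages), max(averages)

for n in (5, 50):
    low, high = average_swing(n)
    print(f"n={n:2d}: group average ranged from {low:+.2f} to {high:+.2f} kg")

# With 5 subjects the 'average' can land around +/-3 kg even though the
# true effect is zero; with 50 subjects it stays much closer to zero.
```

Even with zero real effect, a 5-person group can easily look like it gained or lost a few kilos just by luck of the draw.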

 

The second major issue with this study is its vagueness. Most of the media reports mentioned that eating chocolate reduced weight and helped lower cholesterol - which is what the data showed. BUT the study wasn't specifically designed to look at weight loss or cholesterol; it was just set up to study the effect of eating chocolate in general. All in all, they actually looked at 18 different variables, including sodium levels, overall well-being, and sleep quality. Coupled with the small group sizes, this is the scientific equivalent of throwing sh*t at the wall and seeing what sticks. By monitoring 18 different outcomes, they greatly increased their chances of finding differences between groups, and any of those outcomes could have been spun into a positive story. To make the study more valid, they could have focused on just weight loss as the primary goal, but they would have had to take extra precautionary measures throughout. For example, we know that weight can vary based on hydration levels, time of day and hormone variability, so all of these things would need to be controlled to conclusively say that eating chocolate can influence weight loss.
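A bit of quick math shows how stacked the deck was. If each outcome has a 5% chance of producing a false positive, measuring 18 of them makes it more likely than not that something comes up “significant” by luck alone. Here's a minimal sketch in Python (treating the outcomes as independent for simplicity, which the real study's outcomes wouldn't quite be):

```python
# The multiple-comparisons problem in one calculation: with 18 outcomes
# each tested at the conventional 5% threshold, what is the chance that
# at least one looks "significant" purely by chance?
alpha = 0.05    # false-positive rate per outcome
outcomes = 18   # number of variables the chocolate study measured

p_at_least_one = 1 - (1 - alpha) ** outcomes
print(f"P(at least one false positive) = {p_at_least_one:.0%}")  # ~60%
```

In other words, the study was more likely than not to “find” something, no matter what the chocolate actually did.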

 

Besides the poor research methods, there are also issues with how this study made it past the peer-review process in the first place (read the original article if you want to know more). In a perfect world, this study should never have made it to print. The scientific journal that originally received and published the study is also to blame, as its peer-review process is meant to catch this type of thing so you don't have to. However, the bottom line is that those failsafes don't always work, and bogus information like this is able to reach the masses.

 

As I mentioned before, screening research is a skill and no one can be expected to source and critique the original research for every headline they see. However, here are some helpful questions you can ask to perform your own quick critical assessments:

 

1. How many subjects participated?

 

Check to see how many people participated in the study. There is no set number that makes a study good or bad but in general, the more the better!

 

2. Who was studied?

 

Seeing who a study was performed on can make a big difference in whether the results are even applicable to you. Many studies are conducted on animals, not humans. While this is super important early-stage research, there are a lot of differences between rats and humans, and the findings may have absolutely no relevance to you. Even human studies aren't always generalizable. What were the sex, age, training level, health, etc. of the studied group? The results from a study on elite athletes might not be very relevant for your arthritic grandma.

 

3. What kind of study was conducted?

 

Systematic reviews and meta-analyses compile the results of many individual studies and are accepted as the highest level of evidence. Even so, they are only as strong as the research they are based on. Garbage in equals garbage out.

Randomized controlled trials are the strongest single-study design but, as you saw above, are still susceptible to error. Many other study designs exist, but they are more susceptible to bias.

Beware of “expert opinion”. This is generally the lowest quality of evidence and means that the information is purely anecdotal, as it isn't backed by any formal scientific studies.

 

4. Are the results meaningful?

 

The word “significant” is often used in scientific research, where it has a more specific definition than it does in everyday use. In science, significance is a statistical term. When a significant difference is found, it means that we can be pretty confident (usually 95-99%) that the result is a true difference and not just due to chance. But a significant difference doesn't necessarily mean the result will be significant to you. For example, a study looking at weight loss may have found a 1 kg difference over 6 weeks. Depending on the other data, this number could be considered statistically significant, but for the average person looking to lose weight, 1 kg in 6 weeks' time is hardly meaningful.
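To show the gap between “statistically significant” and “meaningful to you”, here's a small Python sketch (my own illustration - the 500-person groups, the 1 kg true difference and the 3 kg spread are all assumed numbers, and the p-value uses a simple normal approximation rather than a full t-test):

```python
# With enough subjects, even a tiny 1 kg average difference produces a
# very convincing p-value - statistically significant, barely meaningful.
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
diet    = [random.gauss(-1.0, 3.0) for _ in range(500)]  # avg loss ~1 kg
control = [random.gauss(0.0, 3.0) for _ in range(500)]   # avg loss ~0 kg

diff = mean(diet) - mean(control)
se = (stdev(diet) ** 2 / len(diet) + stdev(control) ** 2 / len(control)) ** 0.5
z = diff / se
p = 2 * NormalDist().cdf(-abs(z))  # two-sided p-value, normal approximation

print(f"average difference: {diff:+.2f} kg, p-value: {p:.1e}")
# Prints a p-value far below 0.05 - "significant" - yet the real-world
# effect is still only about 1 kg.
```

The stats can be airtight and the result can still be trivial for you in practice.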

 

5. Where did the research come from?

 

Tracking down where an article was originally published can tell you a lot about its credibility. Large, reputable news agencies are more likely to employ scientific journalists who can accurately present the content. With such rapid access to information, news stories are repeatedly picked up and reproduced by rival companies, and valuable information can get lost along the way, like in a big game of telephone. By the time a research paper has been distilled down to #3 on a Buzzfeed listicle, something has probably been lost in translation.

 

6. Beware of Definitives

 

In your quest to critique information, beware of absolutes and bold claims. Headlines are designed to be sexy and eye-catching to get your attention but in reality, most research is much less glamorous. The outcome from many systematic reviews (the gold standard for evidence) is often (and frustratingly) that there is insufficient evidence and that more research is needed. The world of health and fitness in particular is filled with grey areas. Any black and white definitive claims should be treated with a healthy dose of skepticism.

 

Next time you see a headline that looks a bit too good to be true, give the article a second look. Ask some of these questions to see how the article holds up before you hammer that ‘share’ button and upset Trump.

 

Do you need a hand getting on top of your strength, or dealing with an injury that is hindering you? Book online here, or call us on (08) 9448 2994

 
