When Good Science Goes Bad

Let's start this episode with a question.

Is coffee good for you?

I'll give you a minute to gather your thoughts on this one... well, actually, I'm pretty sure we all had a knee-jerk reaction one way or another. So now I'm going to ask, how do you know? What led you to your decision on coffee? Perhaps you read an article that supports your daily pick-me-up, or perhaps you don't care if it's good for you. And I'm sure at least some of you landed on, "well, it depends" because yes, having 1 cup of coffee in the morning is very different from having 37 cups. 'Cause that's just too many cups - I mean, at least I think it is.

Regardless, we've all got these ideas in our heads about facts that we know. Sometimes that information is right, sometimes... well I used to believe I had to worry about sharks in the pool.

The point here is that where we get our information from matters. But more than just going to trusted sources, we've also got to make sure that we aren't falling for easily manipulated data. We've got to go beyond just reading headlines - which I know I am guilty of all the time.

In 2015 there was a lot of buzz around a scientific study showing that chocolate could help you lose weight. Major news outlets all over the world picked up the story, but the problem was that it was junk science. And I'm not mincing words here. This was a study specifically designed to show the flaws in science reporting - as in, they purposely used bad science in their research to demonstrate how some science journalists don't always do their due diligence.

It was actually brilliantly done: they conducted a real study, and their data did show that participants on a low-carb diet who also ate chocolate lost weight 10% faster than the other participants in the study. And that last part is crucial to how this study was able to get these results. It didn't actually show anything about the general population. And that's because one of the methods they used to get this result was a small sample size. The study only had 15 participants, who were split into three groups (one low-carb, one low-carb with a bar of chocolate, and a control) - which means each group only had five people in it.

The problem with a sample size like that is that it is incredibly hard to separate correlation from causation. I'm sure you've heard that phrase before, and it just means that because something happened, we don't always know what caused it. For example, if I went to get ice cream every day for a week and it rained every day I went - well, I'd have a pretty strong argument that me going for ice cream was making it rain, right? Well, perhaps me living in Washington State is a much better explanation for the rain - but nonetheless, from my data I can show that going to get ice cream means that it will rain.

But that's the point here - if instead of just looking at a week's worth of data, I looked at data from over a few months, I'm pretty sure that 1. I'd be sick of getting ice cream and 2. it didn't rain every time I went. When you don't have enough data points, it becomes a lot easier to see trends that aren't really there. While my example takes this to the ridiculous, we still see this kind of stuff in some science reporting. Most journals won't accept studies with fewer than 30 participants now - but that's still a low number. I'm not saying every study needs to be gigantic. Big studies require a lot of funding that most researchers won't have access to. And for more extensive studies to happen, smaller studies need to happen first. Scientists need a jumping-off point to do the big stuff. And as consumers of science, we've got to understand that we can't base everything on those initial studies.
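
If you want to see just how easily a tiny data set shows a pattern that isn't there, here's a quick toy simulation - my own illustration, not anything from the episode or the chocolate study - that generates a week of completely unrelated "ice cream" and "rain" numbers over and over and checks how often they end up looking strongly linked:

```python
# Toy sketch: with only 7 data points, unrelated things will regularly look
# strongly correlated just by chance. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
trials = 10_000
days_per_week = 7
strong_looking = 0

for _ in range(trials):
    ice_cream_trips = rng.normal(size=days_per_week)  # random, meaningless "data"
    rainfall = rng.normal(size=days_per_week)          # completely unrelated to the above
    r = np.corrcoef(ice_cream_trips, rainfall)[0, 1]
    if abs(r) > 0.7:                                   # would look like a "strong" link
        strong_looking += 1

print(f"Weeks that show a 'strong' correlation by pure chance: {strong_looking / trials:.0%}")
# Prints several percent - not rare at all, even though nothing connects the two.
# Swap days_per_week for 90 and those chance "strong" weeks all but disappear.
```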

This isn't to say that just having more participants is going to make a study better. It is also crucial to look at how the data is being collected and what is being looked at. In the "fake" chocolate study, the researchers were looking at 18 different measurements. From weight to cholesterol to sodium to sleep quality and even just overall well-being, the study packed in a ton of variables to look at. This sounds great at the outset because, hey, look at all the data they could be collecting - but in this case, the researchers were doing something called "p-hacking." Basically, when doing statistical models, the p-value is roughly the probability of getting a result at least that extreme by chance alone - yes, it is more complex than that, but we'll leave it at that for simplicity's sake. With the limited number of subjects in the study and all the variables that they were measuring, they were all but guaranteed to find something with statistical significance.
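
To make that "all but guaranteed" concrete, here's a minimal sketch - my own back-of-the-envelope simulation, assuming 18 independent outcomes and the usual 0.05 significance cutoff, not the study's actual analysis - showing how often two groups of five people with no real difference between them still hand you at least one "statistically significant" result:

```python
# Minimal p-hacking sketch: many outcomes + tiny groups = near-certain false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
simulations = 10_000
outcomes = 18      # weight, cholesterol, sodium, sleep quality, well-being, ...
group_size = 5     # five people per group, as in the chocolate study
false_positive_runs = 0

for _ in range(simulations):
    for _ in range(outcomes):
        # Both groups are drawn from the same distribution, so any "effect"
        # the t-test finds here is pure noise.
        chocolate_group = rng.normal(0.0, 1.0, group_size)
        control_group = rng.normal(0.0, 1.0, group_size)
        _, p_value = stats.ttest_ind(chocolate_group, control_group)
        if p_value < 0.05:
            false_positive_runs += 1
            break  # one "significant" finding is all a headline needs

print(f"Studies with at least one 'significant' result: {false_positive_runs / simulations:.0%}")
# Roughly 1 - 0.95**18, or about 60%: more likely than not, with no real effect at all.
```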

Most of the time, researchers aren't trying to use these tricks to deceive people. Scientists will add more variables as they run the experiment because they have limited resources and are trying to pack as much as they can into one study. Unfortunately, as you add in more variables, you also add in a lot more chances for random noise to look like a real result. This is why we've seen a ton of studies with really questionable results that are technically statistically significant.

One group of scientists went as far as showing that listening to the Beatles song "When I'm Sixty-Four" could make you nearly a year and a half younger.

And all of this is important because science is also having a bit of a replication crisis right now. Typically, for a study to really hold weight it needs to be replicated, but most studies never even get a replication attempt. And just real quick - replication means that a different group of scientists take the methodology of one study and do the whole study over again. Replication allows scientists to see whether the results came from the methodology itself or were just a fluke of some unknown variable.

Unfortunately, we're not seeing a lot of replication in science right now because it is tough to get funding to reproduce a study. If you were handing out grants, would you be giving more money to cutting-edge new ideas or to someone trying to confirm something that has already been shown once? I wish we'd give money to more replication studies, but it's human nature to want the new thing.

Again, most scientists aren't doing this on purpose - it is usually just some sort of oversight or a lack of funding that creates this bad science. Of course, there are also bad actors using bad science to push their own agenda - but fortunately for us, we can use our knowledge of how bad science works to sniff out both the honest mistakes and the deliberate ones.


Armed with this knowledge, I know it's easy to conclude that we just need to stop trusting science. And that's a problem. Science is our best resource for reliable information. So instead of turning away from science, we've got to be better at judging science for ourselves.

This means taking more time to understand what we're reading and making sure that we're not passing on bad information. I talked about a few of the ways that we can identify problems with scientific studies earlier - like small sample sizes. But there are a host of things we should be looking at when we're reading about science.

The easiest thing that we can do is go beyond just reading headlines. Unfortunately, in our clickbait world, many journalists use catchy headlines to draw your eyeballs, even if those headlines aren't 100% accurate. And I know I'm guilty of repeating some headline I glanced at while scrolling through Reddit. It's easy to do because, hey, why would someone be reporting on this if it wasn't true. No one would just go on the internet and tell lies, right?

So this brings up the question of whether you should just go look at the scientific papers themselves. Well, if that's your thing, have at it, but let me tell you, most of these papers are not ADHD-friendly. I did my fair share of science classes in college and learned how to read through a science journal, and I still often get lost. In an ideal world, it would be great if we could just look at the original research and it was easy to read - but often, researchers are writing for other researchers. They will write their papers in ways that are easiest for their colleagues to read, not for laypeople. That means these papers are full of jargon and can be hard to parse. Not to mention the difficulty you can have accessing some of this research.

So this means that we're often going to be relying on science reporting - and as we saw earlier with the chocolate study, that can have problems. Our best bet is starting with a trusted source. And I absolutely do not mean major news outlets, which are unfortunately some of the worst offenders at propagating bad science.

And we don't want to just google whatever question we have and hope that whatever pops up first is reliable. As a hint, if the site has ads for celebrity gossip, it probably isn't too reliable.

Beyond that, we can look at what information the articles we are reading are supplying. At a minimum, we're going to want to know the basics of what a study entailed. How many people were involved? How long did it go on? Was there a control group? Also, does the article at least cite the paper it is talking about? If it isn't at least linking to the research, that should be a red flag.

Then we can look at some of the wording of the article to help us understand how much we should trust the results. If we see words like "prove," then we can safely assume that whatever was "proven" absolutely was not proven. Science doesn't prove anything - especially not in just one study. Another red flag comes from reporting that leans on overly jargon-heavy writing. At first this seems like it makes total sense: the reporter is using language that the researchers would use. And in some cases that is true, but often we're also getting fed words that are being used in the wrong way. This may not seem like a big deal, but it can lead to some serious misinterpretations of results.

We've also got to look for things like potential conflicts of interest from the researchers. This is actually a huge issue we often hear about with ADHD because so much ADHD research is funded by the pharmaceutical industry. That is absolutely a conflict of interest; however, ADHD is also one of the most widely studied conditions. Which brings me to my next point: given the lack of replication, one of the best ways to judge whether a study is valid is to look at how much other research exists in that field. Because ADHD has so much research behind it, we can trust a lot more of the data that comes out, because we at least have similar results as corroborating evidence.

Even with all this information, we're still going to fall prey to bad science sometimes - that's okay; we can learn from it and change our opinions. That's even how science works. You start off with a hypothesis and then test that hypothesis over and over again - if your hypothesis is shown to be wrong, then you update your hypothesis; you don't just stick with the bad information because that's what you want to believe. Okay, maybe some of us do cling to bad information sometimes - yes, I'm still sometimes scared that there might be a shark in the pool.


I think more than ever, it is important for us to make sure that we're aware of the statistics we're reading and what they mean. Everyone wants answers about COVID-19, but unfortunately, we're still really early on in this crisis. That means while scientists are scrambling to get a lot of good information out to us, there is also a lot of bad information coming at us as well.

So our best defense right now is making sure that we are being cautious about what information we are following. Make sure that if you are passing information on, that it is coming from a trusted source. Make sure that the information that you are taking in is valid. Be skeptical of things that don't sound right or sound too good to be true. Be skeptical of things that you see coming from your friends on social media - where did they get that information?

One of my favorite podcasts that I've been listening to in order to keep up to date on what's going on with the virus always reminds me at the beginning that what they're discussing today might not be relevant in a week. There's a lot of information, and it's always changing. I mean, just a few weeks ago, the CDC was telling us wearing masks wouldn't help, and they just revised the recommendation so that everyone should be wearing them.

The point here is that when you're getting new information, don't let that be your only source of information. Don't let that information make you panic. Panic is bad - with ADHD we're already prone to impulsivity, and panic is only going to make that worse. If we can slow down and work out what we actually need to do, we're going to be on much better footing and, you know, not be the ones hoarding toilet paper.

This Episode’s Top Tips

  1. While most scientists aren't trying to create bad science, a lack of funding and time can make many studies suspect. To help validate claims, read into the study methodology and see what other research supports those claims.

  2. Make sure that you are reading beyond just the headlines. Many overzealous reporters will embellish headlines to garner more clicks.

  3. Watch out for words like "proved" in science reporting. Science doesn't prove anything; it just produces evidence that supports or refutes a claim.

  4. Be skeptical of claims that seem too good to be true - they usually are.

Links

Shark Pool: Official Trailer (NSFW)

I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How

Use of Cloth Face Coverings to Help Slow the Spread of COVID-19

A Rough Guide to Spotting Bad Science

Best-Laid Plans and COVID-19

Controlling What You Can When Everything Feels Out of Control