We’re diving right back into the world of ADHD research, continuing on from what we were talking about a few episodes back. In this episode, we’re going to be more focused on what goes into making ADHD research reliable. I go in-depth into what you can expect to find when reading a study and then also into what things to look out for when trying to determine what’s really going on in those studies.
We’ll discuss how to navigate the sometimes confusing world of peer-reviewed journals, why sample sizes matter, and what to watch out for when it comes to conflicts of interest (I mean, everyone is interested in how ADHD research is funded, right?).
This piece was also initially going to cover misinformation, but with how much ended up going into everything else, I’m saving that for next week.
In the last episode, we discussed the kinds of ADHD research, but I think we should also work on understanding what makes good research because not all research is created equal.
When we look into how to evaluate research, often the first thing we hear is that we need to look for research in peer-reviewed journals. Peer review means that other experts in the field—who weren’t involved in the study—have evaluated it before publication. They check whether the research methods are sound, whether the data support the conclusions, and whether the study adds something valuable to the field. For casual readers like us, checking that research is peer-reviewed will probably be the most critical step, because we’re not equipped to make those judgments nearly as well as someone within the field. We’ll talk more about what makes good research in a minute.
Before we go on to discussing what makes good research, though, I want to push back on this idea of directing you to peer-reviewed journals to get your information about ADHD, because I actually don’t think it’s particularly good advice. This isn’t to say that peer-reviewed journals aren’t the best source of ADHD research; it’s just that I don’t think it’s wise to direct someone to peer-reviewed journals as their first source of information. It’s like telling someone the best way to learn to fly a helicopter is by reading the manual. I mean, the manual probably has the best information about how the helicopter works, but it isn’t going to help you with the real-world intricacies of actually flying one.
These studies are often hard to read—researchers often use scientific terminology, abbreviations, and statistical terms that can be hard to understand for someone without a background in the field. It can be incredibly easy to misinterpret what the study’s authors are talking about if you aren’t familiar enough with the underlying research and terminology. An easy example of this comes from the term “statistical significance,” which means the results are unlikely to have occurred by chance. Someone unfamiliar with that terminology could easily misinterpret that language to mean that the results are large or meaningful in a practical sense. Statistical significance means that this is worth looking at because the results are likely not just due to chance, but it doesn’t mean that there is necessarily a significant practical effect. For example, let’s say researchers are looking at an intervention that can reduce impulsivity, and they find the result is statistically significant; however, the actual difference between the intervention group and the control group might be very small—say, the intervention reduces impulsive behaviors by just 5%. So we can say, yeah, this does have an effect, but not in a meaningful way in real-world scenarios. And this is what makes understanding the terminology so important: these words have specific meanings, and if you don’t have the background in what they mean, it is incredibly easy to misinterpret the results.
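To make that concrete, here’s a toy simulation in Python (all the numbers are invented, and it assumes you have NumPy and SciPy available) where a tiny effect still comes out “statistically significant” purely because the sample is enormous:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two made-up groups on a hypothetical impulsivity scale: the
# intervention shifts scores by only half a point (d ~= 0.05),
# but we "recruit" 20,000 participants per group.
control = rng.normal(loc=50.0, scale=10.0, size=20_000)
intervention = rng.normal(loc=49.5, scale=10.0, size=20_000)

t_stat, p_value = stats.ttest_ind(control, intervention)
cohens_d = (control.mean() - intervention.mean()) / np.sqrt(
    (control.var(ddof=1) + intervention.var(ddof=1)) / 2
)

print(f"p-value:   {p_value:.2e}")   # far below 0.05 -> "statistically significant"
print(f"Cohen's d: {cohens_d:.3f}")  # ~0.05 -> a trivially small effect in practice
```

With a gap that small, nobody would notice the difference in day-to-day life, but the p-value alone would still let you call it a “significant” finding.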
And I do read a fair number of studies from peer-reviewed journals while working on the more science-based episodes. But I think the idea of sending you off to read those studies is garbage advice for someone who is just going to be reading that research casually. It’s not because I don’t think you’re capable; it’s just often more work than it’s worth. I’ve taken classes on how to read research studies, and combing through all that information is still a daunting task.
Now, with that said, it's still important to understand what makes good research. When we’re out and about on the internet, we’re going to run into people citing papers and various studies, and if we want to be able to do some of our own due diligence on whether or not that information is credible, we need to know what to look for. While it’s always great when creators come with receipts for their claims, we also need to understand that not all research is created equal and not all research is done in good faith.
Now, if you’re like me and were a science fair kid, you might be fairly familiar with how a research paper is structured, but if not, let’s go over that real quick. After researchers complete their studies, they submit the results to a peer-reviewed journal in the form of a paper. These journals often have fairly strict guidelines for how a paper is formatted, which is why we see a consistent structure across papers. It’s structured this way to ensure that the research is systematic, reproducible, and valid.
Papers always start with an introduction that outlines the rationale behind the study. The introduction explains why the research is necessary, reviews relevant literature, and highlights gaps in current knowledge. It will also highlight the specific question the study seeks to answer or the hypothesis being tested.
Next, you have the methods section, which details how the study was conducted. It will include the number of participants, the study design, the intervention being tested, how relevant outcomes are measured, and what kind of statistical analysis is being used to analyze the data.
Next, we have results, and those are basically what you’d expect. This is where all the data from the study is presented, often in charts and graphs. You’ll hear about the statistical significance and the effect size here, and we often see data on whether participants adhered to the treatment protocol.
We then get the discussion section, where researchers discuss the meaning of their findings in the context of previous research. They explain whether the intervention was effective, the practical significance of the results, and any broader implications. Importantly, in this section researchers should be mentioning limitations of the study, such as small sample sizes, short duration, or potential biases. Suggestions for future research are often included here as well.
Finally, we get the conclusion, where the researchers summarize the key findings and their implications. It basically takes everything above and gives you what this all means.
And then papers will also include a reference section and any supplementary information that might be needed in the context of the paper.
Additionally, a paper is going to include an abstract, which is just a concise summary of the research. It provides readers with a quick overview of the key aspects of the study, helping them decide whether the full paper is relevant to their interests or needs. This is typically the part of a paper you’ll find in something like a Google search. Often journals will let you read the abstract but then pay-wall the rest of the paper—it’s great.
So when we’re looking to assess research, we’re mostly going to be focused on the methods section, because we want to know if the study was actually measuring what they said they were measuring, but we’ll get into some other specifics as well.
As I mentioned earlier, the first thing we want to look for is whether the research was peer-reviewed. Unfortunately, that’s not always the easiest step. While some journals will explicitly mention whether an article has undergone peer review, more often than not you have to look at the journal itself to determine its peer-review policy. Peer-reviewed journals are often recognizable by their titles. For example, well-known peer-reviewed journals include The Lancet (medicine), Nature (science), and the Journal of Attention Disorders (ADHD research).
With that in mind, it also becomes important to understand where non-peer-reviewed work usually comes from. Of course, we have things like magazines, newspapers, blogs, and non-academic books. And that’s fine. Those are often meant to be informative or opinion-driven, but typically they don’t undergo rigorous scientific scrutiny.
But what’s more likely to get mistaken for peer-reviewed work are things like preprints, where researchers share their findings before the peer-review process, or conference papers and posters, where researchers are showing off the early stages of their research. Companies and organizations also often release reports, white papers, and studies that are not peer-reviewed but are used for promotion.
Even within peer-reviewed journals, editorials, opinion pieces, and letters to the editor are usually not peer-reviewed. They often reflect the opinions or interpretations of the author rather than original research.
Non-peer-reviewed sources are often faster to publish and reach a broader audience. They provide value in terms of opinion, accessibility, or preliminary findings, but they should generally be interpreted with more caution. So while these non-peer-reviewed sources can have value, it’s something that, as lay readers, we should be on the watch for.
All right, now let’s get into what we actually want to look at in these research papers. The easiest thing to start with is sample size—which refers to the number of participants involved in the study. As a general rule, larger sample sizes tend to produce more reliable results. When a study involves hundreds or even thousands of participants, we can often be more confident that the findings are representative of the broader population.
But, of course, bigger is not always better. A larger sample size can make it easier to find statistically significant results, but statistical significance isn’t the same as practical or clinical significance. One of the ways this can happen is through a practice called p-hacking, which refers to manipulating data or selectively reporting results in order to achieve that statistical significance. I don’t want to dive too far into the weeds here, but a p-value is a statistical measure that helps researchers determine whether the results of an experiment or study are significant or could have occurred by chance. The threshold is commonly 0.05, meaning that if there were no real effect, results at least as extreme as the ones observed would show up less than 5% of the time by random chance alone.
Researchers can nudge their results toward significance through the kind of analysis they run: cherry-picking their data, ending data collection early, or examining multiple subgroups until one of them pans out. With a large sample, it can sometimes be easier to find statistical significance simply because there is more data that can be sliced up this way.
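If you want to see why subgroup-fishing works, here’s a quick sketch using purely synthetic data with no real effect anywhere in it; slicing one null result into enough subgroups will usually hand you a “significant” finding or two by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A fake trial with NO real effect: both groups are drawn from the
# exact same distribution.
n = 2_000
treatment = rng.normal(size=n)
control = rng.normal(size=n)

# Invent 20 arbitrary subgroups (age bands, study sites, etc.).
subgroup = rng.integers(0, 20, size=n)

false_positives = 0
for g in range(20):
    _, p = stats.ttest_ind(treatment[subgroup == g], control[subgroup == g])
    if p < 0.05:
        false_positives += 1
        print(f"subgroup {g}: p = {p:.3f}  <- 'significant' by chance alone")

print(f"{false_positives} of 20 subgroups crossed p < 0.05 with zero real effect")
```

Run it a few times with different seeds and roughly one in twenty subgroups will cross the threshold, which is exactly what a 5% false-positive rate predicts. An honest paper corrects for the number of tests it ran; a p-hacked one just reports the winner.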
This isn’t to say that this will always be the case with a large study, but it is always something I consider when I see a study with a surprisingly large sample size.
Next up, we also want to consider what kind of study we’re looking at. In clinical research, randomized controlled trials (RCTs) are considered the gold standard. In an RCT, participants are randomly assigned to either the treatment group or a control group, and the outcomes are compared. Another way researchers can reduce bias is by blinding either the researchers or the participants (or both), meaning they don’t know who’s getting the real treatment and who’s getting the placebo.
Of course, not everything can be studied through an RCT, whether because of funding limitations or the scope of what’s being examined. For example, researchers might choose to run an observational study, following a group of children diagnosed with ADHD and a control group of children without ADHD over several years, observing their academic progress without introducing any treatment or changes to their environment. Because the researchers are not manipulating any variables (such as introducing a new treatment or intervention), randomization doesn’t make sense, nor is blinding the researchers as important.
Additionally, you might see cohort studies, where researchers follow a group of people (a "cohort") over time to examine how certain factors (like treatment or environmental exposure) affect outcomes. Or individual case studies where there is an in-depth, detailed examination of an individual or a small group, focusing on their unique characteristics, treatment, or outcomes.
There are also meta-analyses and systematic reviews, which don’t collect new data but instead analyze the results of multiple studies on a particular topic. A meta-analysis statistically combines the findings of several studies, while a systematic review synthesizes the evidence from multiple sources.
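As a rough sketch of what “statistically combines” means, here’s the core calculation behind a fixed-effect meta-analysis, with completely made-up study results: each study’s effect gets weighted by its precision (the inverse of its variance), so bigger, tighter studies pull the pooled estimate harder:

```python
import numpy as np

# Made-up effect sizes (e.g., standardized mean differences) and
# standard errors from five hypothetical studies.
effects = np.array([0.30, 0.45, 0.20, 0.55, 0.35])
std_errs = np.array([0.15, 0.20, 0.10, 0.25, 0.12])

# Fixed-effect pooling: weight each study by the inverse of its
# variance, so more precise studies count for more.
weights = 1.0 / std_errs**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```

Real meta-analyses layer a lot on top of this (random-effects models, heterogeneity checks, publication-bias tests), but this weighted average is the basic engine.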
So while the RCT is the gold standard for doing clinical research, there are also a ton of other ways that ADHD research is being explored. These other types of studies can provide valuable insights that might not be possible with RCTs.
All right, next up we want to work on understanding the methodology—how the study was conducted. Methodology includes how participants were selected, how data was collected, and how the results were analyzed. For example, was the study using objective measures like brain scans or behavioral assessments, or was it relying on self-reports and questionnaires, which might introduce bias?
Peer review is especially important in this area because, as casual readers, we often lack the knowledge to distinguish between good and bad methodology.
That said, there are things we can absolutely be on the lookout for. For example, earlier I mentioned how larger sample sizes aren’t always better; one reason is that really large studies often rely on survey data, which isn’t always an especially objective measure. A better-designed study might combine those ratings with more objective measures, like neuropsychological tests or performance-based tasks, but that, of course, costs money, so it isn’t as likely with a really large study.
In contrast, a small study with only 20 or 30 participants may not provide enough data to make strong conclusions. That sweet spot in size is going to depend heavily on the study design and method.
Okay, let’s move on and hit one of the biggest elephants in the room for ADHD research, which is funding and conflicts of interest. ADHD research is funded through a mix of government, pharmaceutical, non-profit, and public sources. In the US, government agencies like the National Institutes of Health (NIH) provide substantial funding for basic and clinical research, while pharmaceutical companies focus heavily on medication trials. And it’s that pharmaceutical industry funding that draws a lot of criticism, since those companies have a vested interest in the results of the studies they fund. While industry-funded research isn’t automatically bad, it can introduce potential bias. Researchers might feel pressured—whether consciously or unconsciously—to produce results that are favorable to the company funding the study.
And I’m going to slow down here because this is one of the places where I think we see the most criticism of ADHD research. I’ve absolutely heard people say that ADHD was made up by big pharma to sell ADHD meds.
Unfortunately, I can absolutely see how someone could come up with that idea, because there are plenty of examples of these companies putting profits over people. So it makes sense to worry that, because these companies make a lot of money selling ADHD drugs, maybe they don’t have the broader ADHD community’s best interests in mind when funding ADHD research.
Again, this is a place where peer-reviewed journals are super important. Reputable journals require authors to be transparent about where their funding comes from and any relationships they may have with companies or organizations. Transparency here is crucial, as it helps you gauge whether the results could have been influenced by external factors.
It’s also important that we acknowledge that a lot of the research these companies do is government mandated. In the US, the Food and Drug Administration (FDA) requires that, before it approves any of these medications, the companies conduct research proving those medications are safe and effective. I’m not going to say that the system is perfect or even above accusations of corruption. But what’s important to understand is that funding for ADHD research overwhelmingly comes from the federal government, which doesn’t have the same motivational bias about what that research ends up saying.
With that said, one thing to keep an eye out for here is whether a study is part of a consistent body of evidence or is just a single result. Replication is a cornerstone of scientific research. One study might produce promising results, but if those results can’t be replicated in other studies, we can’t fully trust them. This doesn’t mean that we completely disregard those results; it just means that we want to see more research. It’s about looking at the broader context of what is already in the ADHD library of research and understanding that the most reliable findings are those that have been repeated and confirmed by multiple research teams in different settings.
Finally, we want to consider the limitations within a study. Good research doesn’t pretend to be perfect. Look for studies that openly discuss their limitations. Were there issues with the sample size? Was the study conducted in a specific population that might not apply to everyone? Were there potential biases that the researchers couldn’t fully eliminate?
One thing you’ll see in some online discourse about science is creators going after a study because it didn’t look at one particular variable or another. And oftentimes, if you presented that same argument to the researchers behind the study, they would wholeheartedly agree with the criticism. A single research study can’t cover everything, and this is why we need to look at the broader spectrum of research instead of an individual study.
Acknowledging limitations shows that the researchers are being transparent and cautious in interpreting their results. It also helps readers understand how to apply the findings appropriately and not overgeneralize from a single study.
I’m going to wrap things up here because, well, this has ended up being a much longer piece than I was expecting. It’s like when you’re telling a story and you keep having to add more and more context, except I get to go back and add things in, which makes everything flow so much better when I don’t have to keep interrupting myself with “well, I should also tell you about…” Except in this case, I didn’t get to any of what I wrote about misinformation in research, because that’s now an entire fifteen-minute episode of its own, which we’ll be getting into next week.