TherapyExplained

Understanding Therapy Research: A Consumer's Guide to Evidence

A plain-language guide to understanding therapy research — from clinical trials and meta-analyses to reading research claims and finding reliable information.

By UnderstandTherapy Editorial Team · April 2, 2026 · 18 min read

Why Understanding Research Matters for Your Care

When you decide to start therapy, you are making a healthcare decision. Like any healthcare decision, the quality of information behind that choice matters. But unlike choosing a medication, where your doctor consults prescribing guidelines backed by decades of pharmaceutical research, choosing a therapy approach often depends on what your therapist was trained in, what they are comfortable with, or what they happen to offer — not necessarily what the research says works best for your specific concern.

This is not because therapists are careless. Most are deeply dedicated to their clients. But the gap between what research shows and what happens in practice is real, and it affects the care you receive.

50%

of therapists consistently use evidence-based treatments in their practice
Source: Clinical Psychology Review

That statistic is not meant to alarm you. It is meant to empower you.

79%

of therapy clients improve more than the average untreated person
Source: Wampold, 2001; originally Smith & Glass, 1977

When you understand how therapy research works — even at a basic level — you become a more informed consumer of mental health care. You can ask better questions during consultations, evaluate claims you encounter online, and advocate for treatments that have strong evidence behind them.

Research literacy is not about becoming a scientist. It is about developing the confidence to ask, "What is the evidence for this approach?" and knowing enough to understand the answer.

This guide will give you the foundation to do exactly that. You do not need a science background. You do not need to read academic journals. You just need to understand a few key concepts, and you will be better equipped to make informed decisions about your mental health care.

The Evidence Hierarchy Explained

Not all research is created equal. Scientists organize evidence into a hierarchy based on how confident we can be in the results. Understanding this hierarchy helps you evaluate the claims you encounter — whether from a therapist, a website, or a news headline.

Think of it as a ladder. The higher you go, the more reliable the evidence tends to be, because the research design at each level does more to reduce bias and increase accuracy.

| Level | What It Is | Strength | Limitation | Example in Therapy |
| --- | --- | --- | --- | --- |
| Systematic Reviews & Meta-Analyses | Combines results from multiple studies into a single analysis | Most comprehensive; reduces individual study bias | Quality depends on the studies included | Cochrane review of CBT for depression across 100+ trials |
| Randomized Controlled Trials (RCTs) | Participants randomly assigned to treatment or control group | Strongest individual study design; establishes cause and effect | Expensive; may not reflect real-world conditions | RCT comparing EMDR vs. waitlist for PTSD symptoms |
| Cohort / Observational Studies | Follows groups of people over time without random assignment | Useful for studying long-term outcomes and real-world practice | Cannot prove cause and effect; may have confounding variables | Study tracking therapy outcomes for veterans over 5 years |
| Case Studies / Case Series | Detailed reports on individual clients or small groups | Valuable for rare conditions and generating new hypotheses | Cannot be generalized; no control group | Case report of a novel therapy approach for a rare phobia |
| Expert Opinion / Clinical Experience | Recommendations based on professional judgment and experience | Draws on years of clinical practice and pattern recognition | Subject to individual bias; not systematically tested | A therapist's recommendation based on treating similar clients |

A few important nuances about this hierarchy:

Higher does not always mean better for every question. A systematic review is the strongest form of evidence, but it can only exist for topics that have been studied enough. For newer approaches, underrepresented populations, or rare conditions, lower levels of evidence may be all that is available — and that does not mean the approach is worthless.

The hierarchy describes confidence, not truth. A single well-designed RCT can overturn a widely held expert opinion. Conversely, a poorly conducted RCT can produce misleading results. The quality of the study matters as much as the type of study.

Real-world clinical decisions draw from multiple levels. The best care happens when research evidence, clinical expertise, and your personal values all inform the decision together. We will come back to this idea later in the guide.

Randomized Controlled Trials: The Gold Standard

You will hear the phrase "gold standard" used a lot when people talk about therapy research. It refers to the randomized controlled trial, or RCT. Understanding what an RCT is — and what makes therapy RCTs uniquely challenging — will help you evaluate a large portion of the research you encounter.

How an RCT Works

In a randomized controlled trial, participants are randomly assigned to one of two or more groups. One group receives the treatment being studied. The other group — the control group — receives either a different treatment, a placebo, or no treatment at all.

Random assignment is the key feature. It ensures that the groups are as similar as possible at the start of the study. If the treatment group improves more than the control group, we can be more confident that the treatment itself caused the improvement, rather than some other factor like age, severity of symptoms, or motivation.

Why Therapy RCTs Are Different from Drug RCTs

In drug research, the gold standard involves "double-blinding" — neither the participants nor the researchers know who is getting the real drug and who is getting the placebo. This eliminates the possibility that expectations alone are driving the results.

In therapy research, double-blinding is essentially impossible. You know whether you are sitting in a room talking to a therapist or sitting on a waitlist. Your therapist knows whether they are delivering CBT or supportive counseling. This does not invalidate therapy RCTs, but it is a limitation worth understanding.

Common Control Groups in Therapy Research

Researchers have developed several creative solutions to the blinding problem:

  • Waitlist control: One group receives therapy immediately while the other waits. This shows whether therapy is better than no treatment, but participants on the waitlist know they are waiting, which can affect their symptoms.
  • Treatment as usual (TAU): The control group continues whatever care they were already receiving. This tests whether the new therapy adds value over standard practice.
  • Active comparison: Both groups receive a real therapy, and the study compares which one produces better outcomes. This is the most informative design because it answers the practical question: "Is therapy A better than therapy B for this condition?"

300+

randomized controlled trials support the effectiveness of CBT for various mental health conditions
Source: Hofmann et al., 2012; Journal of Cognitive Psychotherapy

When you encounter a claim that a therapy is "evidence-based," it almost always means that the therapy has been tested in one or more RCTs and shown to outperform a control condition. This is a meaningful benchmark, but as we will explore next, even stronger evidence comes from combining the results of multiple RCTs.

What an RCT Looks Like in Practice

To make this concrete, consider a real-world example. A research team recruits 200 adults diagnosed with major depressive disorder (MDD) from outpatient clinics. Participants are randomly assigned to one of two groups: 100 receive 12 weekly sessions of cognitive behavioral therapy, and 100 are placed on a waitlist to receive treatment after the study concludes.

Before treatment begins, during the 12 weeks, and after treatment ends, all participants complete the Beck Depression Inventory (BDI), a widely used standardized measure of depression severity. The researchers compare changes in BDI scores between the two groups.

The results show that 60% of participants in the CBT group meet the criteria for treatment response — a meaningful reduction in depressive symptoms — compared to 25% in the waitlist group. The study is published in a peer-reviewed journal, where other researchers can examine the methodology, scrutinize the statistical analysis, and attempt to replicate the findings. This kind of study is what clinicians and guideline panels draw on when they recommend CBT as a first-line treatment for depression.
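For readers comfortable with a little code, the comparison at the heart of this hypothetical trial — 60 of 100 responders with CBT versus 25 of 100 on the waitlist — can be checked with a standard two-proportion z-test. The sketch below uses only Python's standard library and the illustrative numbers from the example above; `two_proportion_z` is a helper written here for demonstration, not real trial data or an established package.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)   # combined response rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))           # two-sided normal p-value
    return p_a - p_b, z, p_value

# Hypothetical counts from the worked example above (not real study data)
diff, z, p = two_proportion_z(60, 100, 25, 100)
print(f"Response-rate difference: {diff:.0%}, z = {z:.2f}, p = {p:.2g}")
```

With these made-up numbers, the 35-point difference in response rates is far larger than chance would plausibly produce, which is exactly the kind of result that supports a treatment-works conclusion in a published RCT.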

Meta-Analyses and Systematic Reviews

If a single RCT is a single experiment, a meta-analysis is a study of studies. It takes the results from many individual studies and combines them mathematically to produce an overall conclusion. This matters because individual studies can disagree — one RCT might find that a therapy works while another finds no effect. A meta-analysis helps resolve these contradictions by looking at the full body of evidence.

How They Work

A systematic review begins with a structured search of all available research on a specific question. Researchers define their search criteria in advance, identify every relevant study, evaluate the quality of each one, and summarize the findings. This process is designed to be transparent and reproducible — anyone following the same steps should reach the same conclusions.

A meta-analysis goes one step further by statistically combining the results of the included studies. This produces a single number — called an effect size — that summarizes how large the treatment effect is across all the studies.
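The most common way to combine studies is an inverse-variance weighted average: each study's effect size is weighted by how precise it is (larger, more precise studies count more). The sketch below is a minimal fixed-effect version with made-up numbers; real meta-analyses typically also model between-study variation (random-effects models), which this illustration omits.

```python
def pooled_effect(studies):
    """Fixed-effect meta-analysis: inverse-variance weighted mean effect size.

    `studies` is a list of (effect_size, standard_error) pairs, one per study.
    """
    weights = [1 / se ** 2 for _, se in studies]       # precise studies weigh more
    effects = [d for d, _ in studies]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Hypothetical (effect size d, standard error) pairs from three small trials
studies = [(0.4, 0.20), (0.7, 0.25), (0.5, 0.15)]
print(f"Pooled d = {pooled_effect(studies):.2f}")      # ≈ 0.51
```

Note how the pooled estimate sits closest to the most precise study (the one with the smallest standard error) — this weighting is what lets a meta-analysis resolve disagreement between individual trials.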

What Effect Sizes Mean for You

Effect sizes are reported using standardized numbers, but you can think of them in practical terms:

  • Small effect (d = 0.2): The therapy produces a real but modest improvement. You might notice a slight shift in symptoms, but it is subtle.
  • Medium effect (d = 0.5): The therapy produces a noticeable improvement. Most people who complete the treatment are meaningfully better than those who did not receive it.
  • Large effect (d = 0.8): The therapy produces a substantial improvement. The difference between treated and untreated groups is clear and clinically significant.

For context, most well-established therapies — like CBT for anxiety or EMDR for PTSD — show medium to large effect sizes. This puts psychotherapy in the same effectiveness range as many common medical treatments.

Cochrane Reviews: A Trusted Source

The Cochrane Collaboration is an international organization that produces what are widely considered the most rigorous systematic reviews in healthcare. Cochrane reviews follow strict methodological standards and are regularly updated as new research becomes available.

If a Cochrane review exists for the therapy or condition you are researching, it is one of the most reliable sources of information you will find. Their reviews cover many common therapy applications, including CBT for depression, trauma-focused therapy for PTSD, and psychological interventions for chronic pain.

Publication Bias and Why It Matters

Understanding the evidence hierarchy and how to read meta-analyses is important, but there is a systemic problem in scientific publishing that every informed consumer should know about: publication bias.

Studies with positive results — those showing that a therapy works — are significantly more likely to be published than studies with negative or inconclusive results. This creates a distorted picture of the evidence. If ten research teams test a new therapy and only the three that found positive results publish their findings, the published literature makes the therapy look more effective than it actually is.

This is sometimes called the "file drawer problem." Negative studies — the ones that did not find the expected effect — sit in researchers' file drawers, unpublished and unseen. The consequence is that meta-analyses and systematic reviews, which rely on published research, can inadvertently overstate a therapy's true effectiveness because they are working with an incomplete dataset.

The problem extends beyond publication bias. Psychology has faced a broader replication crisis in recent years. When independent researchers have attempted to replicate landmark findings in the field, a substantial number have failed to produce the same results. This does not mean all psychology research is unreliable, but it is a reminder that individual studies, even well-known ones, should be viewed as contributions to an evolving body of knowledge rather than final answers.

Efforts to address these problems are underway and making a real difference. Cochrane reviews, for example, actively search for unpublished data and assess the risk of publication bias in their analyses. Perhaps most importantly, the practice of pre-registration — where researchers publicly register their study design, hypotheses, and analysis plan before collecting any data — is becoming more common. Pre-registration prevents researchers from changing their methods or hypotheses after seeing the results, which is one of the key drivers of misleading findings.

When Formal Research Is Limited

The evidence hierarchy is a useful tool, but it has a significant blind spot: it works best for treatments and populations that have been heavily studied. Many important areas of mental health care lack the volume of research needed for systematic reviews or large-scale RCTs.

Why Gaps Exist

Clinical research is expensive. A single well-designed RCT can cost millions of dollars and take years to complete. As a result, research funding tends to flow toward the most common conditions (depression, anxiety, PTSD) and the most widely used therapies (CBT, in particular). This leaves substantial gaps:

  • Diverse populations. Most therapy RCTs have been conducted with predominantly white, middle-class, Western participants. Evidence for how well these same therapies work across different racial, ethnic, cultural, and socioeconomic groups is growing but still limited.
  • Rare conditions. Conditions that affect smaller numbers of people rarely attract the funding needed for large trials.
  • Newer approaches. Emerging therapies — such as psychedelic-assisted therapy — may show promising early results but lack the depth of evidence that established treatments have accumulated over decades.
  • Complex presentations. Many people seeking therapy have multiple co-occurring conditions (e.g., anxiety and substance use, or depression and chronic pain). Most RCTs study single conditions in isolation, which does not reflect the complexity of real clinical practice.

The Role of Clinical Experience and Observation

When RCTs are limited, other forms of evidence become especially important. Case studies, clinical observations, and the accumulated experience of skilled practitioners provide valuable information — particularly for populations and conditions that the research has not yet caught up with.

An experienced therapist who has worked with hundreds of clients dealing with a specific issue brings pattern recognition and clinical judgment that no study can fully capture. This does not replace research, but it complements it.

For a deeper look at how to evaluate treatments that are still building their research base, see our guide on how to evaluate emerging therapies.

Efficacy vs Effectiveness

When reading therapy research, you will sometimes encounter a distinction between efficacy and effectiveness. Efficacy refers to how well a therapy works under the controlled conditions of a clinical trial — standardized protocols, carefully selected participants, supervised therapists, and defined treatment lengths. Effectiveness refers to how well that same therapy works in the real world, where clients have multiple diagnoses, therapists have varying levels of training, sessions may be less frequent, and life circumstances complicate treatment.

RCTs often exclude people with substance use disorders, active suicidality, multiple co-occurring conditions, or unstable living situations. As a result, a therapy proven efficacious in a trial may be somewhat less effective in everyday clinical practice. This gap is normal and expected — it does not mean the therapy fails in the real world, but it does mean that real-world outcomes may be more modest than headline trial results suggest.

The APA Three-Circle Model

The American Psychological Association recognized this complexity in their definition of evidence-based practice. Rather than defining it as simply "using treatments that have RCT support," they describe it as the integration of three elements:

  1. Best available research evidence. The strongest research that exists, whatever form it takes for the specific question at hand.
  2. Clinical expertise. The therapist's training, experience, and professional judgment, including their ability to assess a client's unique situation and adapt treatment accordingly.
  3. Patient values and preferences. Your individual characteristics, cultural background, personal values, and preferences for how you want to be treated.

This model acknowledges that evidence-based practice is not a rigid prescription. It is a framework for making informed decisions that honor the complexity of real people seeking help for real problems. For a closer look at how these categories play out in practice, see our article on evidence-based vs evidence-informed therapy.

How to Read a Therapy Research Claim

You do not need a PhD to evaluate a research claim. Whether you are reading a news article, browsing a therapist's website, or hearing about a new treatment from a friend, a few simple questions can help you assess the quality of the evidence: Who was studied, and how many people? Was there a control group? Have the results been replicated? Who funded the study? And does the claim match what was actually measured?

These questions are not about becoming cynical or dismissive of therapy research. Most published therapy research is conducted in good faith by dedicated scientists. But applying these filters helps you distinguish strong evidence from preliminary findings, marketing claims, or well-intentioned but overstated conclusions.

Common Red Flags in Research Claims

Watch out for these warning signs when evaluating claims about therapy:

  • "Studies show" without citing specific studies. Vague references to research are often a sign that the evidence is weaker than implied.
  • Dramatic claims of cure or guaranteed results. Ethical therapy researchers are careful about their language. No responsible study claims to "cure" depression or "eliminate" anxiety entirely.
  • Cherry-picking. Citing one favorable study while ignoring several unfavorable ones is misleading. Look for meta-analyses that summarize the full body of evidence.
  • Confusing correlation with causation. "People who do yoga report less anxiety" does not mean yoga reduces anxiety. It might mean that less anxious people are more likely to do yoga.
  • Testimonials as evidence. Individual success stories are powerful but not scientific evidence. They represent one person's experience and cannot account for the many factors that influence outcomes.

For a more detailed breakdown of warning signs, see our article on how to spot pseudoscience in therapy.

Where to Find Reliable Information

When you want to look up the evidence behind a therapy approach, not all sources are equally trustworthy. Here are the most reliable places to start, along with what each one offers.

APA Division 12 (Society of Clinical Psychology)

The American Psychological Association's Division 12 maintains a list of psychological treatments with strong research support, organized by condition. Their website rates treatments as having "strong," "modest," or "controversial" evidence and provides plain-language summaries. This is one of the best starting points for consumers who want to know which therapies have the strongest evidence for a specific condition.

Cochrane Library

As mentioned earlier, Cochrane produces the most rigorous systematic reviews in healthcare. Their mental health and neuroscience section covers therapy interventions for depression, anxiety, PTSD, schizophrenia, and many other conditions. Every review includes a plain-language summary written for non-specialists.

SAMHSA Evidence-Based Practices Resource Center

The Substance Abuse and Mental Health Services Administration (SAMHSA) is a U.S. federal agency that maintains a resource center for evidence-based practices. Their database includes therapy programs, clinical guidelines, and prevention tools that have been evaluated for effectiveness. It is particularly useful for finding information about substance use treatment and community mental health programs.

NICE Guidelines

The National Institute for Health and Care Excellence (NICE) is a UK organization, but their clinical guidelines are respected internationally. NICE guidelines for mental health conditions provide detailed, evidence-based recommendations for treatment — including specific therapy approaches, recommended session numbers, and when medication should be considered alongside therapy. Their recommendations tend to be conservative and well-supported.

PubMed and Google Scholar

For those who want to go directly to the research, PubMed (maintained by the U.S. National Library of Medicine) and Google Scholar are the two largest databases of published research. Many studies are freely available, and most provide abstracts that summarize the key findings. If you find a study that looks relevant, the abstract will usually tell you the sample size, methodology, and main results.

If you are weighing established treatments against newer or alternative options, our article on traditional vs alternative therapy breaks down the key differences and how to evaluate each category.

What This Means for Choosing a Therapist

Understanding therapy research is not an academic exercise. It has practical implications for how you choose a therapist and how you participate in your own care.

Questions to Ask During a Consultation

When you are speaking with a potential therapist, you can use your research literacy to ask better questions:

  • "What approach do you plan to use, and what is the evidence behind it?" A good therapist should be able to explain their approach and reference the research supporting it — even if they use informal language.
  • "Have you had training specifically in this approach?" Evidence-based therapies require specific training. A therapist who says they "incorporate CBT techniques" is not the same as one who has completed a structured CBT training program.
  • "How do you measure progress?" Therapists who use standardized outcome measures (like the PHQ-9 for depression or GAD-7 for anxiety) are engaging in a form of evidence-based practice at the individual level — tracking data to inform treatment decisions.
  • "What will we do if this approach does not seem to be working?" This question reveals whether a therapist is flexible and willing to adjust based on results, which is a hallmark of evidence-based practice.

For a comprehensive list of consultation questions, see our guide on how to interview a therapist.

Balancing Research With Personal Fit

Evidence matters, but it is not the only thing that matters. The therapeutic relationship — the trust, rapport, and collaboration between you and your therapist — is consistently one of the strongest predictors of positive outcomes, regardless of the therapy model used.

The ideal scenario is finding a therapist who uses evidence-supported approaches and with whom you feel comfortable and understood. If you have to choose between a therapist who uses a textbook evidence-based approach but makes you feel uncomfortable, and one who uses a less-studied approach but creates a strong working relationship, the research actually favors the latter in many cases.

30%

of the variance in therapy outcomes is attributed to the quality of the therapeutic relationship — more than the specific technique used
Source: Norcross & Lambert, 2018

Using Evidence as a Starting Point, Not an Endpoint

Think of research evidence as the starting point for a conversation — not the final word. Use it to narrow your search, generate informed questions, and evaluate what you hear. But ultimately, the best therapy for you is the one that combines strong evidence, a skilled clinician, and a good personal fit.

If you are just beginning your search, our guide on how to find a therapist walks you through the full process step by step.

Frequently Asked Questions

What is evidence-based therapy?

Evidence-based therapy refers to treatment approaches that have been tested in controlled research studies and shown to be effective for specific conditions. The American Psychological Association defines evidence-based practice more broadly as the integration of the best available research, clinical expertise, and patient values. In practice, when a therapy is called 'evidence-based,' it usually means it has support from one or more randomized controlled trials. Learn more in our article on what is evidence-based therapy.

Is a therapy only legitimate if it has RCT support?

No. Randomized controlled trials are the strongest form of individual study evidence, but they are not the only valid form. Many effective therapeutic approaches have evidence from observational studies, case series, and clinical practice. The APA model recognizes that clinical expertise and patient preferences are also essential components of evidence-based practice. A therapy without RCT support is not automatically ineffective — it simply has not been tested in that specific way yet.

How can I tell whether a therapist uses evidence-based methods?

Ask them directly. A therapist using evidence-based methods should be able to name the approach they are using, explain its core principles, describe the research that supports it, and tell you how they measure progress. You can also cross-reference what they tell you with resources like APA Division 12 or NICE guidelines. If a therapist cannot articulate why they are using a particular approach, that is worth noting.

Can I trust therapy information I see on social media?

Social media can be a useful starting point for learning about therapy topics, but it should not be your primary source. Posts are often oversimplified, lack citations, and may reflect the poster's personal experience rather than scientific consensus. If something you see on social media interests you, look for the original study or a systematic review on the topic through PubMed, Cochrane, or APA Division 12 before drawing conclusions.

Why do some therapies have less research behind them than others?

Several factors contribute. Research is expensive and time-consuming, so funding tends to concentrate on the most widely practiced approaches. Some therapies emerged from clinical traditions that historically valued case studies and theoretical writing over controlled trials. Others are newer and simply have not had time to accumulate a large evidence base. Limited research does not mean a therapy is ineffective — it means we have less certainty about its effectiveness compared to more heavily studied approaches.

What is the difference between evidence-based and evidence-informed therapy?

Evidence-based typically means a treatment has direct research support from controlled studies demonstrating its effectiveness for specific conditions. Evidence-informed is a broader term meaning the practitioner draws on research findings to guide their clinical decisions, even if the specific intervention has not been tested in a formal trial. Both approaches value research, but evidence-based implies a stronger and more direct link between the treatment and supporting studies.

Put Your Knowledge Into Action

Understanding research helps you make better decisions about your mental health care. Use what you have learned here to ask informed questions and find a therapist who aligns with the evidence.
