The Confidence Trap in Clinical Trials: When Knowing Just Enough Becomes Dangerous
When confidence outpaces competence in clinical research, the risks can be hidden…and costly for trial sponsors
In clinical research, where accuracy, coordination, and compliance are non-negotiable, the greatest threat isn’t always a lack of knowledge. Sometimes, it’s the mistaken belief that we already understand more than we do.
This cognitive bias, known as the illusion of knowledge, occurs when individuals overestimate their grasp of complex concepts or systems. In practice, it means investigators may feel confident in a trial protocol they’ve barely reviewed, or teams might assume timelines are realistic without accounting for common delays. The illusion is subtle but pervasive, and it quietly undermines decision-making, planning, and learning across all phases of a clinical trial.
Recognizing and mitigating this bias isn’t just an academic exercise—it’s essential to avoiding costly missteps and ensuring study success.
Drawing insights from recent Applied Clinical Trials articles, here are critical examples of how the illusion of knowledge manifests in clinical research, along with strategies to mitigate its impact:
Manifestation of the Illusion:
In our most recent article, The Compounding Power of Training, we highlighted a common misbelief: that traditional ‘check the box’ training suffices for trial success.
This overconfidence leads sponsors to underinvest in comprehensive training, assuming that site teams possess adequate knowledge from the outset. Such assumptions often result in protocol deviations, recruitment challenges, and data inconsistencies.
Mitigation Strategy:
Manifestation of the Illusion:
In our article, Optimism’s Hidden Costs, we explored the “planning fallacy,” where teams underestimate the time and resources required for successful trial start-up activities. This overoptimism stems from an illusion of control and understanding, leading to unrealistic timelines and reactive crisis management when challenges arise.
Mitigation Strategy:
Manifestation of the Illusion:
In our article, Rethinking Training, we emphasized that clinical trial staff often favor passive learning methods, believing them to be effective. This preference is yet another manifestation of the illusion of knowledge, where ease of learning is mistaken for actual understanding, leading to poor retention and application of critical information.
Mitigation Strategy:
Manifestation of the Illusion:
In our article, Changing Behavior: Knowing Doesn’t Equal Doing, we described how trial leaders often assume that once training is delivered, comprehension and performance will naturally follow. This illusion persists even when there’s little to no evidence that study teams truly understand the protocol or can apply it correctly under real-world conditions. Without measuring actual learning, sponsors are flying blind—confusing training completion with trial readiness.
Mitigation Strategy:
Ultimately, each of the strategies used to mitigate the illusion of knowledge—whether fostering intellectual humility, encouraging explanatory thinking, or measuring what matters—relies on one foundational commitment: investing in meaningful, evidence-informed training. Not training as a checkbox, but as a deliberate, ongoing process that surfaces false confidence, strengthens true understanding, and prepares teams for the complexity of real-world trials.
When organizations prioritize training that challenges assumptions, encourages curiosity, and builds cognitive resilience, they do more than educate and check the compliance box—they avoid costly delays and increase trial quality. Recognizing and addressing the illusion of knowledge isn’t just good practice; it’s a critical safeguard for trial success.
Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder, ArcheMedX, Inc.
Clinical trials professionals face a host of challenges: for some, it’s the complexity of protocol design; for many, it’s lagging enrollment rates; and for others, it’s the site burden that’s inherent to change management.
I was recently invited to deliver a keynote presentation on embracing change. The audience was more than 100 healthcare improvement and clinical trial professionals who had each committed to participate in a year-long collaborative.
What learning science has taught us about the drivers and predictors of change—and applying those to clinical research practice.
So much has been written about site feasibility over the past decade—even a cursory review of Applied Clinical Trials magazine, for instance, will identify ~20 articles, press releases, and interviews describing site feasibility services, solutions, toolkits, and best practices. And this is just a small snapshot of the “research” and promotion of site feasibility that overwhelms our community. With all that has been written and presented, it seems logical to ask if these site feasibility efforts have provided meaningful benefits.
Perhaps not surprisingly, current site and trial performance data provide a striking answer:
If the goal of site feasibility is to “predict” whether a site will be successful in conducting a study, and the performance data suggests that sites continue to struggle, then maybe it’s time to rethink our principal approach to predicting performance. To be clear, we need to continue to refine and enhance the predictive validity of site feasibility, but there are other evidence-based predictive measures of change that should be used immediately by clinical research professionals to minimize start-up delays, accelerate enrollment, and optimize trial performance.
In each of my prior columns, I’ve drawn lessons directly from cognitive science or behavior science to suggest new ways of approaching clinical trial planning and execution. For this column, I summarize what learning science has taught us about drivers, or predictors, of change—and how we get from learning to doing.
To summarize a mere 50 years of evidence: learning science has demonstrated six characteristics of a learner in a training experience that are highly predictive of whether learning will be applied (i.e., behavior change). The more these characteristics are surfaced during a training experience, the more likely performance is to improve. In other words, we know definitively that how and what a learner thinks while learning is our most accurate predictor of change. So what are these predictive characteristics?
1. Confidence (Self-efficacy)
Learner confidence, or self-efficacy, reflects the belief in one’s ability to execute specific tasks or behaviors. Bandura’s social cognitive theory emphasizes self-efficacy as a central predictor of behavior change, as individuals are more likely to implement new practices when they believe they can succeed. Clinical trial professionals with accurately placed confidence tend to be more proactive and persistent in applying their skills, which leads to sustained improvements in trial execution.
2. Reflection
Reflection involves the process of evaluating experiences and recognizing areas for improvement. Schön’s work on reflective practice underscores that reflective learners tend to bridge the gap between knowledge acquisition and practical application, as they continually integrate new insights into their professional identity. Reflection within training strengthens a clinician’s ability to adapt and apply new practices effectively.
3. Curiosity
Curiosity drives individuals to explore, seek out new information, and remain engaged. Curiosity has been linked to greater persistence in learning and problem-solving. In training, curiosity encourages clinicians to go beyond basic knowledge acquisition, leading to deeper assimilation and broader application of new skills.
4. Grit (Resilience)
Duckworth’s research on grit—defined as perseverance and passion for long-term goals—demonstrates its role in achieving sustained behavior change, even under challenging conditions. Clinical research professionals with high levels of grit are better equipped to navigate difficulties and persist in adopting new behaviors. Demonstrating grit within trial training can, therefore, help professionals remain committed to change despite obstacles.
5. Intention to Change (Commitment to Change)
Ajzen demonstrated that an individual’s intention to change is one of the strongest predictors of actual behavior change. Site training programs that encourage participants to set specific goals or commitments can foster stronger intentions to implement what they have learned. This behavioral intention often translates into meaningful change when clinicians return to practice.
6. Self-regulation
Self-regulation, the capacity to monitor and manage one’s learning process, plays a critical role in behavior change. Zimmerman showed that self-monitoring, self-awareness, and strategic adjustments enable learners to incorporate new knowledge effectively. Site staff who are skilled in self-regulation are better able to apply new techniques consistently and refine their skills over time.
Importantly, these are characteristics of how and what a person thinks as they learn. In prior columns we highlighted the differences in “I-Frame” (the individual) vs “S-Frame” (the system) approaches to change management. Here is another example of just why the “I-Frame” is so critical—execution of a trial protocol ultimately comes down to an individual screening a patient, an individual providing care, and an individual deciding to diligently follow the varied and complex steps in a modern clinical protocol. Therefore, it is the individual or team’s readiness to perform that predicts trial success. And confidence, reflection, curiosity, grit, intention, and self-regulation are the well-established predictive drivers of readiness to perform.
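To make this more tangible, below is a minimal, purely illustrative sketch of how a team might roll these six learner characteristics up into a single readiness indicator. Everything in it (the field names, the equal weighting, and the 0.6 cut-off) is a hypothetical assumption for illustration, not a validated scoring model or a description of any particular product.

```python
from dataclasses import dataclass

# Hypothetical example only: the six predictive learner characteristics,
# each assumed to be scored on a 0-1 scale from in-training assessments.

@dataclass
class LearnerSignals:
    confidence: float        # self-efficacy
    reflection: float
    curiosity: float
    grit: float              # resilience
    intention: float         # commitment to change
    self_regulation: float

def readiness_score(s: LearnerSignals) -> float:
    """Equal-weight average of the six characteristics (illustrative, 0-1)."""
    values = [s.confidence, s.reflection, s.curiosity,
              s.grit, s.intention, s.self_regulation]
    return sum(values) / len(values)

def flag_for_follow_up(s: LearnerSignals, threshold: float = 0.6) -> bool:
    """Flag a learner whose composite readiness falls below an assumed cut-off."""
    return readiness_score(s) < threshold

# Usage with made-up scores for one hypothetical coordinator
coordinator = LearnerSignals(0.8, 0.7, 0.6, 0.9, 0.5, 0.4)
print(round(readiness_score(coordinator), 2))  # 0.65
print(flag_for_follow_up(coordinator))         # False
```

An equal-weight average and a fixed cut-off are only a starting point for illustration; any real program would need to validate its measures, weights, and thresholds against downstream trial performance before acting on them.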
Site feasibility, as an “S-Frame” intervention, has a critical place in planning and conducting clinical trials, but it is simply not a strong predictor of trial success. The success of a study is more than having adequate logistics, resources, or experience—that’s not how performance works. To maximize our ability to predict trial success, we must consider the actual predictive drivers of behavior change. By focusing on these six training-based predictors, we can design training programs that not only convey knowledge but also foster lasting change. Ultimately, the purpose of trial start-up and site training is to empower professionals to act, transforming insights into better trial enrollment and execution, and accelerating advancements in patient care.
Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder, ArcheMedX, Inc.
Take-away: Since there is no common agreement on what an “optimal” outcomes report entails, providers and supporters must recognize and embrace the flexibility and effort required to meet outcomes reporting expectations that vary wildly. This newsletter presents four best practices that simplify this effort, potentially saving the community thousands of hours and millions of dollars annually.
As a result of our initial “Future of Outcomes” outreach in late July, we have had more than 40 discussions with supporters and providers. While many discussions began by simply reviewing the general take-aways from our White Paper, we have since been asked to dig a bit deeper into confidence-based assessment outcomes, behavioral engagement data, and readiness to change measures from specific projects, clinical areas, and even complete provider programs. These practical discussions reveal two things: 1) the community of providers and supporters is collectively starving for more effective ways to tell their outcomes stories, and 2) we are far from having common reporting expectations, and that gap continues to create significant tension and frustration across the community.
For nearly 12 years, ArcheMedX has provided our partners with real-time, 24-hour access to up to six different, intentionally designed Ready outcomes dashboards. These dashboards include hundreds of data visualizations and granular data tables, each designed to simplify outcomes efforts. All data, analyses, and visualizations are aligned with the Outcomes Standardization Project, but they include much, much more. Between the visualizations and the granular data tables, there are few, if any, outcomes-related questions that cannot be answered. And all dashboards are filterable by user Profession, Specialty, and date. There is literally nothing like it in CME/CPD.
But having access to this treasure trove of data is just the beginning. Once the data is generated and the dashboards are consumed, it’s ‘Reporting’ time…and this is where things can often seem paralyzing. Does the data need to be entered manually into a grants management system? Does it need to be summarized into a single-slide template? Does a narrative report need to meet a strict page limit before being uploaded? Can it be presented in person? How about interpretive dance…🕺💃?!
Now rinse-and-repeat for every project, for every supporter, every milestone, month, or quarter. If you ask a dozen supporters how (and when) they ‘require’ outcomes reports, you will get two dozen answers. Go ahead and try it; I did 😉
Over the past few years, we have worked hard with providers and supporters to find the most common ground. This work has led to the creation of outcomes reporting best practices which come directly from the providers who have had the most success and the supporters who are most satisfied. The best practices won’t be a miracle cure 100% of the time, but they’ll ensure a core reporting structure that can then be efficiently altered as needed.
#1 – Begin with the most complete data model and experience you can engineer. Obviously, this is a strength for us at ArcheMedX, but the more robust your data model is, the more seeds you have from which to grow your outcomes story. Invest upfront in the model and you’ll reap the benefits many times over!
#2 – Ensure that your core outcomes methods, measures, and analyses are clearly articulated in your planning and proposals. For every educational intervention, there are literally thousands of questions that could be asked of the data after the fact – but good outcomes science happens within a predefined and structured framework. I spend as much time in data tables as anyone I know – trust me: you have to plan, focus, and ruthlessly prioritize.
#3 – Create a holistic reporting framework that effectively communicates not only your outcomes data, but also your insights (what does it mean?). At this point your core outcomes methods, measures, and analyses should be your guide. Sure, lean into the Outcomes Conceptual Framework (Moore 2009), but also recognize its limitations. Your reporting framework should be principally driven by what you intended to measure and what unique methods and analyses you leveraged.
#4 – Remember the adage, “Contrast creates meaning.” The best insights never come from purely descriptive outcomes, but from how those outcomes compare or contrast with other datasets. Leveraging things like segmentation, effect sizes, or benchmarking data routinely sparks the most meaningful insights, as the short sketch below illustrates.
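To show one way “contrast creates meaning” in practice, here is a small illustrative sketch that computes a standardized effect size (Cohen’s d) for pre- versus post-activity confidence scores and sets it next to a benchmark. All of the numbers, including the benchmark value, are invented for this example.

```python
import statistics

# Hypothetical pre- and post-activity confidence scores (1-5 scale) for one activity.
pre = [2.1, 2.8, 3.0, 2.5, 3.2, 2.7, 2.9, 3.1]
post = [3.6, 3.9, 4.2, 3.8, 4.5, 4.0, 3.7, 4.1]

def cohens_d(a: list[float], b: list[float]) -> float:
    """Cohen's d using the pooled standard deviation of two samples."""
    n_a, n_b = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / pooled_sd

d = cohens_d(pre, post)

# Assumed historical benchmark for similar activities (illustrative only).
BENCHMARK_D = 0.8

print(f"Effect size d = {d:.2f} vs. an assumed benchmark of {BENCHMARK_D}")
```

Because pre/post scores usually come from the same learners, a paired effect size (based on the standard deviation of the change scores) would typically be more appropriate; the pooled form above is shown only to keep the mechanics simple.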
Here is a snapshot of how we recommend our partners bridge the chasm from data to storytelling – this is our Ready Reporting Template, our holistic reporting framework. It’s a nearly perfect balance of simplicity, rigor, and, most importantly, storytelling. Eight to ten slides, each self-contained, with simple design and a balance of data and narrative. (And for the visual design nerds, each slide averages nearly 80% white space!)

More specifically, as an example of just one element of the Ready Reporting Template, here is our new Readiness to Change ‘report’ slide:
Upper right: brief narrative explaining the methods and measures.
Upper left: comparison of aggregate Readiness to Change outcomes vs historic benchmarks.
Bottom left: comparison of Readiness to Change for the top three intended professions.
Bottom right: comparison of Readiness to Change for the top three intended specialties.
Notice that each of the three data visualizations is supported with the most relevant insight.
After 20+ years of pioneering approaches to outcomes science and reporting, both as a supporter and a provider, I’ve come to the conclusion that there is far too much complexity and variability for the community to ever truly standardize a reporting approach. We CAN standardize methods and measures, but NOT the way we tell our stories. My goal is that by sharing these outcomes reporting best practices and our example Ready Reporting Template, we collectively move closer and closer to a viable common ground.
As always, please let me know if you have any questions about the reporting best practices or the Ready reporting template. If you want to chat directly, click this calendar link to find an open time that fits your schedule.
Trial sponsors don’t miss timelines because they’re careless. They miss them because they’re human. Behavioral science calls it the planning fallacy.
We talk a lot about processes and platforms in clinical trials. But in the end, success comes down to understanding and supporting the needs of the people conducting the trial.
We all like to believe trial decisions are purely rational. But behavioral science says otherwise. Trial teams rely on mental shortcuts, or heuristics, to make complex decisions faster. These shortcuts can be helpful in the moment, but they also introduce risk.