
Author: Brian S. McGowan, PhD

What We Think We Know: How Overconfidence Derails Clinical Trials

In clinical research, where accuracy, coordination, and compliance are non-negotiable, the greatest threat isn’t always a lack of knowledge. Sometimes, it’s the mistaken belief that we already understand more than we do. 

in Applied Clinical Trials

Volume 34, Issue 3: 06-01-2025

This cognitive bias, known as the illusion of knowledge, occurs when individuals overestimate their grasp of complex concepts or systems. In practice, it means investigators may feel confident in a trial protocol they’ve barely reviewed, or teams might assume timelines are realistic without accounting for common delays. The illusion is subtle but pervasive, and it quietly undermines decision-making, planning, and learning across all phases of a clinical trial. 

Recognizing and mitigating this bias isn’t just an academic exercise—it’s essential to avoiding costly missteps and ensuring study success.

Drawing insights from recent Applied Clinical Trials articles, here are critical examples of how the illusion of knowledge manifests in clinical research, along with strategies to mitigate its impact:

1. Underestimating Training Needs During Trial Startup

Manifestation of the Illusion:
In our most recent article, The Compounding Power of Training, we highlighted a common misbelief: that traditional ‘check the box’ training suffices for trial success. 

This overconfidence leads sponsors to underinvest in comprehensive training, assuming that site teams possess adequate knowledge from the outset. Such assumptions can often result in protocol deviations, recruitment challenges, and data inconsistencies.

Mitigation Strategy:

  • Engage in Explanatory Thinking: Encourage site teams to articulate their understanding of protocols and procedures. This practice reveals knowledge gaps and fosters deeper comprehension.
  • Foster Intellectual Humility: Cultivate a training and learning culture where acknowledging uncertainties is valued, prompting continuous learning and inquiry.

2. Overconfidence in Project Timelines Due to the Planning Fallacy

Manifestation of the Illusion:
In our article, Optimism’s Hidden Costs, we explored the “planning fallacy,” where teams underestimate the time and resources required for successful trial start-up activities. This overoptimism stems from an illusion of control and understanding, leading to unrealistic timelines and reactive crisis management when challenges arise.

Mitigation Strategy:

  • Encourage Curiosity and Dialogue: Promote open discussions about potential obstacles and uncertainties during planning and feasibility phases. This approach enables teams to develop more realistic timelines and contingency plans.
  • Foster Intellectual Humility: Recognize and accept the inherent uncertainties in clinical trials, allowing for more adaptable and resilient planning.

3. Preference for Passive Learning Over Effective Training Methods

Manifestation of the Illusion:
In our article, Rethinking Training, we emphasized that clinical trial staff often favor passive learning methods, believing them to be effective. This preference is yet another manifestation of the illusion of knowledge, where ease of learning is mistaken for actual understanding, leading to poor retention and application of critical information.

Mitigation Strategy:

  • Engage in Explanatory Thinking: Implement training that requires active participation, such as problem-solving exercises, compelling learners to process and apply information.
  • Foster Intellectual Humility: Educate planners and learners about the benefits of “desirable difficulties”—challenging learning experiences that enhance retention—to shift preferences toward more effective training methods.

4. Overconfidence in Training Completion

Manifestation of the Illusion:

In our article, Changing Behavior: Knowing Doesn’t Equal Doing, we described how trial leaders often assume that once training is delivered, comprehension and performance will naturally follow. This illusion persists even when there’s little to no evidence that study teams truly understand the protocol or can apply it correctly under real-world conditions. Without measuring actual learning, sponsors are flying blind—confusing training completion with trial readiness.

Mitigation Strategy:

  • Measure What Matters: Completion doesn’t equal comprehension. Use behavioral measures to reveal understanding, surface confusion, and flag underperformance. Track how well learners retain and apply protocol-critical concepts—not just whether they finished the training. The more precisely you measure, the better your decisions. (A minimal illustrative sketch follows this list.)
  • Foster Intellectual Humility: Recognize that even experienced teams can misunderstand or misapply complex protocols. Make it standard practice to question assumptions, invite clarification, and validate understanding with real data. When you build systems that prioritize insight over assumption, readiness becomes measurable and actionable.
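To make the “measure what matters” point concrete, below is a minimal illustrative sketch in Python that separates training completion from demonstrated comprehension. The record fields, the 0-1 scoring scale, and the 0.8 mastery cutoff are hypothetical assumptions for illustration only, not a description of any particular platform’s data model.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class LearnerRecord:
    # Hypothetical fields for illustration only; not a specific platform's schema.
    learner_id: str
    completed: bool                                                # finished the assigned module?
    assessment_scores: List[float] = field(default_factory=list)  # 0-1 scores on protocol-critical checks
    confidence: float = 0.0                                        # self-reported confidence, 0-1

def readiness_summary(records: List[LearnerRecord], mastery_cutoff: float = 0.8) -> dict:
    """Separate 'completed the training' from 'demonstrated comprehension'."""
    if not records:
        return {"completion_rate": 0.0, "comprehension_rate": 0.0, "overconfident_learners": []}
    completed = [r for r in records if r.completed]
    # Comprehension requires demonstrated mastery, not just completion.
    comprehending = [r for r in completed
                     if r.assessment_scores and mean(r.assessment_scores) >= mastery_cutoff]
    # Flag possible overconfidence: high self-reported confidence without demonstrated mastery.
    overconfident = [r.learner_id for r in completed
                     if r.confidence >= 0.8
                     and (not r.assessment_scores or mean(r.assessment_scores) < mastery_cutoff)]
    return {
        "completion_rate": len(completed) / len(records),
        "comprehension_rate": len(comprehending) / len(records),
        "overconfident_learners": overconfident,
    }

# Example: a small site team (made-up values).
site_team = [
    LearnerRecord("CRC-01", True, [0.90, 0.85], 0.90),
    LearnerRecord("CRC-02", True, [0.55, 0.60], 0.95),  # confident, but low mastery
    LearnerRecord("PI-01", False, [], 0.70),
]
print(readiness_summary(site_team))
```

The design choice worth noticing is that completion, comprehension, and overconfidence are reported as three separate signals rather than a single pass/fail flag, which is exactly the distinction the bullet above argues for.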

Ultimately, each of the strategies used to mitigate the illusion of knowledge—whether fostering intellectual humility, encouraging explanatory thinking, or measuring what matters—relies on one foundational commitment: investing in meaningful, evidence-informed training. Not training as a checkbox, but as a deliberate, ongoing process that surfaces false confidence, strengthens true understanding, and prepares teams for the complexity of real-world trials. 

When organizations prioritize training that challenges assumptions, encourages curiosity, and builds cognitive resilience, they do more than educate and check the compliance box—they avoid costly delays and increase trial quality. Recognizing and addressing the illusion of knowledge isn’t just good practice; it’s a critical safeguard for trial success.

Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder, ArcheMedX, Inc.

Can We Predict Trial Success? From ‘Feasibility’ to Predictive ‘Readiness’

What learning science has taught us about the drivers and predictors of change—and applying those to clinical research practice.

in Applied Clinical Trials

Volume 33, Issue 11: 11-01-2024

So much has been written about site feasibility over the past decade—even a cursory review of Applied Clinical Trials magazine, for instance, will identify ~20 articles, press releases, and interviews describing site feasibility services, solutions, toolkits, and best practices. And this is just a small snapshot of the “research” and promotion of site feasibility that overwhelms our community. With all that has been written and presented, it seems logical to ask if these site feasibility efforts have provided meaningful benefits.

Perhaps not surprisingly, current site and trial performance data provide a striking answer:

  • 70% of trials experience start-up delays
  • 80% of trials fail to meet on-time enrollment
  • 45% of trials miss original projected timelines

If the goal of site feasibility is to “predict” whether a site will be successful in conducting a study, and the performance data suggest that sites continue to struggle, then maybe it’s time to rethink our principal approach to predicting performance. To be clear, we need to continue to refine and enhance the predictive validity of site feasibility, but there are other evidence-based predictive measures of change that should be immediately used by clinical research professionals to minimize start-up delays, accelerate enrollment, and optimize trial performance.

In each of my prior columns, I’ve drawn lessons directly from cognitive science or behavior science to suggest new ways of approaching clinical trial planning and execution. For this column, I summarize what learning science has taught us about drivers, or predictors, of change—and how we get from learning to doing.

From learning to doing: Evidence-based predictors of performance

To summarize merely 50 years of evidence: learning science has demonstrated six characteristics of a learner in a training experience that are highly predictive of application of learning (i.e., behavior change). The more these characteristics are surfaced during a training experience, the more likely performance will improve. In other words, we know definitively that how and what a learner thinks while learning is actually our most accurate predictor of change. So what are these predictive characteristics?

1. Confidence (Self-efficacy)

Learner confidence, or self-efficacy, reflects the belief in one’s ability to execute specific tasks or behaviors. Bandura’s social cognitive theory emphasizes self-efficacy as a central predictor of behavior change, as individuals are more likely to implement new practices when they believe they can succeed. Clinical trial professionals with accurately placed confidence tend to be more proactive and persistent in applying their skills, which leads to sustained improvements in trial execution.

2. Reflection

Reflection involves the process of evaluating experiences and recognizing areas for improvement. Schön’s work on reflective practice underscores that reflective learners tend to bridge the gap between knowledge acquisition and practical application, as they continually integrate new insights into their professional identity. Reflection within training strengthens a clinician’s ability to adapt and apply new practices effectively.

3. Curiosity

Curiosity drives individuals to explore, seek out new information, and remain engaged. Curiosity has been linked to greater persistence in learning and problem-solving. In training, curiosity encourages clinicians to go beyond basic knowledge acquisition, leading to deeper assimilation and broader application of new skills.

4. Grit (resilience)

Duckworth’s research on grit—defined as perseverance and passion for long-term goals—demonstrates its role in achieving sustained behavior change, even under challenging conditions. Clinical research professionals with high levels of grit are better equipped to navigate difficulties and persist in adopting new behaviors. Demonstrating grit within trial training can, therefore, help professionals remain committed to change despite obstacles.

5. Intention to change (commitment to change)

Ajzen demonstrated that an individual’s intention to change is one of the strongest predictors of actual behavior change. Site training programs that encourage participants to set specific goals or commitments can foster stronger intentions to implement what they have learned. This behavioral intention often translates into meaningful change when clinicians return to practice.

6. Self-regulation

Self-regulation, the capacity to monitor and manage one’s learning process, plays a critical role in behavior change. Zimmerman showed that self-monitoring, self-awareness, and strategic adjustments enable learners to incorporate new knowledge effectively. Site staff who are skilled in self-regulation are better able to apply new techniques consistently and refine their skills over time.
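To illustrate how these six characteristics might be rolled up into a single readiness signal, here is a minimal sketch. The 0-1 scales, the equal weighting, and the example values are illustrative assumptions, not a validated instrument; in practice, items and weights would need to be calibrated against observed site performance.

```python
from statistics import mean
from typing import Dict

# The six predictive characteristics discussed above, each assumed to be scored 0-1.
PREDICTORS = ["confidence", "reflection", "curiosity", "grit", "intention", "self_regulation"]

def readiness_score(responses: Dict[str, float]) -> float:
    """Average the six predictor scores into a single readiness-to-perform index.

    Equal weighting is an illustrative simplification, not a validated model.
    """
    missing = [p for p in PREDICTORS if p not in responses]
    if missing:
        raise ValueError(f"Missing predictor scores: {missing}")
    return mean(responses[p] for p in PREDICTORS)

# Example: a coordinator's scores captured during protocol training (made-up values).
example = {"confidence": 0.70, "reflection": 0.90, "curiosity": 0.80,
           "grit": 0.60, "intention": 0.85, "self_regulation": 0.75}
print(round(readiness_score(example), 2))  # -> 0.77
```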

Importantly, these are characteristics of how and what a person thinks as they learn. In prior columns we highlighted the differences in “I-Frame” (the individual) vs “S-Frame” (the system) approaches to change management. Here is another example of just why the “I-Frame” is so critical—execution of a trial protocol ultimately comes down to an individual screening a patient, an individual providing care, and an individual deciding to diligently follow the varied and complex steps in a modern clinical protocol. Therefore, it is the individual or team’s readiness to perform that predicts trial success. And confidence, reflection, curiosity, grit, intention, and self-regulation are the well-established predictive drivers of readiness to perform.

Moving to feasibility plus predictive readiness

Site feasibility, as an “S-Frame” intervention, has a critical place in planning and conducting clinical trials, but it is simply not a strong predictor of trial success. The success of a study is more than having adequate logistics, resources, or experience—that’s not how performance works. To maximize our ability to predict trial success, we must consider the actual predictive drivers of behavior change. By focusing on these six training-based predictors, we can design training programs that not only convey knowledge but also foster lasting change. Ultimately, the purpose of trial start-up and site training is to empower professionals to act, transforming insights into better trial enrollment and execution, and accelerating advancements in patient care.

Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder, ArcheMedX, Inc.

From Outcomes to Insights: 4 Best Practices in Outcomes Storytelling

Take-away: Since there is no common agreement on what an “optimal” outcomes report entails, providers and supporters must recognize and embrace the flexibility and effort required to meet outcomes reporting expectations that vary wildly. This newsletter presents four best practices that simplify this effort, potentially saving the community thousands of hours and millions of dollars annually.

As a result of our initial “Future of Outcomes” outreach in late July, we have had more than 40 discussions with supporters and providers. While many discussions began by simply reviewing the general take-aways from our White Paper, we have subsequently been asked to dig a bit deeper into confidence-based assessment outcomes, behavioral engagement data, and readiness to change measures from specific projects, clinical areas, and even complete provider programs. These practical discussions reveal two things: 1) the community of providers and supporters is collectively starving for more effective ways to tell their outcomes story, and 2) we are far from having common reporting expectations, which continues to create significant tension and frustration across the community.

The Varied Range of Outcomes Reports

For nearly 12 years ArcheMedX has provided our partners with real-time, 24-hour access to up to six different, intentionally designed Ready outcomes dashboards. These dashboards include hundreds of data visualizations and granular data tables, each designed to simplify outcomes efforts. All data, analyses, and visualizations are aligned with the Outcomes Standardization Project, but they include much, much more. Between the visualizations and the granular data tables there are few, if any, outcomes-related questions that cannot be answered. And all dashboards are filterable by user Profession, Specialty, and date. There is literally nothing like it in CME/CPD.

But having access to this treasure trove of data is just the beginning. Once the data is generated and the dashboards are consumed, it’s ‘Reporting’ time…and this is where things can often feel paralyzing. Does the data need to be entered manually into a grants management system? Does it need to be summarized into a single-slide template? Does a narrative report need to meet a strict page limit before being uploaded? Can it be presented in person? How about interpretive dance…🕺💃

Now rinse-and-repeat for every project, for every supporter, every milestone, month, or quarter. If you ask a dozen supporters how (and when) they ‘require’ outcomes reports, you will get two dozen answers. Go ahead and try it, I did 😉

Best Practices in Outcomes Reporting

Over the past few years, we have worked hard with providers and supporters to find the most common ground. This work has led to the creation of outcomes reporting best practices which come directly from the providers who have had the most success and the supporters who are most satisfied. The best practices won’t be a miracle cure 100% of the time, but they’ll ensure a core reporting structure that can then be efficiently altered as needed.

#1 – Begin with the most complete data model and experience you can engineer. Obviously this is a strength for us at ArcheMedX, but the more robust your data model is, the more seeds you have from which to grow your outcomes story. Invest upfront in the model and you’ll reap the benefits many times over!

#2 – Ensure that your core outcomes methods, measures, and analyses are clearly articulated in your planning and proposals. For every educational intervention there are literally 1000s of questions that could be asked of the data after the fact – but good outcomes science happens within a predefined and structured framework. I spend as much time in data tables as anyone I know – trust me: you have to plan, focus, and ruthlessly prioritize. 

#3 – Create a holistic reporting framework that effectively communicates not only your outcomes data, but also your insights (what does it mean?). At this point your core outcomes methods, measures, and analyses should be your guide. Sure, lean into the Outcomes Conceptual Framework (Moore 2009), but also recognize its limitations. Your reporting framework should be principally driven by what you intended to measure and what unique methods and analyses you leveraged.

#4 – Remember the adage, “Contrast creates meaning.” The best insights never come from purely descriptive outcomes, but from how those outcomes compare or contrast with other datasets. Leveraging segmentation, effect sizes, or benchmarking data routinely sparks the most meaningful insights. (See the sketch below.)
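As one concrete way to put “contrast creates meaning” into practice, here is a minimal sketch that computes a Cohen’s d effect size comparing a program’s scores against a benchmark cohort. The example values are made up for illustration; any real comparison would use your own program and benchmark data.

```python
from math import sqrt
from statistics import mean, stdev
from typing import Sequence

def cohens_d(group_a: Sequence[float], group_b: Sequence[float]) -> float:
    """Cohen's d effect size between two score distributions (pooled standard deviation)."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = sqrt(((na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2)
                     / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Example: post-activity readiness-to-change scores vs. a historic benchmark cohort
# (values are made up for illustration).
program = [0.72, 0.81, 0.77, 0.69, 0.85, 0.78]
benchmark = [0.61, 0.66, 0.70, 0.58, 0.73, 0.64]
print(f"Effect size vs. benchmark: d = {cohens_d(program, benchmark):.2f}")
```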

Here is a snapshot of how we recommend our partners bridge the chasm from data to storytelling – this is our Ready Reporting Template – our holistic reporting framework. It’s a nearly perfect balance of simplicity, rigor, and most importantly, storytelling. Eight-to-ten slides, each self-contained, with simple design, and a balance of data and narrative. (And for the visual design nerds, each slide averages nearly 80% white space!)


More specifically, as an example of just one element of the Ready Reporting Template, here is our new Readiness to Change ‘report’ slide:


  • Upper right: brief narrative explaining the methods and measures.
  • Upper left: comparison of aggregate Readiness to Change outcomes vs. historic benchmarks.
  • Bottom left: comparison of Readiness to Change for the top three intended professions.
  • Bottom right: comparison of Readiness to Change for the top three intended specialties.

Notice that each of the three data visualizations is supported with the most relevant insight.


After 20+ years of pioneering approaches to outcomes science and reporting, both as a supporter and a provider, I’ve come to the conclusion that there is far too much complexity and variability for the community to ever truly standardize a reporting approach. We CAN standardize methods and measures, but NOT the way we tell our stories. My goal is that by sharing these outcomes reporting best practices and our example Ready Reporting Template, we collectively move closer and closer to a viable common ground.

As always, please let me know if you have any questions about the reporting best practices or the Ready reporting template. If you want to chat directly, click this calendar link to find an open time that fits your schedule.

Continue Reading

Rethinking Training: The Benefits of Embracing ‘Desirable Difficulties’

Four strategies for implementing this approach in clinical trial staff and site training.

Brian S. McGowan, PhD, FACEHP, Chief Learning Officer and Co-Founder, ArcheMedX, Inc.

 

If you were to ask me to summarize everything we have learned from the learning sciences over the past 40 years, and to identify the single most important lesson for the clinical research community to embrace, it would be this: passive, didactic, traditional learning experiences rarely lead to learning. Let this sink in—most of what we develop and deliver when training clinical trial staff and teams has been proven largely ineffective.

Making matters even worse, in study after study after study, adult learners wholeheartedly believe that passive, didactic, traditional learning experiences are effective learning experiences. This is what they prefer, even after being shown the ineffective outcomes of the training. And when trainers build their training and evaluate their training outcomes through satisfaction surveys, they feel justified in continuing to present the same formats that the evidence demonstrates don’t work. The impact of this disconnect cannot be overstated.

In this article, I’d like to introduce the science and the opportunities of leveraging “desirable difficulties” in training. Originally introduced by UCLA’s Robert A. Bjork more than 30 years ago,1 the concept of desirable difficulties suggests that learning experiences that are initially challenging significantly enhance long-term retention and mastery of skills, but trainees unknowingly perceive just the opposite to be true…creating a chasm between what learners prefer and what actually works (see Figure 1).

Desirable difficulties: A cognitive bias in learning

Desirable difficulties are learning experiences that require reflective effort, making them seem challenging in the short term yet beneficial for long-term learning and performance. Examples include varied practice conditions, spacing learning sessions over time, generating reflection through behaviorally designed learning moments, and embracing testing as a critical learning tool rather than merely as a means of measuring (judging) learners.

To understand the root disconnect of desirable difficulties, we must acknowledge the work of Daniel Kahneman and Amos Tversky, who demonstrated that we (adult humans) are routinely victims of cognitive biases and fast and slow thinking.2 Kahneman and Tversky highlight how our intuition (fast thinking) often leads us to prefer simple or familiar strategies. However, in learning, these comfortable strategies are far less effective than more effortful ones.

In research more specific to clinician learning, David Davis’ exploration of clinician self-assessment highlighted a related challenge: clinicians overestimate their abilities and learning needs.3

The integration of desirable difficulties can also help correct these misjudgments by providing feedback that is more aligned with actual performance, thus fostering better self-awareness and more targeted learning.

Four strategies for implementing desirable difficulties in training

One of the main challenges with embracing desirable difficulties in training is the initial perception that these methods are less effective because they make learning feel harder.

This perception can discourage learners and educators alike. However, research is conclusive—while performance may initially seem more challenging, long-term retention and the ability to apply knowledge and skills are significantly improved.

To effectively implement desirable difficulties in clinical research-related training, consider the following strategies:

  1. Interleaved practice. Instead of focusing on one topic at a time, mix different subjects within a learning experience. This approach helps learners better discriminate between concepts and apply knowledge appropriately in practice.4
  2. Spacing effect. Distribute learning experiences over time rather than combining them into a single prolonged session. This strategy enhances memory consolidation and recall.5 (A minimal scheduling sketch follows this list.)
  3. Testing as learning. Use frequent low-stakes testing not just to assess, but as a tool to strengthen memory and identify areas needing improvement. Leverage pre-tests to shape learning and utilize reflective poll questions throughout the training experiences.6
  4. Engineer reflective learning moments. Learners repeatedly default to low-attention, low-reflection states while learning. Creating a consistent rhythm of “nudges” that drive reflection can lead to four to six times greater learning and retention.7
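As a small illustration of the spacing effect (strategy 2 above), here is a minimal sketch that generates an expanding-interval refresher schedule following an initial training session. The starting gap and expansion factor are illustrative assumptions, not a validated schedule; real programs should tune intervals to the protocol’s complexity and the trial timeline.

```python
from datetime import date, timedelta
from typing import List

def spaced_schedule(start: date, n_sessions: int,
                    base_gap_days: float = 2, factor: float = 2.0) -> List[date]:
    """Generate expanding-interval refresher dates after an initial training session."""
    sessions: List[date] = []
    gap = base_gap_days
    current = start
    for _ in range(n_sessions):
        # Each refresher lands after a progressively longer gap (2, 4, 8, ... days here).
        current = current + timedelta(days=round(gap))
        sessions.append(current)
        gap *= factor
    return sessions

# Example: five spaced refreshers following a June 1 site-initiation training.
for session_date in spaced_schedule(date(2025, 6, 1), 5):
    print(session_date.isoformat())
```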

Realizing the impact

If your goal is to effectively train and prepare staff and sites to successfully execute your drug development studies, incorporating desirable difficulties into clinical research-related training is not optional—it is how effective learning happens. More than 30 years of research evidence supports the effectiveness of desirable difficulties in enhancing professional knowledge, competence, and skill; and it is increasingly being embraced by our partners, including pharmaceutical company sponsors and contract research organizations, in their trial-related training.

By challenging ourselves to embrace these “difficulties,” we will find that things actually get a whole lot easier.

Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder at ArcheMedX, Inc.

References

1. Bjork, R.A. Memory and Metamemory Considerations in the Training of Human Beings. Metcalfe, J.; and Shimamura, A. (Eds.). 1994. Metacognition: Knowing about Knowing (pp. 185-205). https://drive.google.com/file/d/1QS48Q9Sg07k20uTPd3pjduswwHqTj9DD/view?pli=1

2. Kahneman, D. (Author); Egan, P. (Narrator). Thinking, Fast and Slow. Random House Audio. 2011. http://www.bit.ly/3zXJdZi

3. Davis, D.A.; Mazmanian, P.E.; Fordis, M.; et al. Accuracy of Physician Self-Assessment Compared With Observed Measures of Competence. JAMA. 2006. 296 (9), 1094-1102. https://drive.google.com/file/d/1cMBpFDUNVr74dNdfgFzdq7ENcfQVahcY/view

4. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 3, Interleaving. JCEHP. 2022. 42 (4), 265-268. https://drive.google.com/file/d/18Jd14COH0UfQ5c4TJcb-DCBrHKy5pLgy/view

5. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 1, Distributed Practice. JCEHP. 2021. 41 (1), 59-62. https://drive.google.com/file/d/18MZnQcaZd8Yfi_dRPkpYliuElONZcju3/view

6. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 2, Retrieval Practice. JCEHP. 2021. 41 (2), 119-123. https://drive.google.com/file/d/18MVcNHOtL8Pfj1V7xYHgINkyl0dsfwC2/view

7. Alliance Almanac. The Alliance for Continuing Education in the Health Professions. December 2014. https://drive.google.com/file/d/1gTej_TiOLxSsZoB58MI6x-S7wPEOJJQA/view

Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare

Earlier this month I had the honor of presenting our research at the 2023 Nudges In Healthcare Symposium, and I cannot recommend the experience strongly enough. Dating back to 2018, the University of Pennsylvania’s Nudge Unit and Center for Health Incentives & Behavioral Economics have hosted clinician researchers from around the world, providing an opportunity to share their research, attend keynote lectures from leading behavioral scientists, and engage in thought-provoking workshops.

While it’s incredibly valuable to spend two days interacting with front line clinicians designing and implementing scalable improvement projects in healthcare through behavioral science, it’s truly transformational to explore these projects, their strengths, weaknesses, and opportunities at scale.

Perhaps the hardest part of writing this post is distilling all of the incredible learnings I took away from the conference, but let me try to summarize the experience with my Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare.

  1. The community of scientists demonstrating the impact of behavioral science in healthcare is truly global – many of the best sessions and posters were from Israel, Ireland, and the Middle East. We are well beyond ‘early adoption’ and are likely at the tipping point of this science.
  2. From Elizabeth Linos, Associate Professor of Public Policy and Management at Harvard Kennedy School, “The strongest predictor of adoption of any nudge is whether it was designed as a wholly new process (less effective) or integrated into an existing process (more effective).” We need to infuse behavioral interventions into workflows versus creating new workflow, new behaviors, and new complexity.
  3. Also from Dr. Linos, “When your goal is to change policy and implement nudges, you need to include the policy changers in the room at the design of the nudge experiment/pilot.” Like any other change management strategy, getting buy-in is incredibly important – don’t lose the forest for the trees.
  4. From scientists at Clalit Health Services in Israel, nudge-based interventions reduced no show appointments by 33%, increased proactive cancellations by 17%, and freed up nearly 200,000 appointments per year. I think the impact we can have on clinical trials is likely to be far better than what has been seen in general healthcare.
  5. I was reminded of a famous quote from Philip Tetlock, “If you don’t get feedback, your confidence grows at a much faster rate than your accuracy.” This applies to every behavioral science-based intervention we design – plan, do, study, adapt!
  6. From the team at Vanderbilt Biomedical Informatics, “It can often be just as important to nudge team members to STOP behaviors as it is to nudge them to act.” This might be one of the most lasting lessons for me.
  7. David Asch, Senior Vice Dean for Strategic Initiatives, Penn Perelman School of Medicine, introduced the Day Two keynote in the following way: “Without Kevin Volpp there may not be nudge science and behavioral economics in healthcare…but there definitely wouldn’t be a Penn Nudge Unit.” That is a strong statement, but wholly accurate. Kevin and his team are the true pioneers of behavioral science in healthcare.
  8. From Michelle Meyer, Associate Professor and Chair of the Department of Bioethics and Decision Sciences at Geisinger Health System, “At the heart of a learning healthcare system is a commitment to experiment…stop assuming you know what is best…” We are at the beginning of a behavioral science revolution in healthcare and clinical research, and experimentation is critical to our success.
  9. Our data set of 600,000 learners and 25,000,000+ learning events modeled as behaviors was one of the largest, if not the largest, data sets shared at the conference, and the reaction was overwhelming. This was a conference of behavioral scientists reacting to the novel application of behavioral science to enhance learning – as the designer of our behavioral model at ArcheMedX, this was a wonderful validation!
  10. More generally, the research and evidence generated by the global community of behavioral scientists driven to improve healthcare quality is critically relevant and applicable to our progress as a clinical research community. The lessons shared at the 2023 Penn Nudges in Healthcare Symposium present us with a roadmap for transforming clinical trial effectiveness, if we choose to listen.

With all of this evidence, the challenge for each of us is to begin applying these lessons to improve our trials. The usual hesitancy in implementing change is knowing how and where to start. Fortunately for our colleagues across the clinical research community, we are leading a virtual event in October that covers this very topic.

Please join us on Thursday October 26th at 1pm EST / 10am PST / 7pm CET as we lay the groundwork and provide real world examples of how our community – clinical research professionals from industry, CROs, and clinical research sites around the world – can benefit from leveraging behavioral science and nudges to achieve operational excellence in clinical trial execution.

For clinical trial leaders, the application of behavioral science offers a transformative approach. Whether it’s refining study team training, optimizing site selection, enhancing patient recruitment strategies, ensuring meticulous vendor oversight, or elevating site monitoring processes, behavioral science holds the key to unlocking operational excellence.

If we embrace the change management inherent in every clinical trial, we can learn how to turn operational challenges into opportunities with the power of behavioral science. In this webinar, discover how leveraging behavioral science can be a game-changer in addressing the unique challenges that each clinical trial presents. Learn how other clinical trial leaders like you changed critical behaviors that improved how their sites and teams conduct clinical trials.

Brian McGowan, PhD

Chief Learning Officer & Co-Founder

ArcheMedX