

Rethinking Training: The Benefits of Embracing ‘Desirable Difficulties’

Four strategies for implementing this approach in clinical trial staff and site training.

Brian S. McGowan, PhD, FACEHP, Chief Learning Officer and Co-Founder, ArcheMedX, Inc.

 

If you were to ask me to summarize everything we have learned from the learning sciences over the past 40 years, and to identify the single most important lesson for the clinical research community to embrace, it would be this: passive, didactic, traditional learning experiences rarely lead to learning. Let this sink in: most of what we develop and deliver when training clinical trial staff and teams has been proven largely ineffective.

Making matters worse, study after study shows that adult learners wholeheartedly believe passive, didactic, traditional learning experiences are effective. This is what they prefer, even after being shown that the training produced poor outcomes. And when trainers build their training and evaluate its outcomes through satisfaction surveys, they feel justified in continuing to deliver the same formats the evidence demonstrates don’t work. The impact of this disconnect cannot be overstated.

In this article, I’d like to introduce the science and the opportunities of leveraging “desirable difficulties” in training. Originally introduced by UCLA’s Robert A. Bjork more than 30 years ago,1 the concept of desirable difficulties holds that learning experiences that are initially challenging significantly enhance long-term retention and mastery of skills, yet trainees unknowingly perceive just the opposite to be true, creating a chasm between what learners prefer and what actually works (see Figure 1).

Desirable difficulties: A cognitive bias in learning

Desirable difficulties are learning experiences that require reflective effort, making them seem challenging in the short term yet beneficial for long-term learning and performance. Examples include varied practice conditions, spacing learning sessions over time, generating reflection through behaviorally designed learning moments, and embracing testing as a critical learning tool rather than merely as a means of measuring (judging) learners.

To understand the root of this disconnect, we must acknowledge the work of Daniel Kahneman and Amos Tversky, who demonstrated that we (adult humans) are routinely victims of cognitive biases and of fast and slow thinking.2 Kahneman and Tversky highlight how our intuition (fast thinking) often leads us to prefer simple or familiar strategies. In learning, however, these comfortable strategies are far less effective than more effortful ones.

In research more specific to clinician learning, David Davis’ exploration of clinician self-assessment highlighted a related challenge: clinicians overestimate their abilities and learning needs.3

The integration of desirable difficulties can also help correct these misjudgments by providing feedback that is more aligned with actual performance, thus fostering better self-awareness and more targeted learning.

Four strategies for implementing desirable difficulties in training

One of the main challenges with embracing desirable difficulties in training is the initial perception that these methods are less effective because they make learning feel harder.

This perception can discourage learners and educators alike. However, the research is conclusive: although the experience may initially feel more challenging, long-term retention and the ability to apply knowledge and skills are significantly improved.

To effectively implement desirable difficulties in clinical research-related training, consider the following four strategies (a brief illustrative sketch follows the list):

  1. Interleaved practice. Instead of focusing on one topic at a time, mix different subjects within a learning experience. This approach helps learners better discriminate between concepts and apply knowledge appropriately in practice.4
  2. Spacing effect. Distribute learning experiences over time rather than combining them into a single prolonged session. This strategy enhances memory consolidation and recall.5
  3. Testing as learning. Use frequent low-stakes testing not just to assess, but as a tool to strengthen memory and identify areas needing improvement. Leverage pre-tests to shape learning and utilize reflective poll questions throughout the training experiences.6
  4. Engineer reflective learning moments. Learners repeatedly default to low-attention, low-reflection states while learning. Creating a consistent rhythm of “nudges” that drive reflection can lead to four to six times greater learning and retention.7
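
What might these four strategies look like when operationalized as a training schedule? The short Python sketch below is purely illustrative: the module names, the expanding intervals, and the prompt wording are my own assumptions rather than a description of any particular training platform. It spaces sessions at widening intervals, interleaves topics within each session, and opens every block with a low-stakes retrieval prompt followed by a reflective nudge instead of a passive review.

```python
from datetime import date, timedelta

# Illustrative sketch only: module names, intervals, and prompts are assumptions,
# not features of any specific training platform.
MODULES = ["informed consent", "adverse event reporting", "protocol deviations"]
SPACING_DAYS = [1, 3, 7, 14]  # expanding gaps between sessions (spacing effect)


def build_schedule(start: date):
    """Return (date, module, activity) tuples for a spaced, interleaved plan."""
    schedule = []
    offset = 0
    for session, gap in enumerate(SPACING_DAYS):
        offset += gap
        when = start + timedelta(days=offset)
        # Interleaving: rotate which module leads each session instead of
        # blocking an entire session on a single topic.
        rotation = session % len(MODULES)
        topics = MODULES[rotation:] + MODULES[:rotation]
        for topic in topics:
            # Testing as learning: open each block with low-stakes retrieval.
            schedule.append((when, topic, "low-stakes retrieval quiz"))
            # Engineered reflection: follow with a nudge, not a passive review.
            schedule.append((when, topic, "reflection prompt: where could this fail at your site?"))
    return schedule


if __name__ == "__main__":
    for when, topic, activity in build_schedule(date.today()):
        print(f"{when}  {topic:<24} {activity}")
```

The value of the sketch is the structure it encodes, not the code itself: effortful retrieval and reflection are scheduled deliberately, and no topic is ever treated as “finished” after a single sitting.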

Realizing the impact

If your goal is to effectively train and prepare staff and sites to successfully execute your drug development studies, incorporating desirable difficulties into clinical research-related training is not optional; it is how effective learning happens. More than 30 years of research evidence supports the effectiveness of desirable difficulties in enhancing professional knowledge, competence, and skill, and the approach is increasingly being embraced by our partners, including pharmaceutical company sponsors and contract research organizations, in their trial-related training.

By challenging ourselves to embrace these “difficulties,” we will find that things actually get a whole lot easier.

Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder at ArcheMedX, Inc.

References

1. Bjork, R.A. Memory and Metamemory Considerations in the Training of Human Beings. In: Metcalfe, J.; Shimamura, A. (Eds.). Metacognition: Knowing About Knowing. 1994. pp. 185-205. https://drive.google.com/file/d/1QS48Q9Sg07k20uTPd3pjduswwHqTj9DD/view?pli=1

2. Kahneman, D. (Author); Egan, P. (Narrator). Thinking, Fast and Slow. Random House Audio. 2011. http://www.bit.ly/3zXJdZi

3. Davis, D.A.; Mazmanian, P.E.; Fordis, M.; et al. Accuracy of Physician Self-Assessment Compared With Observed Measures of Competence. JAMA. 2006. 296 (9), 1094-1102. https://drive.google.com/file/d/1cMBpFDUNVr74dNdfgFzdq7ENcfQVahcY/view

4. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 3, Interleaving. JCEHP. 2022. 42 (4), 265-268. https://drive.google.com/file/d/18Jd14COH0UfQ5c4TJcb-DCBrHKy5pLgy/view

5. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 1, Distributed Practice. JCEHP. 2021. 41 (1), 59-62. https://drive.google.com/file/d/18MZnQcaZd8Yfi_dRPkpYliuElONZcju3/view

6. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 2, Retrieval Practice. JCEHP. 2021. 41 (2), 119-123. https://drive.google.com/file/d/18MVcNHOtL8Pfj1V7xYHgINkyl0dsfwC2/view

7. Alliance Almanac. The Alliance for Continuing Education in the Health Professions. December 2014. https://drive.google.com/file/d/1gTej_TiOLxSsZoB58MI6x-S7wPEOJJQA/view

Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare

Earlier this month I had the honor of presenting our research at the 2023 Nudges in Healthcare Symposium, and I cannot recommend the experience strongly enough. Since 2018, the University of Pennsylvania’s Nudge Unit and Center for Health Incentives & Behavioral Economics have hosted clinician researchers from around the world, providing an opportunity to share their research, attend keynote lectures from leading behavioral scientists, and engage in thought-provoking workshops.

While it’s incredibly valuable to spend two days interacting with front line clinicians designing and implementing scalable improvement projects in healthcare through behavioral science, it’s truly transformational to explore these projects, their strengths, weaknesses, and opportunities at scale.

Perhaps the hardest part of writing this post is distilling all of the incredible lessons I took away from the conference, but let me try to summarize the experience with my Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare.

  1. The community of scientists demonstrating the impact of behavioral science in healthcare is truly global – many of the best sessions and posters were from Israel, Ireland, and the Middle East. We are well beyond ‘early adoption’ and are likely at the tipping point of this science.
  2. From Elizabeth Linos, Associate Professor of Public Policy and Management at Harvard Kennedy School, “The strongest predictor of adoption of any nudge is whether it was designed as a wholly new process (less effective) or integrated into an existing process (more effective).” We need to infuse behavioral interventions into existing workflows rather than creating new workflows, new behaviors, and new complexity.
  3. Also from Dr. Linos, “When your goal is to change policy and implement nudges, you need to include the policy changers in the room at the design of nudge experiment/pilot.” Like any other change management strategy, getting buy-in is incredibly important – don’t lose the forest for the trees.
  4. From scientists at Clalit Health Services in Israel, nudge-based interventions reduced no show appointments by 33%, increased proactive cancellations by 17%, and freed up nearly 200,000 appointments per year. I think the impact we can have on clinical trials is likely to be far better than what has been seen in general healthcare.
  5. I was reminded of a famous quote from Philip Tetlock: “If you don’t get feedback, your confidence grows at a much faster rate than your accuracy.” This applies to every behavioral science-based intervention we design – plan, do, study, adapt!
  6. From the team at Vanderbilt Biomedical Informatics, “It can often be just as important to nudge team members to STOP behaviors as it is to nudge them to act.” This might be one of the most lasting lessons for me.
  7. David Asch, Senior Vice Dean for Strategic Initiatives at the Penn Perelman School of Medicine, introduced the Day Two keynote in the following way: “Without Kevin Volpp there may not be nudge science and behavioral economics in healthcare…but there definitely wouldn’t be a Penn Nudge Unit.” That is a strong statement, but wholly accurate. Kevin and his team are the true pioneers of behavioral science in healthcare.
  8. From Michelle Meyer, Associate Professor and Chair of the Department of Bioethics and Decision Sciences at Geisinger Health System, “At the heart of a learning healthcare system is a commitment to experiment…stop assuming you know what is best…” We are at the beginning of a behavioral science revolution in healthcare and clinical research, and experimentation is critical to our success.
  9. Our data set of 600,000 learners and 25,000,000+ learning events modeled as behaviors was one of the largest, if not the largest, data sets shared at the conference, and the reaction was overwhelming. This was a conference of behavioral scientists reacting to the novel application of behavioral science to enhance learning – as the designer of our behavioral model at ArcheMedX, this was a wonderful validation!
  10. More generally, the research and evidence generated by the global community of behavioral scientists driven to improve healthcare quality is critically relevant and applicable to our progress as a clinical research community. The lessons shared at the 2023 Penn Nudges in Healthcare Symposium present us with a roadmap for transforming clinical trial effectiveness, if we choose to listen.

With all of this evidence, the challenge for each of us is to begin applying these lessons to improve our trials. The usual hesitancy around implementing change comes down to knowing how and where to start. Fortunately for our colleagues across the clinical research community, we are leading a virtual event in October that covers this very topic.

Please join us on Thursday, October 26th at 1pm EST / 10am PST / 7pm CET as we lay the groundwork and provide real-world examples of how our community – clinical research professionals from industry, CROs, and clinical research sites around the world – can benefit from leveraging behavioral science and nudges to achieve operational excellence in clinical trial execution.

For clinical trial leaders, the application of behavioral science offers a transformative approach. Whether it’s refining study team training, optimizing site selection, enhancing patient recruitment strategies, ensuring meticulous vendor oversight, or elevating site monitoring processes, behavioral science holds the key to unlocking operational excellence.

If we embrace the change management inherent in every clinical trial, we can learn how to turn operational challenges into opportunities with the power of behavioral science. In this webinar, discover how leveraging behavioral science can be a game-changer in addressing the unique challenges that each clinical trial presents. Learn how other clinical trial leaders like you changed critical behaviors that improved how their sites and teams conduct clinical trials.

Brian McGowan, PhD

Chief Learning Officer & Co-Founder

ArcheMedX