
Author: Brian S. McGowan, PhD

From Outcomes to Insights: 4 Best Practices in Outcomes Storytelling

Take-away: Since there is no common agreement on what an “optimal” outcomes report entails, providers and supporters must recognize and embrace the flexibility and effort required to meet wildly varying outcomes reporting expectations. This newsletter presents four best practices that simplify this effort, potentially saving the community thousands of hours and millions of dollars annually.

As a result of our initial “Future of Outcomes” outreach in late July, we have had more than 40 discussions with supporters and providers. While many discussions began by simply reviewing the general take-aways from our White Paper, we have since been asked to dig deeper into confidence-based assessment outcomes, behavioral engagement data, and readiness-to-change measures from specific projects, clinical areas, and even complete provider programs. These practical discussions reveal two things: 1) the community of providers and supporters is collectively starving for more effective ways to tell its outcomes story, and 2) we are far from having common reporting expectations, which continues to create significant tension and frustration across the community.

The Varied Range of Outcomes Reports

For nearly 12 years ArcheMedX has provided our partners with real-time, 24-hour access to up to six different, intentionally designed Ready outcomes dashboards. These dashboards include hundreds of data visualizations and granular data tables, each designed to simplify outcomes efforts. All data, analyses, and visualizations are aligned with the Outcomes Standardization Project, but they include much, much more. Between the visualizations and the granular data tables there are few, if any, outcomes-related questions that cannot be answered. And all dashboards are filterable by user Profession, Specialty, and date. There is literally nothing like it in CME/CPD.
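To make the idea concrete, here is a hypothetical sketch in Python/pandas of the kind of Profession, Specialty, and date filtering such a dashboard exposes. The column names and values are invented for illustration and are not ArcheMedX’s actual schema or API:

```python
import pandas as pd

# Hypothetical learner-level export; column names are illustrative only.
records = pd.DataFrame({
    "profession": ["Physician", "Nurse", "Physician", "Pharmacist"],
    "specialty":  ["Oncology", "Oncology", "Cardiology", "Oncology"],
    "completed":  pd.to_datetime(["2024-01-10", "2024-02-03",
                                  "2024-02-20", "2024-03-01"]),
    "post_score": [0.82, 0.75, 0.91, 0.68],
})

# The same Profession / Specialty / date filters a dashboard would apply.
q1_oncology = records[
    (records["specialty"] == "Oncology")
    & (records["completed"] < "2024-03-01")
]
print(q1_oncology["post_score"].mean())  # mean post-score for the filtered slice
```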

But having access to this treasure trove of data is just the beginning. Once the data is generated and the dashboards are consumed, it’s “Reporting” time…and this is where things can often seem paralyzing. Does the data need to be entered manually into a grants management system? Does it need to be summarized into a single-slide template? Does a narrative report need to meet a strict page limit before being uploaded? Can it be presented in person? How about interpretive dance…🕺💃

Now rinse-and-repeat for every project, for every supporter, every milestone, month, or quarter. If you ask a dozen supporters how (and when) they ‘require’ outcomes reports, you will get two dozen answers. Go ahead and try it – I did 😉

Best Practices in Outcomes Reporting

Over the past few years, we have worked hard with providers and supporters to find the most common ground. This work has led to the creation of outcomes reporting best practices which come directly from the providers who have had the most success and the supporters who are most satisfied. The best practices won’t be a miracle cure 100% of the time, but they’ll ensure a core reporting structure that can then be efficiently altered as needed.

#1 – Begin with the most complete data model and experience you can engineer. Obviously this is a strength for us at ArcheMedX, but the more robust your data model is, the more seeds you have from which to grow your outcomes story. Invest upfront in the model and you’ll reap the benefits many times over!

#2 – Ensure that your core outcomes methods, measures, and analyses are clearly articulated in your planning and proposals. For every educational intervention there are literally thousands of questions that could be asked of the data after the fact – but good outcomes science happens within a predefined, structured framework. I spend as much time in data tables as anyone I know – trust me: you have to plan, focus, and ruthlessly prioritize.
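One lightweight way to enforce that discipline is to write the prioritized measures and analyses down as a structured plan before the activity launches. Here is a minimal sketch in Python; the field names are invented for illustration and do not reflect any formal standard:

```python
from dataclasses import dataclass, field

@dataclass
class PlannedAnalysis:
    """One pre-specified analysis: what is measured, how, and against what."""
    measure: str     # e.g., "readiness to change"
    method: str      # e.g., "pre/post self-report"
    comparison: str  # e.g., "historic benchmark"
    priority: int    # 1 = must-report, 2 = nice-to-have

@dataclass
class OutcomesPlan:
    activity: str
    analyses: list[PlannedAnalysis] = field(default_factory=list)

    def must_report(self) -> list[PlannedAnalysis]:
        """The ruthlessly prioritized core of the eventual report."""
        return [a for a in self.analyses if a.priority == 1]

plan = OutcomesPlan(
    activity="Example CME activity",
    analyses=[
        PlannedAnalysis("readiness to change", "pre/post self-report",
                        "historic benchmark", priority=1),
        PlannedAnalysis("behavioral engagement", "in-lesson event counts",
                        "prior cohorts", priority=2),
    ],
)
print([a.measure for a in plan.must_report()])  # -> ['readiness to change']
```

Whatever form the plan takes, the point is that it exists before the data does.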

#3 – Create a holistic reporting framework that effectively communicates not only your outcomes data, but also your insights (what does it mean?). At this point your core outcomes methods, measures, and analyses should be your guide. Sure, lean into the Outcomes Conceptual Framework (Moore 2009), but also recognize its limitations. Your reporting framework should be principally driven by what you intended to measure and what unique methods and analyses you leveraged.

#4 – Remember the adage, “Contrast creates meaning.” The best insights never come from purely descriptive outcomes, but from how those outcomes compare or contrast with other datasets. Leveraging segmentation, effect sizes, or benchmarking data routinely sparks the most meaningful insights.
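As a loose illustration of the arithmetic behind that contrast, here is a small sketch computing an effect size (Cohen’s d) between a current cohort and a historic benchmark. The scores are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Effect size between two groups, using a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Invented illustration: post-activity readiness scores vs. a historic benchmark.
current = [3.8, 4.1, 4.4, 3.9, 4.2, 4.0]
benchmark = [3.2, 3.5, 3.6, 3.1, 3.4, 3.3]
print(f"Effect size vs. benchmark: d = {cohens_d(current, benchmark):.2f}")
```

A descriptive mean says little on its own; the contrast with the benchmark is what turns it into an insight.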

Here is a snapshot of how we recommend our partners bridge the chasm from data to storytelling – this is our Ready Reporting Template – our holistic reporting framework. It’s a nearly perfect balance of simplicity, rigor, and most importantly, storytelling. Eight-to-ten slides, each self-contained, with simple design, and a balance of data and narrative. (And for the visual design nerds, each slide averages nearly 80% white space!)


More specifically, as an example of just one element of the Ready Reporting Template, here is our new Readiness to Change ‘report’ slide:


Upper right: brief narrative explaining the methods and measures.
Upper left: comparison of aggregate Readiness to Change outcomes vs historic benchmarks.
Bottom left: comparison of Readiness to Change for the top three intended professions.
Bottom right: comparison of Readiness to Change for the top three intended specialties.
Notice that each of the three data visualizations is supported with the most relevant insight. 

———

After 20+ years of pioneering approaches to outcomes science and reporting, both as a supporter and a provider, I’ve come to the conclusion that there is far too much complexity and variability for the community to ever truly standardize a reporting approach. We CAN standardize methods and measures, but NOT the way we tell our stories. My goal is that by sharing these outcomes reporting best practices and our example Ready Reporting Template, we collectively move closer and closer to a viable common ground.

As always, please let me know if you have any questions about the reporting best practices or the Ready reporting template. If you want to chat directly, click this calendar link to find an open time that fits your schedule.


Rethinking Training: The Benefits of Embracing ‘Desirable Difficulties’

Four strategies for implementing this approach in clinical trial staff and site training.

Brian S. McGowan, PhD, FACEHP, Chief Learning Officer and Co-Founder, ArcheMedX, Inc.

 

If you were to ask me to summarize everything we have learned from the learning sciences over the past 40 years, and to identify the single most important lesson for the clinical research community to embrace, it would be this: passive, didactic, traditional learning experiences rarely lead to learning. Let this sink in—most of what we develop and deliver when training clinical trial staff and teams has been proven largely ineffective.

Making matters even worse, in study after study after study, adult learners wholeheartedly believe that passive, didactic, traditional learning experiences are effective. This is what they prefer, even after being shown the ineffective outcomes of the training. And when trainers build their training and evaluate its outcomes through satisfaction surveys, they feel justified in continuing to present the same formats that the evidence demonstrates don’t work. The impact of this disconnect cannot be overstated.

In this article, I’d like to introduce the science and the opportunities of leveraging “desirable difficulties” in training. Originally introduced by UCLA’s Robert A. Bjork more than 30 years ago,1 the concept of desirable difficulties suggests that learning experiences that are initially challenging significantly enhance long-term retention and mastery of skills, but trainees unknowingly perceive just the opposite to be true…creating a chasm between what learners prefer and what actually works (see Figure 1).

Desirable difficulties: A cognitive bias in learning

Desirable difficulties are learning experiences that require reflective effort, making them seem challenging in the short term yet beneficial for long-term learning and performance. Examples include varied practice conditions, spacing learning sessions over time, generating reflection through behaviorally designed learning moments, and embracing testing as a critical learning tool rather than merely as a means of measuring (judging) learners.

To understand the root disconnect of desirable difficulties we must acknowledge the work of Daniel Kahneman and Amos Tversky, who demonstrated that we (adult humans) are routinely victims of cognitive biases and of fast and slow thinking.2 Kahneman and Tversky highlight how our intuition (fast thinking) often leads us to prefer simple or familiar strategies. In learning, however, these comfortable strategies are far less effective than more effortful ones.

In research more specific to clinician learning, David Davis’ exploration of clinician self-assessment highlighted a related challenge: clinicians routinely overestimate their abilities and misjudge their learning needs.3

The integration of desirable difficulties can also help correct these misjudgments by providing feedback that is more aligned with actual performance, thus fostering better self-awareness and more targeted learning.

Four strategies for implementing desirable difficulties in training

One of the main challenges with embracing desirable difficulties in training is the initial perception that these methods are less effective because they make learning feel harder.

This perception can discourage learners and educators alike. However, the research is conclusive—while initial performance may suffer, long-term retention and the ability to apply knowledge and skills are significantly improved.

To effectively implement desirable difficulties in clinical research-related training, consider the following strategies:

  1. Interleaved practice. Instead of focusing on one topic at a time, mix different subjects within a learning experience. This approach helps learners better discriminate between concepts and apply knowledge appropriately in practice.4
  2. Spacing effect. Distribute learning experiences over time rather than combining them into a single prolonged session. This strategy enhances memory consolidation and recall.5 (A small scheduling sketch follows this list.)
  3. Testing as learning. Use frequent low-stakes testing not just to assess, but as a tool to strengthen memory and identify areas needing improvement. Leverage pre-tests to shape learning and utilize reflective poll questions throughout the training experiences.6
  4. Engineer reflective learning moments. Learners repeatedly default to low-attention, low-reflection states while learning. Creating a consistent rhythm of “nudges” that drive reflection can lead to four to six times greater learning and retention.7
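To make strategies 2 and 3 concrete, here is a minimal sketch of an expanding-interval schedule for low-stakes follow-up quizzes. The intervals and function names are illustrative assumptions, not a validated protocol:

```python
from datetime import date, timedelta

def spaced_quiz_schedule(start: date, sessions: int = 4,
                         first_gap_days: float = 2,
                         factor: float = 2.0) -> list[date]:
    """Expanding-interval schedule: each low-stakes quiz lands at a growing
    gap after the previous one, instead of one prolonged session."""
    schedule, gap, current = [], first_gap_days, start
    for _ in range(sessions):
        current = current + timedelta(days=round(gap))
        schedule.append(current)
        gap *= factor  # widen the gap after each retrieval attempt
    return schedule

# Training on 2024-01-01 -> follow-up quizzes roughly 2, 4, 8, and 16 days apart.
for quiz_day in spaced_quiz_schedule(date(2024, 1, 1)):
    print(quiz_day.isoformat())
```

The exact intervals matter less than the commitment to distribute retrieval over time rather than front-loading it.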

Realizing the impact

If your goal is to effectively train and prepare staff and sites to successfully execute your drug development studies, incorporating desirable difficulties into clinical research-related training is not optional—it is how effective learning happens. More than 30 years of research evidence supports the effectiveness of desirable difficulties in enhancing professional knowledge, competence, and skill; and it is increasingly being embraced by our partners, including pharmaceutical company sponsors and contract research organizations, in their trial-related training.

By challenging ourselves to embrace these “difficulties,” we will find that things actually get a whole lot easier.

Brian S. McGowan, PhD, FACEHP, is Chief Learning Officer and Co-Founder at ArcheMedX, Inc.

References

1. Bjork, R.A. Memory and Metamemory Considerations in the Training of Human Beings. In: Metcalfe, J.; Shimamura, A.P. (Eds.). Metacognition: Knowing About Knowing. 1994. pp. 185-205. https://drive.google.com/file/d/1QS48Q9Sg07k20uTPd3pjduswwHqTj9DD/view?pli=1

2. Kahneman, D. (Author); Egan, P. (Narrator). Thinking, Fast and Slow. Random House Audio. 2011. http://www.bit.ly/3zXJdZi

3. Davis, D.A.; Mazmanian, P.E.; Fordis, M.; et al. Accuracy of Physician Self-Assessment Compared With Observed Measures of Competence. JAMA. 2006. 296 (9), 1094-1102. https://drive.google.com/file/d/1cMBpFDUNVr74dNdfgFzdq7ENcfQVahcY/view

4. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 3, Interleaving. JCEHP. 2022. 42 (4), 265-268. https://drive.google.com/file/d/18Jd14COH0UfQ5c4TJcb-DCBrHKy5pLgy/view

5. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 1, Distributed Practice. JCEHP. 2021. 41 (1), 59-62. https://drive.google.com/file/d/18MZnQcaZd8Yfi_dRPkpYliuElONZcju3/view

6. Van Hoof, T.J.; Sumeracki, M.A.; Madan, C.R. Science of Learning Strategy Series: Article 2, Retrieval Practice. JCEHP. 2021. 41 (2), 119-123. https://drive.google.com/file/d/18MVcNHOtL8Pfj1V7xYHgINkyl0dsfwC2/view

7. Alliance Almanac. The Alliance for Continuing Education in the Health Professions. December 2014. https://drive.google.com/file/d/1gTej_TiOLxSsZoB58MI6x-S7wPEOJJQA/view

Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare

Earlier this month I had the honor of presenting our research at the 2023 Nudges in Healthcare Symposium, and I cannot recommend the experience strongly enough. Dating back to 2018, the University of Pennsylvania’s Nudge Unit and Center for Health Incentives & Behavioral Economics have hosted clinician researchers from around the world, providing an opportunity to share their research, attend keynote lectures from leading behavioral scientists, and engage in thought-provoking workshops.

While it’s incredibly valuable to spend two days interacting with front-line clinicians designing and implementing scalable improvement projects in healthcare through behavioral science, it’s truly transformational to explore these projects, their strengths, weaknesses, and opportunities at scale.

Perhaps the hardest part of writing this post is distilling all of the incredible lessons I took away from the conference, but let me try to summarize the experience with my Top 10 Lessons from the Pioneers of Behavioral Science in Healthcare.

  1. The community of scientists demonstrating the impact of behavioral science in healthcare is truly global – many of the best sessions and posters were from Israel, Ireland, and the Middle East. We are well beyond ‘early adoption’ and are likely at the tipping point of this science.
  2. From Elizabeth Linos, Associate Professor of Public Policy and Management at Harvard Kennedy School, “The strongest predictor of adoption of any nudge is whether it was designed as a wholly new process (less effective) or integrated into an existing process (more effective).” We need to infuse behavioral interventions into existing workflows rather than creating new workflows, new behaviors, and new complexity.
  3. Also from Dr. Linos, “When your goal is to change policy and implement nudges, you need to include the policy changers in the room at the design of the nudge experiment/pilot.” Like any other change management strategy, getting buy-in is incredibly important – don’t lose the forest for the trees.
  4. From scientists at Clalit Health Services in Israel: nudge-based interventions reduced no-show appointments by 33%, increased proactive cancellations by 17%, and freed up nearly 200,000 appointments per year. I think the impact we can have on clinical trials is likely to be even greater than what has been seen in general healthcare.
  5. I was reminded of a famous quote from Philip Tetlock: “If you don’t get feedback, your confidence grows at a much faster rate than your accuracy.” This applies to every behavioral science-based intervention we design – plan, do, study, adapt!
  6. From the team at Vanderbilt Biomedical Informatics, “It can often be just as important to nudge team members to STOP behaviors as it is to nudge them to act.” This might be one of the most lasting lessons for me.
  7. David Asch, Senior Vice Dean for Strategic Initiatives at the Penn Perelman School of Medicine, introduced the Day Two keynote in the following way: “Without Kevin Volpp there may not be nudge science and behavioral economics in healthcare…but there definitely wouldn’t be a Penn Nudge Unit.” That is a strong statement, but wholly accurate. Kevin and his team are the true pioneers of behavioral science in healthcare.
  8. From Michelle Meyer, Associate Professor and Chair of the Department of Bioethics and Decision Sciences at Geisinger Health System, “At the heart of a learning healthcare system is a commitment to experiment…stop assuming you know what is best…” We are at the beginning of a behavioral science revolution in healthcare and clinical research, and experimentation is critical to our success.
  9. Our data set of 600,000 learners and 25,000,000+ learning events modeled as behaviors was one of the largest – if not the largest – shared at the conference, and the reaction was overwhelming. This was a conference of behavioral scientists reacting to the novel application of behavioral science to enhance learning – as the designer of our behavioral model at ArcheMedX, this was a wonderful validation!
  10. More generally, the research and evidence generated by the global community of behavioral scientists working to improve healthcare quality are critically relevant and applicable to our progress as a clinical research community. The lessons shared at the 2023 Penn Nudges in Healthcare Symposium present us with a roadmap for transforming clinical trial effectiveness, if we choose to listen.

With all of this evidence, the challenge for each of us is to begin applying these lessons to improve our trials. The usual hesitancy in implementing change comes down to knowing how and where to start. Fortunately for our colleagues across the clinical research community, we are leading a virtual event in October that covers this very topic.

Please join us on Thursday October 26th at 1pm EST / 10am PST / 7pm CET as we lay the groundwork and provide real world examples of how our community – clinical research professionals from industry, CROs, and clinical research sites around the world – can benefit from leveraging behavioral science and nudges to achieve operational excellence in clinical trial execution.

For clinical trial leaders, the application of behavioral science offers a transformative approach. Whether it’s refining study team training, optimizing site selection, enhancing patient recruitment strategies, ensuring meticulous vendor oversight, or elevating site monitoring processes, behavioral science holds the key to unlocking operational excellence.

If we embrace the change management inherent in every clinical trial, we can learn how to turn operational challenges into opportunities with the power of behavioral science. In this webinar, discover how leveraging behavioral science can be a game-changer in addressing the unique challenges that each clinical trial presents. Learn how other clinical trial leaders like you changed critical behaviors that improved how their sites and teams conduct clinical trials.

Brian McGowan, PhD

Chief Learning Officer & Co-Founder

ArcheMedX