Author: Brian S McGowan, PhD

The Final Frontier of Learning: A Walk Through Space (and Time)

One of the best analogies for learning that I have ever come across arrived in a rather unlikely place.

I had just walked from my car to meet a PGA Top 100 golf instructor, John Dunigan, who was going to help me work through some issues with my game. Fifteen minutes into the lesson, John could tell that I was getting frustrated with my swing changes, and he asked me to join him on a quick walk. I put down my clubs and we began walking back toward my car. About halfway there John stopped me – the scene was not much different than the picture to the right – we were standing in the woods on a path that had been worn by years and years of golfers walking back and forth from the parking lot to the driving range.

John said, “Imagine for a second what this path looked like the first day a golfer found this short cut to the range… now compare that to today…” [Perhaps you can picture it?]

John continued, “This time, imagine what it would have looked like had no one ever repeated the trek…or if it was only traveled once a year?”

Those 5 minutes spent on the walking path through the woods have stuck with me for years. And for years I have used the imagery of the worn path to help others understand why learning is rarely, if ever, immediate – instead, it is the end product of spacing, time, and retrieval.

The first time a learner is confronted with new information, it is like the first golfer walking through the woods near the range. Some grass and twigs get trampled, and the ground may even look slightly worse than before. But in a matter of days the grass is likely to regrow, and there will be no visible path. If the golfer returns the next day, the next week, the next month…over time, the path is worn in and becomes permanent.

Neurobiologically, the first time new information is consumed it is like the first walk through the woods: neural networks are weakly formed and the knowledge is tenuous. If the new information is not revisited, the networks weaken and a learner’s ability to retrieve the information (to follow the path) is lost. However, if the learner is re-exposed to the information, if they are allowed to confront the limitations of their knowledge, if they are presented with reminders or educational boosts, then one-off learning experiences become ingrained neural networks, and strong, efficient retrieval is made possible (in other words, true learning and retention).

At ArcheMedX we have long recognized that learning is often inefficient and unsuccessful if the learner does not take the right actions (i.e., reflecting, taking notes, searching, etc.) at the right time. And one of the most critical levers of learning is to ensure a learner will be re-exposed to new content and information over time: with each subsequent experience or new exposure, the neural networks strengthen and the worn path is formed.
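The strengthening effect of spaced re-exposure can be illustrated with a toy model – a sketch only, not a description of how ArcheMedX’s platform works. It uses an Ebbinghaus-style exponential forgetting curve in which each review resets retention and increases a "stability" parameter, so memory decays more slowly after every repetition; the specific numbers (initial stability, boost factor, review schedule) are illustrative assumptions.

```python
import math

def retention(days_since_review: float, stability: float) -> float:
    """Ebbinghaus-style exponential forgetting curve.

    Retention decays toward zero; 'stability' is roughly the number
    of days for retention to fall to ~37% (1/e).
    """
    return math.exp(-days_since_review / stability)

def simulate(review_days, horizon=60, initial_stability=2.0, boost=2.0):
    """Return retention on each day, given a schedule of review days.

    Each review resets retention to 1.0 and multiplies stability by
    'boost' -- the path worn deeper by repetition.
    """
    stability = initial_stability
    last_review = 0
    curve = []
    for day in range(horizon + 1):
        if day in review_days and day > 0:
            stability *= boost  # each re-exposure slows later forgetting
            last_review = day
        curve.append(retention(day - last_review, stability))
    return curve

# One-off exposure vs. spaced reviews at days 1, 3, 7, 14, and 30
single = simulate(review_days=set())
spaced = simulate(review_days={1, 3, 7, 14, 30})
print(f"Day 60 retention, single exposure: {single[60]:.3f}")
print(f"Day 60 retention, spaced reviews:  {spaced[60]:.3f}")
```

In this toy model the one-off exposure decays to essentially nothing by day 60, while the spaced schedule leaves most of the material retrievable – the worn path versus the overgrown one.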

Now think about the startup phase of your clinical trial…how much of the success of your study is dependent on project staff and/or site personnel learning new information or skills and applying them correctly throughout the course of the trial?

Now reflect on how you or your organization have traditionally supported this critical learning. 

Could you invest the time and resources necessary to ensure critical skills and knowledge were ingrained in the minds of staff and site personnel, the way the well-worn path forms through repetition? Or did most staff and sites rush through training, paying little attention to critical details, like the lightly trampled grass and twigs that quickly grow back over in the days that follow?

Taking the same old approach to learning leads to costly delays and deviations, especially as staff and sites face increasingly complex protocols and recruitment challenges. The sooner we realize that learning is a journey that cannot be completed in a single, frantic race to initiate a site, the more we can ensure that we really are effectively prepared to conduct the trial. 

If you are interested in learning more about the science of spacing and retrieval, let me know!

If you are interested in fixing your golf game, maybe we can meet up on that worn path and spend some time with John 😉…

The Knowing-vs-Doing Challenge

We know we should work out, but who doesn’t have an unused gym membership or a basement treadmill collecting dust?

We know we should eat better, but who hasn’t stared at an empty container of Ben and Jerry’s while binge-watching Game of Thrones?

We all know we should get more sleep, but who hasn’t found themselves scrolling through emails at 11:30 at night?

The reality is that knowing very rarely equals doing.

In clinical research, the knowing-vs-doing challenge rears its ugly head all of the time. CRAs and CRCs may think they know the ins-and-outs of a study, but it doesn’t mean they are ready to successfully initiate a site. PIs and Sub-Is may believe they understand the protocol, but it doesn’t mean they are ready to successfully identify and recruit suitable patients. Study site personnel may be familiar with the study binder, but it doesn’t mean they are ready to mitigate challenges and avoid major protocol deviations.

The weak link in the knowing-vs-doing chain is that individuals and teams must be ready to take action (to ’do’). And evidence generated over the past 30 years suggests that “readiness” is predicated on a complex suite of cognitive mindsets (e.g., confidence, reflection, intention, etc.) that either catalyze or inhibit action.

So the next time you are confronted with project staff, PIs, or site personnel who have been “trained” on a protocol but are struggling to meet goals or avoid deviations, ask yourself whether the mindsets of the individuals or teams are working for or against your goals.

If you can’t connect the Knowing -> Readiness/Mindset -> Doing dots, then you are mi$$ing a huge piece of the puzzle.

Are You Ready to Learn at DIA?

There are few universal truths in life, but one we might all agree on is that despite our best intentions, we can’t remember EVERYTHING…especially when we are bouncing from session to session and meeting to meeting at a marathon event like the DIA Global Annual Meeting.

In reality, to make the most of your experience at DIA you should try to leverage a few best practices that will ensure you are READY to learn!

1. Are you ready to focus? Prior to leaving for DIA take the time to research and target the sessions and exhibitors you really need. You can’t participate in every discussion or stop by every booth, so take the time to prepare by reviewing the education schedule and the exhibit hall floor plan to highlight those that best align with your goals!

2. Are you ready to take notes? With each session you might find 5-10 critical lessons – WRITE THEM DOWN – but more importantly don’t forget the context. With each note you take be sure to document the session and the speaker(s) and even snap a picture if allowed…these threads will provide context and allow you to reflect, revisit, and retain critical lessons from each session at DIA.

3. Are you ready to ask questions? The easiest path to learning is to ask what’s really on your mind. You made the commitment (and investment in time and money) to attend, so capitalize on it. Speakers are there to teach and facilitate. Exhibitors want to engage. They all want to hear your questions. So when you have questions percolating in your working memory – don’t be shy, ask them!

4. Are you ready to share? Keep in mind others are learning with you! Research published in 2010 suggests that often the most impactful and actionable lessons from a professional meeting occur through the perspectives and experiences shared between attendees. Make an effort to have 2-3 serendipitous conversations a day.

5. Are you ready to take action? Don’t trust your memory…once you are back in the office following DIA, you WILL forget much of what you learned and most of what you wanted to do. So, before you leave San Diego put time in your calendar to revisit your notes, questions, and conversations and define and document your action items…before it’s too late!

If you embrace these best practices, you can maximize your time at DIA to generate the ideas and lessons that ensure you and your organization are Ready to succeed!

And don’t forget to visit the ArcheMedX team in the Innovators Hub (booth #2101) at DIA.

ABSTRACT: Retrieval practice enhances the ability to evaluate complex physiology information

Objective

Many investigations have shown that retrieval practice enhances the recall of different types of information, including both medical and physiological, but the effects of the strategy on higher‐order thinking, such as evaluation, are less clear. The primary aim of this study was to compare how effectively retrieval practice and repeated studying (i.e. reading) strategies facilitated the evaluation of two research articles that advocated dissimilar conclusions. A secondary aim was to determine if that comparison was affected by using those same strategies to first learn important contextual information about the articles.

Methods

Participants were randomly assigned to learn three texts that provided background information about the research articles either by studying them four consecutive times (Text‐S) or by studying and then retrieving them two consecutive times (Text‐R). Half of both the Text‐S and Text‐R groups were then randomly assigned to learn two physiology research articles by studying them four consecutive times (Article‐S) and the other half learned them by studying and then retrieving them two consecutive times (Article‐R). Participants then completed two assessments: the first tested their ability to critique the research articles and the second tested their recall of the background texts.

Results

On the article critique assessment, the Article‐R groups’ mean scores of 33.7 ± 4.7% and 35.4 ± 4.5% (Text‐R then Article‐R group and Text‐S then Article‐R group, respectively) were both significantly (p < 0.05) higher than the two Article‐S mean scores of 19.5 ± 4.4% and 21.7 ± 2.9% (Text‐S then Article‐S group and Text‐R then Article‐S group, respectively). There was no difference between the two Article‐R groups on the article critique assessment, indicating those scores weren’t affected by the different contextual learning strategies.

Conclusion

Retrieval practice promoted superior critical evaluation of the research articles, and the results also indicated the strategy enhanced the recall of background information.

via Retrieval practice enhances the ability to evaluate complex physiology information – Dobson – 2018 – Medical Education – Wiley Online Library.

RESOURCE: Neuroscience and How Students Learn

Neuroscience fundamentals
Changing the brain: For optimal learning to occur, the brain needs conditions under which it is able to change in response to stimuli (neuroplasticity) and able to produce new neurons (neurogenesis).

The most effective learning involves recruiting multiple regions of the brain for the learning task. These regions are associated with such functions as memory, the various senses, volitional control, and higher levels of cognitive functioning.

Moderate stress: Stress and performance are related in an “inverted U curve” (see right). Stimulation to learn requires a moderate amount of stress (measured in the level of cortisol). A low degree of stress is associated with low performance, as is high stress, which can set the system into fight-or-flight mode so there is less brain activity in the cortical areas where higher-level learning happens. Moderate levels of cortisol tend to correlate with the highest performance on tasks of any type. We can therefore conclude that moderate stress is beneficial for learning, while low and extreme stress are both detrimental to learning.

via Neuroscience and How Students Learn | GSI Teaching & Resource Center.
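The inverted-U relationship described above (often associated with the Yerkes–Dodson law) can be sketched as a toy quadratic model. The peak location, the 0–1 normalization of arousal, and the quadratic shape are illustrative assumptions, not measurements from the resource.

```python
def performance(arousal: float, optimum: float = 0.5) -> float:
    """Toy inverted-U: performance peaks at a moderate arousal level.

    'arousal' is a normalized 0-1 stand-in for cortisol level; the
    quadratic shape and the optimum of 0.5 are illustrative assumptions.
    """
    return max(0.0, 1.0 - ((arousal - optimum) / optimum) ** 2)

# Low and high stress both underperform moderate stress
low, moderate, high = performance(0.1), performance(0.5), performance(0.9)
print(f"low={low:.2f}  moderate={moderate:.2f}  high={high:.2f}")
```

Running this yields equal, low performance at both extremes and peak performance at the moderate midpoint – the shape the resource describes, not a quantitative model of cortisol.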

RESOURCE: Brain circuit helps us learn by watching others

It’s often said that experience is the best teacher, but the experiences of other people may be even better. If you saw a friend get chased by a neighborhood dog, for instance, you would learn to stay away from the dog without having to undergo that experience yourself.

This kind of learning, known as observational learning, offers a major evolutionary advantage, says Kay Tye, an MIT associate professor of brain and cognitive sciences and a member of MIT’s Picower Institute for Learning and Memory.

“So much of what we learn day-to-day is through observation,” she says. “Especially for something that is going to potentially hurt or kill you, you could imagine that the cost of learning it firsthand is very high. The ability to learn it through observation is extremely adaptive, and gives a major advantage for survival.”

Tye and her colleagues at MIT have now identified the brain circuit that is required for this kind of learning. This circuit, which is distinct from the brain network used to learn from firsthand experiences, relies on input from a part of the brain responsible for interpreting social cues.

Former MD/PhD student Stephen Allsop, along with Romy Wichmann, Fergil Mills, and Anthony Burgos-Robles, co-led this study, which appears in the May 3 issue of Cell.

via Brain circuit helps us learn by watching others | MIT News.

CLASSIC: Association Between Funding and Quality of Published Medical Education Research

Context Methodological shortcomings in medical education research are often attributed to insufficient funding, yet an association between funding and study quality has not been established.

Objectives To develop and evaluate an instrument for measuring the quality of education research studies and to assess the relationship between funding and study quality.

Design, Setting, and Participants Internal consistency, interrater and intrarater reliability, and criterion validity were determined for a 10-item medical education research study quality instrument (MERSQI). This was applied to 210 medical education research studies published in 13 peer-reviewed journals between September 1, 2002, and December 31, 2003. The amount of funding obtained per study and the publication record of the first author were determined by survey.

Main Outcome Measures Study quality as measured by the MERSQI (potential maximum total score, 18; maximum domain score, 3), amount of funding per study, and previous publications by the first author.

Results The mean MERSQI score was 9.95 (SD, 2.34; range, 5-16). Mean domain scores were highest for data analysis (2.58) and lowest for validity (0.69). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98 and 0.78 to 0.998, respectively. Total MERSQI scores were associated with expert quality ratings (Spearman ρ, 0.73; 95% confidence interval [CI], 0.56-0.84; P < .001), 3-year citation rate (0.8 increase in score per 10 citations; 95% CI, 0.03-1.30; P = .003), and journal impact factor (1.0 increase in score per 6-unit increase in impact factor; 95% CI, 0.34-1.56; P = .003). In multivariate analysis, MERSQI scores were independently associated with study funding of $20 000 or more (0.95 increase in score; 95% CI, 0.22-1.86; P = .045) and previous medical education publications by the first author (1.07 increase in score per 20 publications; 95% CI, 0.15-2.23; P = .047).

Conclusion The quality of published medical education research is associated with study funding.

via Association Between Funding and Quality of Published Medical Education Research | Medical Education and Training | JAMA | JAMA Network.

RESOURCE: Students’ Approaches to Learning | John Biggs

In 1976, Swedish researchers Ference Marton and Roger Saljö demonstrated that students learn not what teachers think they should learn, but what students perceive the task to demand of them. Students using a ‘surface’ approach see a task as requiring specific answers to questions, so they rote learn bits and pieces; students using a ‘deep’ approach want to understand, so they focus on themes and main ideas.

My own take on this was to develop questionnaires assessing approaches to learning, the Learning Process Questionnaire (LPQ for school students) and the Study Process Questionnaire (SPQ for tertiary students) to assess students’ use of these approaches, with the addition of an ‘achieving’ approach, which students use to maximise grades. The following article summarises my work on this: ‘The role of metalearning in study processes’ (British Journal of Educational Psychology, 55, 185-212, 1985).

The Revised Study Process Questionnaire (R-SPQ-2F) uses only surface and deep motives and strategies, with total approach scores. It, along with an explanatory article, can be downloaded free of charge and used for research purposes as long as it is acknowledged in the usual way. Please note that the R-SPQ-2F is designed to reflect students’ approaches to learning in their current teaching context, so it is an instrument to evaluate teaching rather than one that characterises students as “surface learners” or “deep learners”. The earlier instrument had also been used to label students (he is a surface learner and she is a deep learner), but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument for labelling students, that is, as an independent variable, but it should not be so used; it provides a set of dependent variables that may be used for assessing teaching.

via Students’ Approaches to Learning | John Biggs.

MANUSCRIPT: Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts?

The fields of cognitive psychology and behavior analysis have undertaken separate investigations into effective learning strategies. These studies have led to several recommendations from both fields regarding teaching techniques that have been shown to enhance student performance. While cognitive psychology and behavior analysis have studied student performance independently from their different perspectives, the recommendations they make are remarkably similar. The lack of discussion between the two fields, despite these similarities, is surprising. The current paper seeks to remedy this oversight in two ways: first, by reviewing two techniques recommended by behavior analysts—guided notes and response cards—and comparing them to their counterparts in cognitive psychology that are potentially responsible for their effectiveness; and second, by outlining some other areas of overlap that could benefit from collaboration. By starting the discussion with the comparison of two specific recommendations for teaching techniques, we hope to galvanize a more extensive collaboration that will not only further the progression of both fields, but also extend the practical applications of the ensuing research.

via Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts? | npj Science of Learning.

MANUSCRIPT: Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning

Background
Undergraduate palliative care education (UPCE) was mandatorily incorporated in medical education in Germany in 2009. Implementation of the new cross-sectional examination subject of palliative care (QB13) continues to be a major challenge for medical schools. It is clear that there is a need among students for more UPCE. On the other hand, there is a lack of teaching resources and patient availability for the practical lessons. Digital media and elearning might be one solution to this problem. The primary objective of this study is to evaluate the elearning course Palliative Care Basics, with regard to students’ acceptance of this teaching method and their performance in the written examination on the topic of palliative care. In addition, students’ self-estimation of competence in palliative care was assessed.

Methods
To investigate students’ acceptance of the elearning course Palliative Care Basics, we conducted a cross-sectional study that is appropriate for proof-of-concept evaluation. The sample consisted of three cohorts of medical students of Heinrich Heine University Dusseldorf (N = 670). The acceptance of the elearning approach was investigated by means of the standard evaluation of Heinrich Heine University. The effect of elearning on students’ self-estimation in palliative care competencies was measured by means of the German revised version of the Program in Palliative Care Education and Practice Questionnaire (PCEP-GR).

Results
The elearning course Palliative Care Basics was well received by medical students. The data yielded no significant effects of the elearning course on students’ self-estimation of palliative care competencies. There was a trend toward the elearning course having a positive effect on marks in the written exam.

Conclusions
Elearning is a promising approach in UPCE and well accepted by medical students. It may be able to increase students’ knowledge in palliative care. However, it is likely that other approaches are needed to change students’ self-estimation of palliative care competencies. It seems plausible that experience-based learning and encounters with dying patients and their relatives are required to increase students’ self-estimation of palliative care competencies.

via Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning | BMC Medical Education | Full Text.