Author: Brian S McGowan, PhD

The Knowing-vs-Doing Challenge

We know we should work out, but who doesn’t have an unused gym membership or a basement treadmill collecting dust?

We know we should eat better, but who hasn’t stared at an empty container of Ben and Jerry’s while binge-watching Game of Thrones?

We all know we should get more sleep, but who hasn’t found themselves scrolling through emails at 11:30 at night?

The reality is that knowing very rarely equals doing.

In clinical research, the knowing-vs-doing challenge rears its ugly head all the time. CRAs and CRCs may think they know the ins and outs of a study, but that doesn’t mean they are ready to successfully initiate a site. PIs and Sub-Is may believe they understand the protocol, but that doesn’t mean they are ready to successfully identify and recruit suitable patients. Study site personnel may be familiar with the study binder, but that doesn’t mean they are ready to mitigate challenges and avoid major protocol deviations.

The weak link in the knowing-vs-doing chain is that individuals and teams must be ready to take action (to ’do’). And evidence generated over the past 30 years suggests that “readiness” is predicated on a complex suite of cognitive mindsets (e.g., confidence, reflection, intention) that either catalyze or inhibit action.

So the next time you are confronted with project staff, PIs, or site personnel who have been “trained” on a protocol but are struggling to meet goals or avoid deviations, ask yourself whether the mindsets of those individuals or teams are working for or against your goals.

If you can’t connect the Knowing -> Readiness/Mindset -> Doing dots, then you are mi$$ing a huge piece of the puzzle.

Are You Ready to Learn at DIA?

There are few universal truths in life, but one we might all agree on is that despite our best intentions, we can’t remember EVERYTHING…especially when we are bouncing from session to session and meeting to meeting at a marathon event like the DIA Global Annual Meeting.

In reality, to make the most of your experience at DIA, you should leverage a few best practices that will ensure you are READY to learn!

1. Are you ready to focus? Prior to leaving for DIA take the time to research and target the sessions and exhibitors you really need. You can’t participate in every discussion or stop by every booth, so take the time to prepare by reviewing the education schedule and the exhibit hall floor plan to highlight those that best align with your goals!

2. Are you ready to take notes? With each session you might find 5-10 critical lessons – WRITE THEM DOWN – but more importantly, don’t forget the context. With each note you take, be sure to document the session and the speaker(s), and even snap a picture if allowed…these threads will provide context and allow you to reflect, revisit, and retain critical lessons from each session at DIA.

3. Are you ready to ask questions? The easiest path to learning is to ask what’s really on your mind. You made the commitment (and investment in time and money) to attend, so capitalize on it. Speakers are there to teach and facilitate. Exhibitors want to engage. They all want to hear your questions. So when you have questions percolating in your working memory – don’t be shy, ask them!

4. Are you ready to share? Keep in mind others are learning with you! Research published in 2010 suggests that often the most impactful and actionable lessons from a professional meeting occur through the perspectives and experiences shared between attendees. Make an effort to have 2-3 serendipitous conversations a day.

5. Are you ready to take action? Don’t trust your memory…once you are back in the office following DIA, you WILL forget much of what you learned and most of what you wanted to do. So, before you leave San Diego, put time in your calendar to revisit your notes, questions, and conversations and to define and document your action items…before it’s too late!

If you embrace these best practices, you can maximize your time at DIA to generate the ideas and lessons that ensure you and your organization are Ready to succeed!

And don’t forget to visit the ArcheMedX team in the Innovators Hub (booth #2101) at DIA.

ABSTRACT: Retrieval practice enhances the ability to evaluate complex physiology information

Objective

Many investigations have shown that retrieval practice enhances the recall of different types of information, including both medical and physiological, but the effects of the strategy on higher‐order thinking, such as evaluation, are less clear. The primary aim of this study was to compare how effectively retrieval practice and repeated studying (i.e. reading) strategies facilitated the evaluation of two research articles that advocated dissimilar conclusions. A secondary aim was to determine if that comparison was affected by using those same strategies to first learn important contextual information about the articles.

Methods

Participants were randomly assigned to learn three texts that provided background information about the research articles either by studying them four consecutive times (Text‐S) or by studying and then retrieving them two consecutive times (Text‐R). Half of both the Text‐S and Text‐R groups were then randomly assigned to learn two physiology research articles by studying them four consecutive times (Article‐S) and the other half learned them by studying and then retrieving them two consecutive times (Article‐R). Participants then completed two assessments: the first tested their ability to critique the research articles and the second tested their recall of the background texts.

Results

On the article critique assessment, the Article‐R groups’ mean scores of 33.7 ± 4.7% and 35.4 ± 4.5% (Text‐R then Article‐R group and Text‐S then Article‐R group, respectively) were both significantly (p < 0.05) higher than the two Article‐S mean scores of 19.5 ± 4.4% and 21.7 ± 2.9% (Text‐S then Article‐S group and Text‐R then Article‐S group, respectively). There was no difference between the two Article‐R groups on the article critique assessment, indicating those scores weren’t affected by the different contextual learning strategies.

Conclusion

Retrieval practice promoted superior critical evaluation of the research articles, and the results also indicated the strategy enhanced the recall of background information.

via Retrieval practice enhances the ability to evaluate complex physiology information – Dobson – 2018 – Medical Education – Wiley Online Library.

RESOURCE: Neuroscience and How Students Learn

Neuroscience fundamentals
Changing the brain: For optimal learning to occur, the brain needs conditions under which it is able to change in response to stimuli (neuroplasticity) and able to produce new neurons (neurogenesis).

The most effective learning involves recruiting multiple regions of the brain for the learning task. These regions are associated with such functions as memory, the various senses, volitional control, and higher levels of cognitive functioning.

Moderate stress: Stress and performance are related in an “inverted U curve”. Stimulation to learn requires a moderate amount of stress (measured in the level of cortisol). A low degree of stress is associated with low performance, as is high stress, which can set the system into fight-or-flight mode so there is less brain activity in the cortical areas where higher-level learning happens. Moderate levels of cortisol tend to correlate with the highest performance on tasks of any type. We can therefore conclude that moderate stress is beneficial for learning, while mild and extreme stress are both detrimental to learning.

via Neuroscience and How Students Learn | GSI Teaching & Resource Center.

RESOURCE: Brain circuit helps us learn by watching others

It’s often said that experience is the best teacher, but the experiences of other people may be even better. If you saw a friend get chased by a neighborhood dog, for instance, you would learn to stay away from the dog without having to undergo that experience yourself.

This kind of learning, known as observational learning, offers a major evolutionary advantage, says Kay Tye, an MIT associate professor of brain and cognitive sciences and a member of MIT’s Picower Institute for Learning and Memory.

“So much of what we learn day-to-day is through observation,” she says. “Especially for something that is going to potentially hurt or kill you, you could imagine that the cost of learning it firsthand is very high. The ability to learn it through observation is extremely adaptive, and gives a major advantage for survival.”

Tye and her colleagues at MIT have now identified the brain circuit that is required for this kind of learning. This circuit, which is distinct from the brain network used to learn from firsthand experiences, relies on input from a part of the brain responsible for interpreting social cues.

Former MD/PhD student Stephen Allsop, along with Romy Wichmann, Fergil Mills, and Anthony Burgos-Robles, co-led this study, which appears in the May 3 issue of Cell.

via Brain circuit helps us learn by watching others | MIT News.

CLASSIC: Association Between Funding and Quality of Published Medical Education Research

Context: Methodological shortcomings in medical education research are often attributed to insufficient funding, yet an association between funding and study quality has not been established.

Objectives: To develop and evaluate an instrument for measuring the quality of education research studies and to assess the relationship between funding and study quality.

Design, Setting, and Participants: Internal consistency, interrater and intrarater reliability, and criterion validity were determined for a 10-item medical education research study quality instrument (MERSQI). This was applied to 210 medical education research studies published in 13 peer-reviewed journals between September 1, 2002, and December 31, 2003. The amount of funding obtained per study and the publication record of the first author were determined by survey.

Main Outcome Measures: Study quality as measured by the MERSQI (potential maximum total score, 18; maximum domain score, 3), amount of funding per study, and previous publications by the first author.

Results: The mean MERSQI score was 9.95 (SD, 2.34; range, 5-16). Mean domain scores were highest for data analysis (2.58) and lowest for validity (0.69). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98 and 0.78 to 0.998, respectively. Total MERSQI scores were associated with expert quality ratings (Spearman ρ, 0.73; 95% confidence interval [CI], 0.56-0.84; P < .001), 3-year citation rate (0.8 increase in score per 10 citations; 95% CI, 0.03-1.30; P = .003), and journal impact factor (1.0 increase in score per 6-unit increase in impact factor; 95% CI, 0.34-1.56; P = .003). In multivariate analysis, MERSQI scores were independently associated with study funding of $20 000 or more (0.95 increase in score; 95% CI, 0.22-1.86; P = .045) and previous medical education publications by the first author (1.07 increase in score per 20 publications; 95% CI, 0.15-2.23; P = .047).

Conclusion: The quality of published medical education research is associated with study funding.

via Association Between Funding and Quality of Published Medical Education Research | Medical Education and Training | JAMA | JAMA Network.

RESOURCE: Students’ Approaches to Learning | John Biggs

In 1976, Swedish researchers Ference Marton and Roger Säljö demonstrated that students learn not what teachers think they should learn, but what students perceive the task to demand of them. Students using a ‘surface’ approach see a task as requiring specific answers to questions, so they rote-learn bits and pieces; students using a ‘deep’ approach want to understand, so they focus on themes and main ideas.

My own take on this was to develop questionnaires assessing students’ use of these approaches: the Learning Process Questionnaire (LPQ, for school students) and the Study Process Questionnaire (SPQ, for tertiary students), with the addition of an ‘achieving’ approach, which students use to maximise grades. The following article summarises my work on this: ‘The role of metalearning in study processes’ (British Journal of Educational Psychology, 55, 185-212, 1985).

The Revised Study Process Questionnaire (R-SPQ-2F) uses only surface and deep motives and strategies, with total approach scores. It, together with an explanatory article, can be downloaded free of charge and used for research purposes as long as it is acknowledged in the usual way. Please note that the R-SPQ-2F is designed to reflect students’ approaches to learning in their current teaching context, so it is an instrument to evaluate teaching rather than one that characterises students as “surface learners” or “deep learners”. The earlier instrument had also been used to label students (he is a surface learner and she is a deep learner), but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument for labelling students, that is, as an independent variable, but it should not be so used; it provides a set of dependent variables that may be used for assessing teaching.

via Students’ Approaches to Learning | John Biggs.

MANUSCRIPT: Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts?

The fields of cognitive psychology and behavior analysis have undertaken separate investigations into effective learning strategies. These studies have led to several recommendations from both fields regarding teaching techniques that have been shown to enhance student performance. While cognitive psychology and behavior analysis have studied student performance independently from their different perspectives, the recommendations they make are remarkably similar. The lack of discussion between the two fields, despite these similarities, is surprising. The current paper seeks to remedy this oversight in two ways: first, by reviewing two techniques recommended by behavior analysts—guided notes and response cards—and comparing them to their counterparts in cognitive psychology that are potentially responsible for their effectiveness; and second, by outlining some other areas of overlap that could benefit from collaboration. By starting the discussion with the comparison of two specific recommendations for teaching techniques, we hope to galvanize a more extensive collaboration that will not only further the progression of both fields, but also extend the practical applications of the ensuing research.

via Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts? | npj Science of Learning.

MANUSCRIPT: Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning

Background
Undergraduate palliative care education (UPCE) was mandatorily incorporated in medical education in Germany in 2009. Implementation of the new cross-sectional examination subject of palliative care (QB13) continues to be a major challenge for medical schools. It is clear that there is a need among students for more UPCE. On the other hand, there is a lack of teaching resources and of patient availability for the practical lessons. Digital media and elearning might be one solution to this problem. The primary objective of this study is to evaluate the elearning course Palliative Care Basics with regard to students’ acceptance of this teaching method and their performance in the written examination on the topic of palliative care. In addition, students’ self-estimation of competence in palliative care was assessed.

Methods
To investigate students’ acceptance of the elearning course Palliative Care Basics, we conducted a cross-sectional study that is appropriate for proof-of-concept evaluation. The sample consisted of three cohorts of medical students at Heinrich Heine University Düsseldorf (N = 670). The acceptance of the elearning approach was investigated by means of the standard evaluation of Heinrich Heine University. The effect of elearning on students’ self-estimation of palliative care competencies was measured by means of the German revised version of the Program in Palliative Care Education and Practice Questionnaire (PCEP-GR).

Results
The elearning course Palliative Care Basics was well received by medical students. The data yielded no significant effects of the elearning course on students’ self-estimation of palliative care competencies. There was a trend toward the elearning course having a positive effect on the written exam mark.

Conclusions
Elearning is a promising approach in UPCE and is well accepted by medical students. It may be able to increase students’ knowledge of palliative care. However, it is likely that other approaches are needed to change students’ self-estimation of palliative care competencies. It seems plausible that experience-based learning and encounters with dying patients and their relatives are required to increase students’ self-estimation of these competencies.

via Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning | BMC Medical Education | Full Text.

MANUSCRIPT: Consensus on Quality Indicators of Postgraduate Medical E-Learning: Delphi Study

Background: The progressive use of e-learning in postgraduate medical education calls for useful quality indicators. Many evaluation tools exist. However, these are diversely used and their empirical foundation is often lacking.

Objective: We aimed to identify an empirically founded set of quality indicators to set the bar for “good enough” e-learning.

Methods: We performed a Delphi procedure with a group of 13 international education experts and 10 experienced users of e-learning. The questionnaire started with 57 items. These items were the result of a previous literature review and focus group study performed with experts and users. Consensus was met when a rate of agreement of more than two-thirds was achieved.

Results: In the first round, the participants accepted 37 items of the 57 as important, reached no consensus on 20, and added 15 new items. In the second round, we added the comments from the first round to the items on which there was no consensus and added the 15 new items. After this round, a total of 72 items were addressed and, of these, 37 items were accepted and 34 were rejected due to lack of consensus.

Conclusions: This study produced a list of 37 items that can form the basis of an evaluation tool to evaluate postgraduate medical e-learning. This is, to our knowledge, the first time that quality indicators for postgraduate medical e-learning have been defined and validated. The next step is to create and validate an e-learning evaluation tool from these items.

via Consensus on Quality Indicators of Postgraduate Medical E-Learning: Delphi Study | de Leeuw | JMIR Medical Education.