Many investigations have shown that retrieval practice enhances the recall of different types of information, including medical and physiological content, but the effects of the strategy on higher‐order thinking, such as evaluation, are less clear. The primary aim of this study was to compare how effectively retrieval practice and repeated studying (i.e. reading) strategies facilitated the evaluation of two research articles that advocated dissimilar conclusions. A secondary aim was to determine if that comparison was affected by using those same strategies to first learn important contextual information about the articles.
Participants were randomly assigned to learn three texts that provided background information about the research articles either by studying them four consecutive times (Text‐S) or by studying and then retrieving them two consecutive times (Text‐R). Half of both the Text‐S and Text‐R groups were then randomly assigned to learn two physiology research articles by studying them four consecutive times (Article‐S) and the other half learned them by studying and then retrieving them two consecutive times (Article‐R). Participants then completed two assessments: the first tested their ability to critique the research articles and the second tested their recall of the background texts.
On the article critique assessment, the Article‐R groups’ mean scores of 33.7 ± 4.7% and 35.4 ± 4.5% (Text‐R then Article‐R group and Text‐S then Article‐R group, respectively) were both significantly (p < 0.05) higher than the two Article‐S mean scores of 19.5 ± 4.4% and 21.7 ± 2.9% (Text‐S then Article‐S group and Text‐R then Article‐S group, respectively). There was no difference between the two Article‐R groups on the article critique assessment, indicating those scores were not affected by the different contextual learning strategies.

Conclusion: Retrieval practice promoted superior critical evaluation of the research articles, and the results also indicated the strategy enhanced the recall of background information.
Changing the brain: For optimal learning to occur, the brain needs conditions under which it is able to change in response to stimuli (neuroplasticity) and able to produce new neurons (neurogenesis).
The most effective learning involves recruiting multiple regions of the brain for the learning task. These regions are associated with such functions as memory, the various senses, volitional control, and higher levels of cognitive functioning.
Moderate stress: Stress and performance are related in an “inverted U curve”. Stimulation to learn requires a moderate amount of stress (measured by the level of cortisol). A low degree of stress is associated with low performance, as is high stress, which can push the system into fight-or-flight mode so that there is less brain activity in the cortical areas where higher-level learning happens. Moderate levels of cortisol tend to correlate with the highest performance on tasks of any type. We can therefore conclude that moderate stress is beneficial for learning, while very low and extreme stress are both detrimental to it.
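The inverted-U relationship described above can be sketched as a toy quadratic model. The functional form, the 0–1 scales, and the numbers here are illustrative assumptions, not fitted to any data:

```python
# Toy illustration of the inverted-U relationship between stress and
# performance described above. The quadratic shape and the 0..1 scales
# are illustrative assumptions, not an empirical model.

def performance(stress: float, optimum: float = 0.5) -> float:
    """Predicted performance (0..1) as an inverted-U function of stress (0..1)."""
    # Performance peaks at the moderate 'optimum' level and falls off
    # symmetrically toward both very low and very high stress.
    return max(0.0, 1.0 - 4.0 * (stress - optimum) ** 2)

low = performance(0.1)       # very low stress  -> low performance
moderate = performance(0.5)  # moderate stress  -> peak performance
high = performance(0.9)      # very high stress -> low performance
assert moderate > low and moderate > high
```

Any function that rises to a single interior maximum and falls off on both sides would serve equally well here; the quadratic is just the simplest such shape.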
It’s often said that experience is the best teacher, but the experiences of other people may be even better. If you saw a friend get chased by a neighborhood dog, for instance, you would learn to stay away from the dog without having to undergo that experience yourself.

This kind of learning, known as observational learning, offers a major evolutionary advantage, says Kay Tye, an MIT associate professor of brain and cognitive sciences and a member of MIT’s Picower Institute for Learning and Memory.

“So much of what we learn day-to-day is through observation,” she says. “Especially for something that is going to potentially hurt or kill you, you could imagine that the cost of learning it firsthand is very high. The ability to learn it through observation is extremely adaptive, and gives a major advantage for survival.”

Tye and her colleagues at MIT have now identified the brain circuit that is required for this kind of learning. This circuit, which is distinct from the brain network used to learn from firsthand experiences, relies on input from a part of the brain responsible for interpreting social cues.

Former MD/PhD student Stephen Allsop, along with Romy Wichmann, Fergil Mills, and Anthony Burgos-Robles, co-led this study, which appears in the May 3 issue of Cell.
Context Methodological shortcomings in medical education research are often attributed to insufficient funding, yet an association between funding and study quality has not been established.
Objectives To develop and evaluate an instrument for measuring the quality of education research studies and to assess the relationship between funding and study quality.
Design, Setting, and Participants Internal consistency, interrater and intrarater reliability, and criterion validity were determined for a 10-item medical education research study quality instrument (MERSQI). This was applied to 210 medical education research studies published in 13 peer-reviewed journals between September 1, 2002, and December 31, 2003. The amount of funding obtained per study and the publication record of the first author were determined by survey.
Main Outcome Measures Study quality as measured by the MERSQI (potential maximum total score, 18; maximum domain score, 3), amount of funding per study, and previous publications by the first author.
Results The mean MERSQI score was 9.95 (SD, 2.34; range, 5-16). Mean domain scores were highest for data analysis (2.58) and lowest for validity (0.69). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98 and 0.78 to 0.998, respectively. Total MERSQI scores were associated with expert quality ratings (Spearman ρ, 0.73; 95% confidence interval [CI], 0.56-0.84; P < .001), 3-year citation rate (0.8 increase in score per 10 citations; 95% CI, 0.03-1.30; P = .003), and journal impact factor (1.0 increase in score per 6-unit increase in impact factor; 95% CI, 0.34-1.56; P = .003). In multivariate analysis, MERSQI scores were independently associated with study funding of $20 000 or more (0.95 increase in score; 95% CI, 0.22-1.86; P = .045) and previous medical education publications by the first author (1.07 increase in score per 20 publications; 95% CI, 0.15-2.23; P = .047).
Conclusion The quality of published medical education research is associated with study funding.
In 1976, Swedish researchers Ference Marton and Roger Säljö demonstrated that students learn not what teachers think they should learn, but what students perceive the task to demand of them. Students using a ‘surface’ approach see a task as requiring specific answers to questions, so they rote learn bits and pieces; students using a ‘deep’ approach want to understand, so they focus on themes and main ideas.
My own take on this was to develop two questionnaires to assess students’ use of these approaches: the Learning Process Questionnaire (LPQ, for school students) and the Study Process Questionnaire (SPQ, for tertiary students), with the addition of an ‘achieving’ approach, which students use to maximise grades. The following article summarises my work on this: ‘The role of metalearning in study processes’ (British Journal of Educational Psychology, 55, 185-212, 1985).
The Revised Study Process Questionnaire (R-SPQ-2F) uses only surface and deep motives and strategies, together with total approach scores. It can be downloaded free of charge, with an explanatory article, and used for research purposes as long as it is acknowledged in the usual way. Please note that the R-SPQ-2F is designed to reflect students’ approaches to learning in their current teaching context, so it is an instrument to evaluate teaching rather than one that characterises students as “surface learners” or “deep learners”. The earlier instruments were also used to label students (he is a surface learner and she is a deep learner), but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument to label students, that is, as an independent variable, but it should not be so used; it provides a set of dependent variables that may be used for assessing teaching.
MANUSCRIPT: Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts?
The fields of cognitive psychology and behavior analysis have undertaken separate investigations into effective learning strategies. These studies have led to several recommendations from both fields regarding teaching techniques that have been shown to enhance student performance. While cognitive psychology and behavior analysis have studied student performance independently from their different perspectives, the recommendations they make are remarkably similar. The lack of discussion between the two fields, despite these similarities, is surprising. The current paper seeks to remedy this oversight in two ways: first, by reviewing two techniques recommended by behavior analysts—guided notes and response cards—and comparing them to their counterparts in cognitive psychology that are potentially responsible for their effectiveness; and second, by outlining some other areas of overlap that could benefit from collaboration. By starting the discussion with the comparison of two specific recommendations for teaching techniques, we hope to galvanize a more extensive collaboration that will not only further the progression of both fields, but also extend the practical applications of the ensuing research.
MANUSCRIPT: Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning
Undergraduate palliative care education (UPCE) was mandatorily incorporated into medical education in Germany in 2009. Implementation of the new cross-sectional examination subject of palliative care (QB13) continues to be a major challenge for medical schools. It is clear that there is a need among students for more UPCE. On the other hand, there is a lack of teaching resources and of patient availability for the practical lessons. Digital media and elearning might be one solution to this problem. The primary objective of this study was to evaluate the elearning course Palliative Care Basics with regard to students’ acceptance of this teaching method and their performance in the written examination on the topic of palliative care. In addition, students’ self-estimation of competence in palliative care was assessed.
To investigate students’ acceptance of the elearning course Palliative Care Basics, we conducted a cross-sectional study that is appropriate for proof-of-concept evaluation. The sample consisted of three cohorts of medical students of Heinrich Heine University Dusseldorf (N = 670). The acceptance of the elearning approach was investigated by means of the standard evaluation of Heinrich Heine University. The effect of elearning on students’ self-estimation in palliative care competencies was measured by means of the German revised version of the Program in Palliative Care Education and Practice Questionnaire (PCEP-GR).
The elearning course Palliative Care Basics was well-received by medical students. The data yielded no significant effects of the elearning course on students’ self-estimation of their palliative care competencies. There was a trend toward a positive effect of the elearning course on the written examination mark.
Elearning is a promising approach in UPCE and well-accepted by medical students. It may be able to increase students’ knowledge in palliative care. However, it is likely that other approaches are needed to change students’ self-estimation of their palliative care competencies. It seems plausible that experience-based learning and encounters with dying patients and their relatives are required to increase students’ self-estimation of competence in palliative care.
via Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning | BMC Medical Education | Full Text.
Background: The progressive use of e-learning in postgraduate medical education calls for useful quality indicators. Many evaluation tools exist. However, these are diversely used and their empirical foundation is often lacking.
Objective: We aimed to identify an empirically founded set of quality indicators to set the bar for “good enough” e-learning.
Methods: We performed a Delphi procedure with a group of 13 international education experts and 10 experienced users of e-learning. The questionnaire started with 57 items. These items were the result of a previous literature review and focus group study performed with experts and users. Consensus was met when a rate of agreement of more than two-thirds was achieved.
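The consensus rule in the Methods can be sketched as a small filter over panel votes: an item is accepted when more than two-thirds of panellists rate it as important. The panel size and item names below are made up for illustration; only the threshold comes from the text.

```python
# Minimal sketch of the Delphi consensus rule described above: accept an
# item when more than two-thirds of the panel agrees it is important.
# The items and votes here are hypothetical; only the >2/3 threshold is
# taken from the study.

from fractions import Fraction

CONSENSUS_THRESHOLD = Fraction(2, 3)

def accepted_items(votes: dict[str, list[bool]]) -> list[str]:
    """Return the items whose rate of agreement exceeds two-thirds."""
    accepted = []
    for item, ratings in votes.items():
        agreement = Fraction(sum(ratings), len(ratings))
        if agreement > CONSENSUS_THRESHOLD:  # strictly more than 2/3
            accepted.append(item)
    return accepted

# Hypothetical first-round votes from a 23-member panel:
round1 = {
    "clear learning goals": [True] * 18 + [False] * 5,   # 18/23 agree
    "gamification elements": [True] * 12 + [False] * 11, # 12/23 agree
}
# Only "clear learning goals" clears the strict >2/3 bar.
```

Using exact fractions avoids the floating-point edge case where an agreement rate of exactly two-thirds could be misclassified by the strict comparison.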
Results: In the first round, the participants accepted 37 items of the 57 as important, reached no consensus on 20, and added 15 new items. In the second round, we added the comments from the first round to the items on which there was no consensus and added the 15 new items. After this round, a total of 72 items were addressed and, of these, 37 items were accepted and 34 were rejected due to lack of consensus.
Conclusions: This study produced a list of 37 items that can form the basis of an evaluation tool to evaluate postgraduate medical e-learning. This is, to our knowledge, the first time that quality indicators for postgraduate medical e-learning have been defined and validated. The next step is to create and validate an e-learning evaluation tool from these items.
E-learning—the use of Internet technologies to enhance knowledge and performance—has become a widely accepted instructional approach, yet little is known about its current use in postgraduate medical education. We sought to determine the utilization of e-learning by United States internal medicine residency programs, program director (PD) perceptions of e-learning, and associations between e-learning use and residency program characteristics.
We conducted a national survey in collaboration with the Association of Program Directors in Internal Medicine of all United States internal medicine residency programs.
Of the 368 PDs, 214 (58.2%) completed the e-learning survey. Use of synchronous e-learning at least sometimes, somewhat often, or very often was reported by 85 programs (39.7%); 153 programs (71.5%) reported using asynchronous e-learning at least sometimes, somewhat often, or very often. Most programs (168; 79%) do not have a budget to integrate e-learning. Mean (SD) scores for the PD perceptions of e-learning ranged from 3.01 (0.94) to 3.86 (0.72) on a 5-point scale. The odds of synchronous e-learning use were higher in programs with a budget for its implementation (odds ratio, 3.0 [95% CI, 1.04–8.7]; P = .04).
Residency programs could be better resourced to integrate e-learning technologies. Asynchronous e-learning was used more than synchronous, which may reflect the need to accommodate busy resident schedules and duty-hour restrictions. PD perceptions of e-learning are relatively moderate, and future research should determine whether PD reluctance to adopt e-learning is based on unawareness of the evidence, perceptions that e-learning is expensive, or judgments about value versus effectiveness.
ABSTRACT: A Video-Based Coaching Intervention to Improve Surgical Skill in Fourth-Year Medical Students
For senior medical students pursuing careers in surgery, specific technical feedback is critical for developing foundational skills in preparation for residency. This pilot study seeks to assess the feasibility of a video-based coaching intervention to improve the suturing skills of fourth-year medical students.
Fourth-year medical students pursuing careers in surgery were randomized to intervention vs. control groups and completed 2 video recorded suture tasks. Students in the intervention group received a structured coaching session between consecutive suturing tasks, whereas students in the control group did not. Each coaching session consisted of a video review of the student’s first suture task with a faculty member, who provided directed feedback regarding technique. Following each suturing task, students were asked to self-assess their performance and provide feedback regarding the utility of the coaching session. All videos were deidentified and graded by independent faculty members for evaluation of suture technique.
The University of Michigan Medical School in Ann Arbor, Michigan.
All fourth-year medical students pursuing careers in surgical specialties were contacted via e-mail for voluntary participation. In all, 16 students completed both baseline and follow-up suture tasks.
All students who completed the coaching session would definitely recommend the session for other students. A total of 94% of the students strongly agreed that the exercise was a beneficial experience, and 75% strongly agreed that it improved their technical skills. Based on faculty grading, students in the intervention group demonstrated greater average improvements in bimanual dexterity compared to students in the control group, whereas students in the control group demonstrated greater average improvements in the domains of efficiency and tissue handling compared to the intervention group. Based on student self-assessments, those in the intervention group had greater subjective improvements in all scored domains of bimanual dexterity, efficiency, tissue handling, and consistency compared to the control group. Subjective, free-response comments centered on themes of becoming more aware of hand movements when viewing their suturing from a new perspective, and on the usefulness of the coaching advice.
This pilot study demonstrates the feasibility of a video-based coaching intervention for senior medical students. Students in the coaching arm reported improvements in all domains of technical skill and described the experience as overwhelmingly positive. In summary, video-based review shows promise as a means of providing specific technical feedback in medical education.