Author: Brian S McGowan, PhD

RESOURCE: Brain circuit helps us learn by watching others

It’s often said that experience is the best teacher, but the experiences of other people may be even better. If you saw a friend get chased by a neighborhood dog, for instance, you would learn to stay away from the dog without having to undergo that experience yourself.

This kind of learning, known as observational learning, offers a major evolutionary advantage, says Kay Tye, an MIT associate professor of brain and cognitive sciences and a member of MIT’s Picower Institute for Learning and Memory.

“So much of what we learn day-to-day is through observation,” she says. “Especially for something that is going to potentially hurt or kill you, you could imagine that the cost of learning it firsthand is very high. The ability to learn it through observation is extremely adaptive, and gives a major advantage for survival.”

Tye and her colleagues at MIT have now identified the brain circuit that is required for this kind of learning. This circuit, which is distinct from the brain network used to learn from firsthand experiences, relies on input from a part of the brain responsible for interpreting social cues.

Former MD/PhD student Stephen Allsop, along with Romy Wichmann, Fergil Mills, and Anthony Burgos-Robles, co-led this study, which appears in the May 3 issue of Cell.

via Brain circuit helps us learn by watching others | MIT News.

CLASSIC: Association Between Funding and Quality of Published Medical Education Research

Context Methodological shortcomings in medical education research are often attributed to insufficient funding, yet an association between funding and study quality has not been established.

Objectives To develop and evaluate an instrument for measuring the quality of education research studies and to assess the relationship between funding and study quality.

Design, Setting, and Participants Internal consistency, interrater and intrarater reliability, and criterion validity were determined for a 10-item medical education research study quality instrument (MERSQI). This was applied to 210 medical education research studies published in 13 peer-reviewed journals between September 1, 2002, and December 31, 2003. The amount of funding obtained per study and the publication record of the first author were determined by survey.

Main Outcome Measures Study quality as measured by the MERSQI (potential maximum total score, 18; maximum domain score, 3), amount of funding per study, and previous publications by the first author.

Results The mean MERSQI score was 9.95 (SD, 2.34; range, 5-16). Mean domain scores were highest for data analysis (2.58) and lowest for validity (0.69). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98 and 0.78 to 0.998, respectively. Total MERSQI scores were associated with expert quality ratings (Spearman ρ, 0.73; 95% confidence interval [CI], 0.56-0.84; P < .001), 3-year citation rate (0.8 increase in score per 10 citations; 95% CI, 0.03-1.30; P = .003), and journal impact factor (1.0 increase in score per 6-unit increase in impact factor; 95% CI, 0.34-1.56; P = .003). In multivariate analysis, MERSQI scores were independently associated with study funding of $20 000 or more (0.95 increase in score; 95% CI, 0.22-1.86; P = .045) and previous medical education publications by the first author (1.07 increase in score per 20 publications; 95% CI, 0.15-2.23; P = .047).

Conclusion The quality of published medical education research is associated with study funding.
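
As a rough illustration of the instrument described above, the sketch below (not the authors’ analysis code) tallies a MERSQI total from its six domain scores and checks the rank correlation with expert quality ratings; the domain labels and all numbers are hypothetical.

```python
# A rough sketch, not the authors' analysis code: tally MERSQI totals from the
# six instrument domains and check their rank correlation with expert ratings.
# All domain labels and numbers below are hypothetical.
from scipy.stats import spearmanr

# Each row: [design, sampling, type_of_data, validity, data_analysis, outcomes];
# each domain is scored up to 3, so the total score can reach 18.
domain_scores = [
    [2.0, 1.5, 3.0, 0.5, 2.5, 1.5],
    [1.0, 1.0, 1.0, 0.0, 3.0, 1.0],
    [3.0, 2.0, 3.0, 1.5, 3.0, 2.5],
    [2.0, 0.5, 1.0, 0.5, 2.0, 1.0],
    [1.5, 1.5, 3.0, 1.0, 2.5, 2.0],
]
expert_rating = [4, 2, 5, 2, 4]  # hypothetical global quality ratings (1-5 scale)

totals = [sum(row) for row in domain_scores]
rho, p = spearmanr(totals, expert_rating)
print("MERSQI totals:", totals)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```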

via Association Between Funding and Quality of Published Medical Education Research | Medical Education and Training | JAMA | JAMA Network.

RESOURCE: Students’ Approaches to Learning | John Biggs

In 1976, Swedish researchers Ference Marton and Roger Säljö demonstrated that students learn not what teachers think they should learn, but what students perceive the task to demand of them. Students using a ‘surface’ approach see a task as requiring specific answers to questions, so they rote-learn bits and pieces; students using a ‘deep’ approach want to understand, so they focus on themes and main ideas.

My own take on this was to develop questionnaires to assess students’ use of these approaches: the Learning Process Questionnaire (LPQ, for school students) and the Study Process Questionnaire (SPQ, for tertiary students), with the addition of an ‘achieving’ approach, which students use to maximise grades. The following article summarises my work on this: ‘The role of metalearning in study processes’ (British Journal of Educational Psychology, 55, 185-212, 1985).

The Revised Study Process Questionnaire (R-SPQ-2F) uses only surface and deep motives and strategies, combined into total approach scores. It can be downloaded free of charge, together with an explanatory article, and used for research purposes as long as it is acknowledged in the usual way. Please note that the R-SPQ-2F is designed to reflect students’ approaches to learning in their current teaching context, so it is an instrument for evaluating teaching rather than one that characterises students as “surface learners” or “deep learners”. The earlier instrument was also used to label students (he is a surface learner, she is a deep learner), but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument to label students, that is, as an independent variable; it should not be used that way, because it provides a set of dependent variables for assessing teaching.
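
For readers who want to see how such questionnaire data turn into scores, here is a minimal scoring sketch; the item-to-subscale mapping and the example responses are assumptions for illustration only, so consult the explanatory article mentioned above for the actual scoring key.

```python
# A minimal scoring sketch for the R-SPQ-2F. The item-to-subscale mapping below
# is an assumption for illustration; check the published scoring key before use.
from typing import Dict, List

SUBSCALES: Dict[str, List[int]] = {
    "deep_motive":      [1, 5, 9, 13, 17],
    "deep_strategy":    [2, 6, 10, 14, 18],
    "surface_motive":   [3, 7, 11, 15, 19],
    "surface_strategy": [4, 8, 12, 16, 20],
}

def score_rspq(responses: Dict[int, int]) -> Dict[str, int]:
    """Sum 1-5 Likert responses into subscale and total approach scores."""
    scores = {name: sum(responses[item] for item in items)
              for name, items in SUBSCALES.items()}
    scores["deep_approach"] = scores["deep_motive"] + scores["deep_strategy"]
    scores["surface_approach"] = scores["surface_motive"] + scores["surface_strategy"]
    return scores

# Hypothetical respondent who endorses deep items more strongly than surface ones.
deep_items = SUBSCALES["deep_motive"] + SUBSCALES["deep_strategy"]
example = {item: (4 if item in deep_items else 2) for item in range(1, 21)}
print(score_rspq(example))
```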

via Students’ Approaches to Learning | John Biggs.

MANUSCRIPT: Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts?

The fields of cognitive psychology and behavior analysis have undertaken separate investigations into effective learning strategies. These studies have led to several recommendations from both fields regarding teaching techniques that have been shown to enhance student performance. While cognitive psychology and behavior analysis have studied student performance independently from their different perspectives, the recommendations they make are remarkably similar. The lack of discussion between the two fields, despite these similarities, is surprising. The current paper seeks to remedy this oversight in two ways: first, by reviewing two techniques recommended by behavior analysts—guided notes and response cards—and comparing them to their counterparts in cognitive psychology that are potentially responsible for their effectiveness; and second, by outlining some other areas of overlap that could benefit from collaboration. By starting the discussion with the comparison of two specific recommendations for teaching techniques, we hope to galvanize a more extensive collaboration that will not only further the progression of both fields, but also extend the practical applications of the ensuing research.

via Can cognitive processes help explain the success of instructional techniques recommended by behavior analysts? | npj Science of Learning.

MANUSCRIPT: Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning

Background
Undergraduate palliative care education (UPCE) was made a mandatory part of medical education in Germany in 2009. Implementation of the new cross-sectional examination subject of palliative care (QB13) continues to be a major challenge for medical schools. It is clear that students need more UPCE; on the other hand, there is a lack of teaching resources and of patients available for practical lessons. Digital media and elearning might be one solution to this problem. The primary objective of this study is to evaluate the elearning course Palliative Care Basics with regard to students’ acceptance of this teaching method and their performance in the written examination on palliative care. In addition, students’ self-estimation of competence in palliative care was assessed.

Methods
To investigate students’ acceptance of the elearning course Palliative Care Basics, we conducted a cross-sectional study that is appropriate for proof-of-concept evaluation. The sample consisted of three cohorts of medical students of Heinrich Heine University Dusseldorf (N = 670). The acceptance of the elearning approach was investigated by means of the standard evaluation of Heinrich Heine University. The effect of elearning on students’ self-estimation in palliative care competencies was measured by means of the German revised version of the Program in Palliative Care Education and Practice Questionnaire (PCEP-GR).

Results
The elearning course Palliative Care Basics was well received by medical students. The data yielded no significant effects of the elearning course on students’ self-estimation of their palliative care competencies. There was a trend toward the elearning course having a positive effect on the written examination mark.

Conclusions
Elearning is a promising approach in UPCE and is well accepted by medical students. It may be able to increase students’ knowledge of palliative care. However, other approaches are likely needed to change students’ self-estimation of their palliative care competencies. It seems plausible that experience-based learning and encounters with dying patients and their relatives are required to increase that self-estimation.

via Can elearning be used to teach palliative care? – medical students’ acceptance, knowledge, and self-estimation of competence in palliative care after elearning | BMC Medical Education | Full Text.

MANUSCRIPT: Consensus on Quality Indicators of Postgraduate Medical E-Learning: Delphi Study

Background: The progressive use of e-learning in postgraduate medical education calls for useful quality indicators. Many evaluation tools exist. However, these are diversely used and their empirical foundation is often lacking.

Objective: We aimed to identify an empirically founded set of quality indicators to set the bar for “good enough” e-learning.

Methods: We performed a Delphi procedure with a group of 13 international education experts and 10 experienced users of e-learning. The questionnaire started with 57 items, which were the result of a previous literature review and a focus group study with experts and users. Consensus was reached when a rate of agreement of more than two-thirds was achieved.

Results: In the first round, the participants accepted 37 items of the 57 as important, reached no consensus on 20, and added 15 new items. In the second round, we added the comments from the first round to the items on which there was no consensus and added the 15 new items. After this round, a total of 72 items were addressed and, of these, 37 items were accepted and 34 were rejected due to lack of consensus.

Conclusions: This study produced a list of 37 items that can form the basis of an evaluation tool to evaluate postgraduate medical e-learning. This is, to our knowledge, the first time that quality indicators for postgraduate medical e-learning have been defined and validated. The next step is to create and validate an e-learning evaluation tool from these items.
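
As a side note on the consensus rule described in the Methods, the sketch below shows one way such an agreement check could be implemented; the panel size is taken from the abstract, while the per-item vote counts are hypothetical.

```python
# A sketch of the consensus rule described in the Methods: an item is accepted
# once more than two-thirds of the panel agree it is important. Panel size is
# taken from the abstract (13 experts + 10 users); the vote counts are made up.
from fractions import Fraction

def reaches_consensus(agree: int, panel_size: int,
                      threshold: Fraction = Fraction(2, 3)) -> bool:
    """Return True when the agreement rate exceeds the two-thirds threshold."""
    return Fraction(agree, panel_size) > threshold

PANEL_SIZE = 13 + 10  # education experts + experienced e-learning users

for item, votes in [("item A", 18), ("item B", 14), ("item C", 16)]:
    verdict = "accepted" if reaches_consensus(votes, PANEL_SIZE) else "no consensus"
    print(f"{item}: {votes}/{PANEL_SIZE} agree -> {verdict}")
```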

via Consensus on Quality Indicators of Postgraduate Medical E-Learning: Delphi Study | de Leeuw | JMIR Medical Education.

MANUSCRIPT: E-learning in graduate medical education: survey of residency program directors

Background
E-learning—the use of Internet technologies to enhance knowledge and performance—has become a widely accepted instructional approach. Little is known about the current use of e-learning in postgraduate medical education. We sought to determine the utilization of e-learning by United States internal medicine residency programs, program director (PD) perceptions of e-learning, and associations between e-learning use and residency program characteristics.

Methods
We conducted a national survey in collaboration with the Association of Program Directors in Internal Medicine of all United States internal medicine residency programs.

Results
Of the 368 PDs, 214 (58.2%) completed the e-learning survey. Use of synchronous e-learning at least sometimes, somewhat often, or very often was reported by 85 programs (39.7%), and 153 programs (71.5%) reported using asynchronous e-learning at least that often. Most programs (168; 79%) do not have a budget to integrate e-learning. Mean (SD) scores for PD perceptions of e-learning ranged from 3.01 (0.94) to 3.86 (0.72) on a 5-point scale. The odds of synchronous e-learning use were higher in programs with a budget for its implementation (odds ratio, 3.0 [95% CI, 1.04–8.7]; P = .04).

Conclusions
Residency programs could be better resourced to integrate e-learning technologies. Asynchronous e-learning was used more often than synchronous e-learning, which may reflect the need to accommodate busy resident schedules and duty-hour restrictions. PD perceptions of e-learning are relatively moderate, and future research should determine whether PD reluctance to adopt e-learning stems from unawareness of the evidence, perceptions that e-learning is expensive, or judgments about its value versus effectiveness.
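
For readers unfamiliar with how an odds ratio like the one reported above is obtained, the sketch below computes a crude (unadjusted) odds ratio and Wald 95% confidence interval from a 2x2 table; the cell counts are hypothetical, and the study’s own estimate may come from an adjusted model.

```python
# A sketch of how a crude (unadjusted) odds ratio and Wald 95% CI are computed
# from a 2x2 table. The cell counts below are hypothetical; the study's estimate
# may come from an adjusted model rather than a raw table.
import math

# Rows: program has an e-learning budget vs. not; columns: uses synchronous
# e-learning at least sometimes (yes, no). Counts are illustrative only.
a, b = 25, 18    # budget:    users, non-users
c, d = 60, 111   # no budget: users, non-users

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```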

via E-learning in graduate medical education: survey of residency program directors.

ABSTRACT: A Video-Based Coaching Intervention to Improve Surgical Skill in Fourth-Year Medical Students

OBJECTIVE:
For senior medical students pursuing careers in surgery, specific technical feedback is critical for developing foundational skills in preparation for residency. This pilot study seeks to assess the feasibility of a video-based coaching intervention to improve the suturing skills of fourth-year medical students.

DESIGN:
Fourth-year medical students pursuing careers in surgery were randomized to intervention vs. control groups and completed 2 video-recorded suture tasks. Students in the intervention group received a structured coaching session between consecutive suturing tasks, whereas students in the control group did not. Each coaching session consisted of a video review of the student’s first suture task with a faculty member who provided directed feedback on technique. Following each suturing task, students were asked to self-assess their performance and provide feedback regarding the utility of the coaching session. All videos were deidentified and graded by independent faculty members for evaluation of suture technique.

SETTING:
The University of Michigan Medical School in Ann Arbor, Michigan.

PARTICIPANTS:
All fourth-year medical students pursuing careers in surgical specialties were contacted via e-mail for voluntary participation. In all, 16 students completed both the baseline and follow-up suture tasks.

RESULTS:
All students who completed the coaching session said they would definitely recommend it to other students. A total of 94% of the students strongly agreed that the exercise was a beneficial experience, and 75% strongly agreed that it improved their technical skills. Based on faculty grading, students in the intervention group demonstrated greater average improvements in bimanual dexterity than students in the control group, whereas students in the control group demonstrated greater average improvements in efficiency and tissue handling than the intervention group. Based on student self-assessments, those in the intervention group had greater subjective improvements in all scored domains (bimanual dexterity, efficiency, tissue handling, and consistency) compared with the control group. Subjective, free-response comments centered on themes of becoming more aware of hand movements when viewing one’s suturing from a new perspective and the usefulness of the coaching advice.

CONCLUSIONS:
This pilot study demonstrates the feasibility of a video-based coaching intervention for senior medical students. Students who participated in the coaching arm reported improvements in all domains of technical skill and found the experience overwhelmingly positive. In summary, video-based review shows promise as an educational tool to provide specific technical feedback in medical education.

via A Video-Based Coaching Intervention to Improve Surgical Skill in Fourth-Year Medical Students. – PubMed – NCBI.

ABSTRACT: Beyond Continuing Medical Education: Clinical Coaching as a Tool for Ongoing Professional Development

PROBLEM:
For most physicians, the period of official apprenticeship ends with the completion of residency or fellowship, yet the acquisition of expertise requires ongoing opportunities to practice a given skill and obtain structured feedback on one’s performance.

APPROACH:
In July 2013, the authors developed a clinical coaching pilot program to provide early-career hospitalists with feedback from a senior clinical advisor (SCA) at Massachusetts General Hospital. A Hospital Medicine Unit-wide retreat was held to help design the SCA role and obtain faculty buy-in. Twelve SCAs were recruited from hospitalists with more than five years of experience; during the pilot, they served as clinical coaches to 28 early-career hospitalists. Clinical narratives and programmatic surveys were collected from SCAs and early-career hospitalists.

OUTCOMES:
Of 25 responding early-career hospitalists, 23 (92%) rated the SCA role as useful to very useful, 20 (80%) reported interactions with the SCA led to at least one change in their diagnostic approach, and 13 (52%) reported calling fewer subspecialty consults as a result of guidance from the SCA. In response to questions about professional development, 18 (72%) felt more comfortable as an independent physician following their interactions with the SCA, and 19 (76%) thought the interactions improved the quality of care they delivered.

NEXT STEPS:
To better understand the impact and generalizability of clinical coaching, a larger, longitudinal study is required to examine patient and provider outcomes in detail. The SCA role also needs further refinement to meet faculty needs, which could include faculty development.

via Beyond Continuing Medical Education: Clinical Coaching as a Tool for Ongoing Professional Development. – PubMed – NCBI.

ABSTRACT: Virtual reality-based simulators for spine surgery: a systematic review

BACKGROUND CONTEXT:
Virtual reality (VR)-based simulators offer numerous benefits and are very useful in assessing and training surgical skills. Virtual reality-based simulators are standard in some surgical subspecialties, but their actual use in spinal surgery remains unclear. Currently, only technical reviews of VR-based simulators are available for spinal surgery.

PURPOSE:
Thus, we performed a systematic review that examined the existing research on VR-based simulators in spinal procedures. We also assessed the quality of current studies evaluating VR-based training in spinal surgery. Moreover, we wanted to provide a guide for future studies evaluating VR-based simulators in this field.

STUDY DESIGN AND SETTING:
This is a systematic review of the current scientific literature regarding VR-based simulation in spinal surgery.

METHODS:
Five data sources were systematically searched to identify relevant peer-reviewed articles regarding virtual, mixed, or augmented reality-based simulators in spinal surgery. A qualitative data synthesis was performed with particular attention to evaluation approaches and outcomes. Additionally, all included studies were appraised for their quality using the Medical Education Research Study Quality Instrument (MERSQI) tool.

RESULTS:
The initial review identified 476 abstracts, and 63 full texts were then assessed by two reviewers. Finally, 19 studies that examined simulators for the following procedures were selected: pedicle screw placement, vertebroplasty, posterior cervical laminectomy and foraminotomy, lumbar puncture, facet joint injection, and spinal needle insertion and placement. These studies had low-to-medium methodological quality, with a mean MERSQI score of 11.47 out of 18 (standard deviation = 1.81).

CONCLUSIONS:
This review described the current state and applications of VR-based simulator training and assessment approaches in spinal procedures. Limitations, strengths, and future advancements of VR-based simulators for training and assessment in spinal surgery were explored. Higher-quality studies with patient-related outcome measures are needed. To establish further adaptation of VR-based simulators in spinal surgery, future evaluations need to improve the study quality, apply long-term study designs, and examine non-technical skills, as well as multidisciplinary team training.

via Virtual reality-based simulators for spine surgery: a systematic review. – PubMed – NCBI.