
Author: Brian S McGowan, PhD

ABSTRACT: Education in Sepsis: A Review for the Clinician of What Works, for Whom, and in What Circumstances

Sepsis is a major cause of morbidity and mortality in both the general and obstetric populations. Concerns have been raised regarding some cases of substandard care in the management of the septic patient, and there is a real need for continuing multidisciplinary medical education in the recognition and management of the pregnant patient experiencing sepsis. This review aims to summarize studies on medical education in sepsis, both to inform clinicians working in obstetrics and gynaecology and to assist in planning educational programs.

via Education in Sepsis: A Review for the Clinician of What Works, for Whom, and in What Circumstances. – PubMed – NCBI.

ABSTRACT: The Use of the Delphi and Other Consensus Group Methods in Medical Education Research: A Review

PURPOSE:
Consensus group methods, such as the Delphi method and nominal group technique (NGT), are used to synthesize expert opinions when evidence is lacking. Despite their extensive use, these methods are inconsistently applied. Their use in medical education research has not been well studied. The authors set out to describe the use of consensus methods in medical education research and to assess the reporting quality of these methods and results.
METHOD:
Using scoping review methods, the authors searched the Medline, Embase, PsycInfo, PubMed, Scopus, and ERIC databases for 2009-2016. Full-text articles that focused on medical education and used the Delphi, RAND, NGT, or other consensus group methods were included. A standardized extraction form was used to collect article demographic data and features reflecting methodological rigor.
RESULTS:
Of the articles reviewed, 257 met the inclusion criteria. The Modified Delphi (105/257; 40.8%), Delphi (91/257; 35.4%), and NGT (23/257; 8.9%) methods were most often used. The most common study purposes were curriculum development or reform (68/257; 26.5%), assessment tool development (55/257; 21.4%), and defining competencies (43/257; 16.7%). The reporting quality varied, with 70.0% (180/257) of articles reporting a literature review, 27.2% (70/257) reporting what background information was provided to participants, 66.1% (170/257) describing the number of participants, 40.1% (103/257) reporting whether private decisions were collected, 37.7% (97/257) reporting whether formal feedback of group ratings was shared, and 43.2% (111/257) defining consensus a priori.
CONCLUSIONS:
Consensus methods are poorly standardized and inconsistently used in medical education research. Improved criteria for reporting are needed.

via The Use of the Delphi and Other Consensus Group Methods in Medical Education Research: A Review. – PubMed – NCBI.

ABSTRACT: Are You Sure You Want to Do That? Fostering the Responsible Conduct of Medical Education Research

Engaging in questionable research practices (QRPs) is a noted problem across many disciplines, including medical education. While QRPs are rarely discussed in the context of medical education, that does not mean that medical education researchers are immune. Therefore, the authors seek to raise medical educators’ awareness of the responsible conduct of research (RCR) and call the community to action before QRPs negatively affect the field.

The authors define QRPs and introduce examples that could easily happen in medical education research because of vulnerabilities particular to the field. The authors suggest that efforts in research, including medical education research, should focus on facilitating a change in the culture of research to foster RCR, and that these efforts should make explicit both the individual and system factors that ultimately influence researcher behavior. They propose a set of approaches within medical education training initiatives to foster such a culture: empowering research mentors as role models, openly airing research conduct dilemmas and infractions, protecting whistleblowers, establishing mechanisms for facilitating responsibly conducted research, and rewarding responsible researchers.

The authors recommend that efforts at culture change be focused on the growing graduate programs, fellowships, and faculty academies in medical education to ensure that RCR training is an integral component for both students and faculty. They encourage medical education researchers to think creatively about solutions to the challenges they face and to act together as an international community to avoid wasting research efforts, damaging careers, and stunting medical education research through QRPs.

via Are You Sure You Want to Do That? Fostering the Responsible Conduct of Medical Education Research. – PubMed – NCBI.

ABSTRACT: Evaluating a technology supported interactive response system during the laboratory section of a histology course

Monitoring of student learning through systematic formative assessment is important for adjusting pedagogical strategies. However, traditional formative assessments, such as quizzes and written assignments, may not be sufficiently timely for making adjustments to a learning process. Technology-supported formative assessment tools assess student knowledge, allow for immediate feedback, facilitate classroom dialogues, and have the potential to modify student learning strategies. In an attempt to integrate technology-supported formative assessment into the laboratory section of an upper-level histology course, the interactive application Learning Catalytics™, a cloud-based assessment system, was used. This study, conducted during the 2015 histology courses at Cornell University, concluded that the application is helpful for identifying student misconceptions “on-the-go,” engaging otherwise marginalized students, and forming a new communication venue between students and instructors. There was no overall difference between grades from topics that used the application and grades from those that did not, and students reported that it only slightly helped improve their understanding of the topic (3.8 ± 0.99 on a five-point Likert scale). However, they highly recommended using it (4.2 ± 0.71). The major limitation concerned the application’s image display and graphical resolution. Even though students embrace the use of technology, 39% reported benefits of having the traditional light microscope available. The experience with this cohort of students led the instructors to conclude that the newest tools are not always better, but rather can complement traditional instruction methods.

via Evaluating a technology supported interactive response system during the laboratory section of a histology course. – PubMed – NCBI.

MANUSCRIPT: Effectiveness of Adaptive E-Learning Environments on Knowledge, Competence, and Behavior in Health Professionals and Students

BACKGROUND:
Adaptive e-learning environments (AEEs) can provide tailored instruction by adapting content, navigation, presentation, multimedia, and tools to each user’s navigation behavior, individual objectives, knowledge, and preferences. AEEs can have various levels of complexity, ranging from systems using a simple adaptive functionality to systems using artificial intelligence. While AEEs are promising, their effectiveness for the education of health professionals and health professions students remains unclear.
OBJECTIVE:
The purpose of this systematic review is to assess the effectiveness of AEEs in improving knowledge, competence, and behavior in health professionals and students.
METHODS:
We will follow the Cochrane Collaboration and the Effective Practice and Organisation of Care (EPOC) Group guidelines on systematic review methodology. A systematic search of the literature will be conducted in 6 bibliographic databases (CINAHL, EMBASE, ERIC, PsycINFO, PubMed, and Web of Science) using the concepts “adaptive e-learning environments,” “health professionals/students,” and “effects on knowledge/skills/behavior.” We will include randomized and nonrandomized controlled trials, in addition to controlled before-after, interrupted time series, and repeated measures studies published between 2005 and 2017. Two review authors will independently screen the title and abstract of each study, followed by a full-text assessment of potentially eligible studies. Using the EPOC extraction form, 1 review author will conduct data extraction and a second author will validate it. The methodological quality of included studies will be independently assessed by 2 review authors using the EPOC risk of bias criteria. Included studies will be synthesized in a descriptive analysis. Where appropriate, data will be pooled by meta-analysis using RevMan software version 5.1, taking the heterogeneity of studies into account.
RESULTS:
The review is in progress. We plan to submit the results in early 2018.
CONCLUSION:
Providing tailored instruction to health professionals and students is a priority in order to optimize learning and clinical outcomes. This systematic review will synthesize the best available evidence regarding the effectiveness of AEEs in improving knowledge, competence, and behavior in health professionals and students. It will provide guidance to policy makers, hospital managers, and researchers in terms of AEE development, implementation, and evaluation in health care.

via Effectiveness of Adaptive E-Learning Environments on Knowledge, Competence, and Behavior in Health Professionals and Students: Protocol for a Syste… – PubMed – NCBI.
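An aside on the pooling step this protocol mentions: RevMan’s meta-analysis rests on inverse-variance weighting, and the DerSimonian-Laird estimator is the standard random-effects variant. Below is a minimal Python sketch of that estimator; the effect sizes and variances are hypothetical illustrations, not data from the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                         # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)      # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = 1.0 / (variances + tau2)           # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical standardized mean differences and variances from three trials
est, se, tau2 = dersimonian_laird([0.30, 0.55, 0.10], [0.04, 0.09, 0.02])
print(f"pooled effect = {est:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```

When tau² is zero the model collapses to a fixed-effect analysis; a large tau² signals the heterogeneity the protocol says it will consider before pooling.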

ABSTRACT: Departing from PowerPoint default mode: Applying Mayer’s multimedia principles for enhanced learning of parasitology

PURPOSE:
PowerPoint (PPT™) presentations have become an integral part of day-to-day teaching in medicine. Most often, PPT™ is used in its default mode, which is known to cause boredom and ineffective learning. Research has shown improved short-term memory when multimedia principles are applied to designing and delivering lectures. However, such evidence in medical education is scarce. Therefore, we attempted to evaluate the effect of multimedia principles on enhanced learning of parasitology.
METHODOLOGY:
Second-year medical students received a series of lectures; half of the lectures used traditionally designed PPT™ and the rest used slides designed according to Mayer’s multimedia principles. Students answered pre- and post-tests at the end of each lecture (test I) and an essay test after six months (test II), which assessed their short- and long-term knowledge retention, respectively. Students’ feedback on the quality and content of the lectures was collected.
RESULTS:
A statistically significant difference was found between post-test scores of traditional and modified lectures (P = 0.019), indicating improved short-term memory after modified lectures. Similarly, students scored better in test II on the content learnt through modified lectures, indicating enhanced comprehension and improved long-term memory (P < 0.001). Many students appreciated learning through multimedia-designed PPT™ and suggested their continued use.
CONCLUSIONS:
It is time to depart from default PPT™ and adopt multimedia principles to enhance comprehension and improve short- and long-term knowledge retention. Further, medical educators may be trained and encouraged to apply multimedia principles for designing and delivering effective lectures.

via Departing from PowerPoint default mode: Applying Mayer’s multimedia principles for enhanced learning of parasitology. – PubMed – NCBI.
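The abstract reports P values but does not name its statistical tests; as a rough illustration only, here is how a matched pre/post comparison like this is often run with a paired t-test in Python. The scores below are hypothetical, not the study’s data, and the paired t-test is one common choice rather than necessarily the authors’ method.

```python
from scipy import stats

# Hypothetical post-test scores (out of 20) for the same eight students
# after a traditional lecture and after a multimedia-principles lecture
traditional = [12, 14, 11, 15, 13, 12, 14, 10]
modified = [14, 15, 13, 17, 14, 13, 16, 12]

# Paired t-test: each student serves as their own control
t_stat, p_value = stats.ttest_rel(modified, traditional)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 suggests a reliable gain
```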

ABSTRACT: Validation of a Teaching Effectiveness Assessment in Psychiatry Continuing Medical Education

OBJECTIVE:
Little is known about factors associated with effective continuing medical education (CME) in psychiatry. The authors aimed to validate a method to assess psychiatry CME teaching effectiveness and to determine associations between teaching effectiveness scores and characteristics of presentations, presenters, and participants.
METHODS:
This cross-sectional study was conducted at the Mayo Clinic Psychiatry Clinical Reviews and Psychiatry in Medical Settings. Presentations were evaluated using an eight-item CME teaching effectiveness instrument, its content based on previously published instruments. Factor analysis, internal consistency and interrater reliabilities, and temporal stability reliability were calculated. Associations were determined between teaching effectiveness scores and characteristics of presentations, presenters, and participants.
RESULTS:
In total, 246 of 364 participants returned completed surveys (response rate, 67.6%). Factor analysis revealed a unidimensional model of psychiatry CME teaching effectiveness. Cronbach α for the instrument was excellent at 0.94. Item mean scores (SD) ranged from 4.33 (0.92) to 4.71 (0.59) on a 5-point scale. Overall interrater reliability was 0.84 (95% CI, 0.75-0.91), and temporal stability was 0.89 (95% CI, 0.77-0.97). No associations were found between teaching effectiveness scores and characteristics of presentations, presenters, and participants.
CONCLUSIONS:
This study provides a new, validated measure of CME teaching effectiveness that could be used to improve psychiatry CME. In contrast to prior research in other medical specialties, CME teaching effectiveness scores were not associated with the use of case-based or interactive presentations. This outcome suggests the need for distinctive considerations regarding psychiatry CME; a singular approach to CME teaching may not apply to all medical specialties.

via Validation of a Teaching Effectiveness Assessment in Psychiatry Continuing Medical Education. – PubMed – NCBI.
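For readers curious about the internal consistency figure reported above: Cronbach α is computed directly from the item variances and the variance of respondents’ total scores. A minimal Python sketch follows, using a hypothetical ratings matrix (eight items on a 5-point scale, mirroring the instrument’s shape but not its data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 respondents x 8 items on a 5-point scale
ratings = np.array([
    [5, 4, 5, 5, 4, 5, 5, 4],
    [4, 4, 4, 5, 4, 4, 4, 4],
    [3, 3, 4, 3, 3, 3, 4, 3],
    [5, 5, 5, 5, 5, 5, 5, 5],
    [4, 3, 4, 4, 4, 4, 3, 4],
    [2, 3, 2, 3, 2, 3, 2, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```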

ABSTRACT: Associations between teaching effectiveness scores and characteristics of presentations in hospital medicine continuing education

BACKGROUND:
There is little research regarding characteristics of effective continuing medical education (CME) presentations in hospital medicine (HM). Therefore, we sought to identify associations between validated CME teaching effectiveness scores and characteristics of CME presentations in the field of HM.
DESIGN/SETTING:
This was a cross-sectional study of participants and didactic presentations from a national HM CME course in 2014.
MEASUREMENTS:
Participants provided CME teaching effectiveness (CMETE) ratings using an instrument with known validity evidence. Overall CMETE scores (5-point scale: 1 = strongly disagree; 5 = strongly agree) were averaged for each presentation, and associations between scores and presentation characteristics were determined using the Kruskal-Wallis test. The threshold for statistical significance was set at P < 0.05.
RESULTS:
A total of 277 out of 368 participants (75.3%) completed evaluations for the 32 presentations. CMETE scores (mean [standard deviation]) were significantly associated with the use of audience response (4.64 [0.16]) versus no audience response (4.49 [0.16]; P = 0.01), longer presentations (≥30 minutes: 4.67 [0.13] vs <30 minutes: 4.51 [0.18]; P = 0.02), and larger number of slides (≥50: 4.66 [0.17] vs <50: 4.55 [0.17]; P = 0.04). There were no significant associations between CMETE scores and use of clinical cases, defined goals, or summary slides.
CONCLUSIONS:
To our knowledge, this is the first study regarding associations between validated teaching effectiveness scores and characteristics of effective CME presentations in HM. Our findings, which support previous research in other fields, indicate that CME presentations may be improved by increasing interactivity through the use of audience response systems and allowing longer presentations.

via Associations between teaching effectiveness scores and characteristics of presentations in hospital medicine continuing education. – PubMed – NCBI.
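The Kruskal-Wallis test the authors used compares score distributions across groups without assuming normality, which suits bounded 5-point evaluation scores. A minimal sketch with hypothetical per-presentation CMETE means (not the study’s data), assuming scipy is available:

```python
from scipy import stats

# Hypothetical mean CMETE scores per presentation, split by one characteristic
with_ars = [4.7, 4.6, 4.8, 4.5, 4.6]     # presentations using audience response
without_ars = [4.4, 4.5, 4.3, 4.6, 4.5]  # presentations without it

h_stat, p_value = stats.kruskal(with_ars, without_ars)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")  # significant at P < 0.05
```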

MANUSCRIPT: Improving Participant Feedback to Continuing Medical Education Presenters in Internal Medicine: A Mixed Methods Study

Evaluation and feedback are uniquely different: evaluation is summative and involves judgment, whereas feedback is formative and specifically intended to improve effectiveness.7,8 It is understood that useful feedback is provided in a timely fashion, behavior-specific, and balanced with both positive and constructive elements.7 Behavior-specific feedback is important because, unlike vague or judgmental comments, it identifies tangible actions for learners to improve upon. Feedback that is balanced (e.g., containing both positive and constructive elements) is particularly useful for poor performers, because it makes the overall feedback more acceptable, thus allowing learners to reflect more comfortably on the constructive feedback component. Reflection on feedback is important, because it has been observed that reflection is the critical link between receiving and using assessment feedback.2 Unfortunately, the feedback provided to CME presenters often lacks mention of specific behaviors, thus providing presenters with no means for improvement.

To access the article, click here!

 

There has never been a classroom…

It’s been said that “there has never been a classroom better than its teacher!” In short, I couldn’t agree more!

We have all been in the role of learner, sitting in a lecture or workshop, or participating in some virtual or on-demand learning activity. The classroom may be literal or figurative…but we are excited by the topic, the objectives, the opportunity…we settle in, the speaker (or teacher or facilitator) begins…and ugh…learning grinds to a halt.

Just this morning, as I listened to one of my favorite podcasts, this frustration smacked me right between the ears. I tuned in excited to hear about new research from an Ivy League-trained, fully tenured professor…it was a beautiful morning…the sun shone brightly…the birds chirped…and within seconds of the episode beginning I became distracted. In this case, the renowned subject matter expert ended the vast majority of her sentences with ‘upspeak’ – that thing where every sentence sounds like a question. I struggled to make it through…it was an inefficient learning experience, to say the least. Was she uncertain about what she was telling me? Was the data in question? What was she really trying to say? From what I could tell, the content was everything I would have expected…but the experience was not.

Flash back to a little more than a week ago. I logged into a webinar with an expert in rheumatology exploring his new research…fascinating topic. I was highly motivated to learn more, to consume every last morsel…10 minutes later I was logging off. There are only so many times I can hear someone say ‘next slide’ or ‘on this slide, what I wanted to say was that…yada, yada, yada.’

Tell me a story. Structure the content to make it consumable. Speak clearly. Mitigate the extraneous load of learning.

I was moved to write up these recent experiences because I have always been fascinated by how we conflate subject matter expertise or professional titles with the ability to create meaningful educational experiences…and teach.

As much as the educational community needs to fully embrace adult learning theory, instructional design hacks, and even the learning actions research…it seems illogical to apply all of this marvelous, practical research and then forget about the teacher, or to make assumptions about teaching competency.