TY - JOUR
T1 - 'What would my peers say?' Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.
AU - Chua, Jamie S
AU - van Diepen, Merel
AU - Trietsch, Marjolijn D
AU - Dekker, Friedo W
AU - Schönrock-Adema, Johanna
AU - Bustraan, Jacqueline
N1 - © 2024 Chua, van Diepen, Trietsch, Dekker, Schönrock-Adema, Bustraan; licensee Synergies Partners.
PY - 2024/7
Y1 - 2024/7
N2 - Background: Although medical courses are frequently evaluated via surveys with Likert scales ranging from “strongly agree” to “strongly disagree,” low response rates limit their utility. In undergraduate medical education, a new method in which students predicted what their peers would say required fewer respondents to obtain similar results. However, this prediction-based method lacks validation for continuing medical education (CME), which typically targets a more heterogeneous group than medical students. Methods: In this study, 597 participants of a large CME course were randomly assigned to either express personal opinions on a five-point Likert scale (opinion-based method; n = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; n = 297). For each question, we calculated the minimum number of respondents needed for stable average results using an iterative algorithm. We compared mean scores and the distribution of scores between the two methods. Results: The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method for similar average responses. Mean response scores were similar in both groups for most questions, but prediction-based outcomes resulted in fewer extreme responses (strongly agree/disagree). Conclusions: We validated the prediction-based method for evaluating CME. We also provide practical considerations for applying this method.
KW - humans
KW - education, medical, continuing/methods
KW - peer group
KW - educational measurement/methods
KW - male
KW - female
KW - surveys and questionnaires
KW - students, medical/psychology
KW - adult
DO - 10.36834/cmej.77580
M3 - Article
C2 - 39114774
SN - 1923-1202
VL - 15
SP - 18
EP - 25
JO - Canadian medical education journal
JF - Canadian medical education journal
IS - 3
ER -