TY - JOUR
T1 - Comparison study of judged clinical skills competence from standard setting ratings generated under different administration conditions
AU - Roberts, William L.
AU - Boulet, John
AU - Sandella, Jeanne
N1 - Generated from Scopus record by KAUST IRTS on 2023-09-20
PY - 2017/12/1
Y1 - 2017/12/1
AB - When the safety of the public is at stake, it is particularly important for licensing and credentialing examination agencies to use defensible standard setting methods to categorize candidates into competence categories (e.g., pass/fail). The aim of this study was to gather evidence to support a change to the standard setting design and administrative process of the Comprehensive Osteopathic Medical Licensing-USA Level 2-Performance Evaluation. Twenty-two video recordings of candidates assessed for clinical competence were randomly selected from the 2014–2015 Humanistic domain test score distribution, ranging from the highest to the lowest quintile of performance. Nineteen panelists convened at the same site to receive training and practice before generating judgments of qualified or not qualified performance for each of the twenty videos. At the end of training, one panel remained onsite to complete its judgments; the second panel was released and given 1 week to observe the same twenty videos and complete its judgments offsite. The two one-sided tests procedure established equivalence between the panel group means at the 0.05 significance level, controlling for rater errors within each panel group. From a practical, cost-effectiveness, and administrative-resource perspective, the results of this study suggest it is possible to move away from the typical design, in which panel groups are sequestered onsite for the entire time, toward larger numbers of panelists who make their judgments offsite, with little impact on the judged samples of qualified performance. Standard setting designs in which panelists train together and then provide their judgments offsite yield equivalent ratings and, ultimately, similar cut scores.
UR - http://link.springer.com/10.1007/s10459-017-9766-1
UR - http://www.scopus.com/inward/record.url?scp=85013447158&partnerID=8YFLogxK
U2 - 10.1007/s10459-017-9766-1
DO - 10.1007/s10459-017-9766-1
M3 - Article
SN - 1382-4996
VL - 22
SP - 1279
EP - 1292
JO - Advances in Health Sciences Education
JF - Advances in Health Sciences Education
IS - 5
ER -