The precision and accuracy of exam results determine the level of confidence that can be placed in certification decisions. Standardization enhances the consistency and fairness of exam scoring. There must be enough standardization to compare candidates, yet enough flexibility for evaluators to tailor the exam to each candidate. Oral exams and their scoring are therefore structured so that differences among candidates can be described in a standard, consistent way.
The role of the certification exam evaluator is to administer, assess, and score the exam. Resilience-Building Leader Program exam evaluators receive training and calibration on the use of the scoring rubric. The training includes practice with exam delivery and scoring, and is designed to ensure that evaluators share a common understanding of each exam item and each rating category. Evaluator calibration is achieved by reviewing and discussing recorded sample exams to develop scoring consistency.
Pilot testing was conducted to confirm that the certification exams demonstrate acceptable psychometric properties. Pilot participants came from multiple regions of the country and represented a range of industries and occupations. Statistical analyses of the pilot data showed the certification exam scoring rubrics to be highly reliable, as evidenced by strong initial inter-rater reliability coefficients.
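The program does not name the reliability statistic used; for categorical rubric ratings, Cohen's kappa is one common choice. Below is a minimal sketch, assuming two evaluators' ratings for the same set of exam items are available as parallel lists (all rating data and category labels are hypothetical):

```python
from collections import Counter

def cohen_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)

    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric ratings from two evaluators on ten recorded exams.
evaluator_1 = ["pass", "pass", "fail", "pass", "pass",
               "fail", "pass", "pass", "fail", "pass"]
evaluator_2 = ["pass", "pass", "fail", "pass", "fail",
               "fail", "pass", "pass", "fail", "pass"]

print(f"kappa = {cohen_kappa(evaluator_1, evaluator_2):.2f}")
```

Kappa values near 1 indicate near-perfect agreement beyond chance; the hypothetical data above yield a kappa of about 0.78.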
To ensure the quality and consistency of the certification exam rating process, 50% of certification exams are independently assessed by a second evaluator. Rating anomalies identified by lead evaluators during this review are resolved immediately. Calibration exercises are conducted routinely with all evaluators who actively assess Resilience-Building Leader Program certification exams.
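The document does not specify how rating anomalies are detected; one plausible mechanical check is to compare the two independent ratings item by item and flag any pair that diverges beyond a tolerance. A minimal sketch, assuming item-level scores on a 0-4 rubric scale (the item names, scores, and one-level tolerance are all illustrative):

```python
# Hypothetical item-level scores (0-4 rubric scale) from a lead and a
# second evaluator for the same candidate's exam.
lead_scores   = {"item_1": 4, "item_2": 3, "item_3": 2, "item_4": 4, "item_5": 1}
second_scores = {"item_1": 4, "item_2": 1, "item_3": 2, "item_4": 3, "item_5": 1}

TOLERANCE = 1  # illustrative: a gap of more than one rubric level is an anomaly

def flag_anomalies(lead: dict[str, int], second: dict[str, int],
                   tolerance: int = TOLERANCE) -> list[str]:
    """Return exam items where the two independent ratings diverge beyond tolerance."""
    # Assumes both evaluators scored the same set of items.
    return [item for item in lead
            if abs(lead[item] - second[item]) > tolerance]

for item in flag_anomalies(lead_scores, second_scores):
    print(f"{item}: lead={lead_scores[item]}, second={second_scores[item]}"
          " -> resolve before results are released")
```

In this sketch only item_2 is flagged (a two-level gap); adjacent ratings such as item_4 fall within tolerance and would not require resolution.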