Exam Development


Job and Task Analysis

The Resilience-Building Leader Program convened a panel of subject matter experts (SMEs) in 2018 to conduct a job and task analysis of the leader’s role in building and leading resilient teams in a learning organization. A resilient team is one that can overcome adversity, and then adapt and grow together because of that adversity. The leader tasks and supporting knowledge and skills identified by this role delineation study are organized into the following competency domains: Create a Positive Climate, Develop Cohesion, Provide Purpose, Facilitate Team Learning, and Support Organizational Learning. We convened panels of SMEs in 2020 and 2022 to review and revise the competency domains, leader tasks, and supporting knowledge and skills. The next job and task analysis review will be conducted in 2024.


Exam Validity

The validation of a certification exam depends on content-related evidence. To be valid, exam questions must adequately represent the competency domains being assessed. SMEs develop exam items and map them against the applicable competency domains to ensure that each task is adequately represented and that an appropriate number of questions is in place for a valid examination.
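As a simplified illustration of this kind of blueprint check (not our production tooling), the sketch below counts items per competency domain and flags any domain that falls below a target minimum; the item IDs, mapping, and minimum count are invented for the example.

```python
# Hypothetical item-to-domain coverage check; mapping and threshold are illustrative.
from collections import Counter

item_map = {
    "Q01": "Create a Positive Climate",
    "Q02": "Develop Cohesion",
    "Q03": "Provide Purpose",
    "Q04": "Facilitate Team Learning",
    "Q05": "Support Organizational Learning",
    "Q06": "Develop Cohesion",
}

MIN_ITEMS_PER_DOMAIN = 2  # assumed target; a real blueprint would set this value

counts = Counter(item_map.values())
for domain, n in counts.items():
    status = "OK" if n >= MIN_ITEMS_PER_DOMAIN else "UNDER-REPRESENTED"
    print(f"{domain}: {n} item(s) [{status}]")
```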

We conducted initial pilot testing in 2018 to ensure that our certification exams demonstrated acceptable psychometric properties. We recruited pilot participants from multiple regions of the country and included representation from various industries and occupations. We conducted statistical analyses and found the certification exam scoring rubrics to be highly reliable, as evidenced by initial inter-rater reliability coefficients.
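For readers interested in how an inter-rater reliability check of this kind can be run, the following minimal sketch computes a weighted Cohen’s kappa for two evaluators scoring the same exams with a rubric. The scores are invented, and the actual analyses may use different statistics.

```python
# Minimal inter-rater reliability sketch using Cohen's kappa (invented data).
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 4, 3, 5, 4, 2, 3, 4]  # rubric scores from evaluator A
rater_b = [3, 4, 2, 5, 3, 5, 4, 2, 3, 3]  # rubric scores from evaluator B

# Quadratic weighting gives partial credit for near-agreement on an ordinal scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```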

Verifying the appropriateness of exam cut scores is another critical element of the validation process. All standard-setting methods for certification exams involve some degree of subjectivity. The goal for a credentialing body is to reduce that subjectivity as much as possible. Cut score validation ensures that the standard for passing is based on empirical data and makes an appropriate distinction between adequate and inadequate performance.

We use the Angoff method to validate the interpretation of exam cut scores. SMEs individually rate each exam question on the likelihood that a minimally qualified candidate would answer it correctly. The results of these individual ratings are shared so that each SME can compare his or her ratings to those of the other SMEs. A facilitated discussion then focuses on the exam questions with the greatest rating discrepancies. Following the comparisons and discussion, the panel of SMEs conducts a second round of individual ratings, and the second-round ratings are averaged to determine the final cut score for each exam.
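As a simplified illustration of the final calculation only, the sketch below averages second-round ratings per item and sums them to obtain a recommended cut score; the ratings, item IDs, and panel size are invented, not actual exam data.

```python
# Hypothetical Angoff calculation: each value is one SME's second-round estimate of the
# probability that a minimally qualified candidate answers the item correctly.
round_two_ratings = {
    "item_01": [0.70, 0.65, 0.75, 0.70],
    "item_02": [0.55, 0.60, 0.50, 0.55],
    "item_03": [0.80, 0.85, 0.80, 0.75],
}

# Average across SMEs for each item, then sum across items to get the expected
# raw score for a minimally qualified candidate (the recommended cut score).
item_means = {item: sum(r) / len(r) for item, r in round_two_ratings.items()}
cut_score = sum(item_means.values())
cut_percent = cut_score / len(item_means)

print(f"Recommended cut score: {cut_score:.2f} of {len(item_means)} items ({cut_percent:.0%})")
```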


Exam Reliability

The precision and accuracy of exam results determine the level of confidence in the certification decisions. Standardization enhances the consistency and fairness of exam scoring. There must be enough standardization to compare candidates, but also enough flexibility to allow evaluators to tailor the exam to each candidate. Oral exams and scoring are structured so that differences among candidates can be described in a standard and consistent way.

Our certification exams assess candidates in multiple rating categories. The use of multiple categories improves score precision by decreasing the error of measurement. Rating in multiple categories provides increased decision reliability and confidence in outcomes. The rating categories are structured around the knowledge dimensions and cognitive processes identified in Bloom’s revised taxonomy (Anderson & Krathwohl, A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, 2001).
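As a rough illustration of the underlying principle, the sketch below uses the Spearman-Brown prophecy formula to project how composite reliability rises, and the standard error of measurement (SEM) falls, as more rating categories are combined. The single-category reliability and score standard deviation are assumed values for illustration only, and the formula treats the categories as parallel measures, which is a simplification of how the exam is actually scored.

```python
# Assumed values only: shows reliability rising and SEM falling as categories are combined.
import math

single_category_reliability = 0.60  # assumed reliability of one rating category
score_sd = 10.0                     # assumed standard deviation of total scores

for k in (1, 2, 4, 6):              # number of rating categories combined
    composite = (k * single_category_reliability) / (1 + (k - 1) * single_category_reliability)
    sem = score_sd * math.sqrt(1 - composite)
    print(f"{k} categories: reliability = {composite:.2f}, SEM = {sem:.1f}")
```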

The role of the certification exam evaluator is to issue, assess, and score the exam. Resilience-Building Leader Program exam evaluators receive training and calibration on the use of the scoring rubric. The training includes practice with exam delivery and scoring. The purpose of the training is to ensure that the exam evaluators share a common understanding of each exam item and the exam rating categories. Exam evaluator calibration is achieved by reviewing and discussing recorded sample exams to develop consistency.

To ensure the quality and consistency of the certification exam rating process, 20% of certification exams are independently assessed by a second evaluator. Rating anomalies noted by lead evaluators during the review process are resolved immediately. Calibration exercises are conducted routinely with all exam evaluators who actively assess Resilience-Building Leader Program certification exams.
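As a hypothetical sketch of how a double-scoring check might flag discrepancies for review, the example below compares two evaluators’ category ratings against an assumed tolerance; the category names, scores, and tolerance are illustrative only.

```python
# Hypothetical double-scoring consistency check (invented categories, scores, and tolerance).
TOLERANCE = 1  # assumed maximum acceptable difference on the rubric scale

primary   = {"knowledge": 4, "application": 3, "analysis": 5, "evaluation": 4}
secondary = {"knowledge": 4, "application": 2, "analysis": 3, "evaluation": 4}

anomalies = {
    category: (primary[category], secondary[category])
    for category in primary
    if abs(primary[category] - secondary[category]) > TOLERANCE
}

if anomalies:
    print("Rating anomalies to resolve:", anomalies)
else:
    print("Ratings are within tolerance.")
```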