Exam Development

Job and Task Analysis

The Resilience-Building Leader Program certifications are founded on original research conducted by Dr. Gene Coughlin to determine the leader strategies and practices required for building resilient teams. The leader tasks and supporting knowledge and skills identified by this research are organized into the following competency domains: Create a Positive Climate, Develop Cohesion, and Provide Purpose. Collectively, these three competency domains are the basis for the Resilience-Building Leadership Professional (RBLP) certification.

Building on this foundational research, the Resilience-Building Leader Program convened a panel of subject matter experts (SMEs) to conduct a job and task analysis of the leader’s role in a learning organization. The leader tasks and supporting knowledge and skills identified by this role delineation study are organized into the following competency domains: Facilitate Team Learning and Support Organizational Learning. The Facilitate Team Learning competency domain is the basis for the Resilience-Building Leadership Professional Coach (RBLP-C) certification. The Support Organizational Learning competency domain is the basis for the Resilience-Building Leadership Professional Trainer (RBLP-T) certification.

Exam Validity

The validity of a certification exam depends on content-related evidence. To be valid, exam questions must adequately represent the competency domains being assessed. A panel of SMEs developed and mapped exam items against the applicable competency domains to ensure that each leader task is adequately represented and that an appropriate number of questions is in place for a valid examination.

Verifying the appropriateness of exam cut scores is another critical element of the validation process. All standard-setting methods for certification exams involve some degree of subjectivity. The goal for a credentialing body is to reduce that subjectivity as much as possible. Cut score validation ensures that the standard for passing is based on empirical data and makes an appropriate distinction between adequate and inadequate performance.

The Angoff method was used to validate the interpretation of the exam cut scores. The SMEs individually rated each exam item on the likelihood that a minimally qualified candidate would answer the item correctly. The results of these individual ratings were shared with the entire panel so that each SME could compare his or her ratings to those of the other SMEs. The panel then discussed the items that exhibited the greatest discrepancies. Following the comparisons and discussion, the panel of SMEs conducted a second round of individual ratings. The second-round ratings were averaged to determine the final cut score for each exam.
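The arithmetic behind the final step can be sketched as follows. This is a minimal illustration with hypothetical ratings, not the program's actual data: each SME's second-round rating is the estimated probability that a minimally qualified candidate answers the item correctly, and the cut score is the sum of the per-item averages.

```python
# Illustrative Angoff cut-score computation (hypothetical data).
# Rows = SMEs, columns = exam items; each value is the SME's estimate
# of the probability that a minimally qualified candidate answers
# that item correctly.
ratings = [
    [0.70, 0.85, 0.60, 0.90],  # SME 1
    [0.65, 0.80, 0.55, 0.95],  # SME 2
    [0.75, 0.90, 0.50, 0.85],  # SME 3
]

num_smes = len(ratings)
num_items = len(ratings[0])

# Average rating per item across SMEs (second-round ratings averaged).
item_means = [
    sum(sme[i] for sme in ratings) / num_smes for i in range(num_items)
]

# Cut score expressed as the expected number of correct items,
# and equivalently as a percentage of the total items.
cut_score = sum(item_means)
cut_percent = cut_score / num_items * 100

print(f"Cut score: {cut_score:.2f} of {num_items} items ({cut_percent:.1f}%)")
```

With these hypothetical ratings, the cut score works out to 3.00 of 4 items (75%). In practice the same averaging is applied to the full item bank for each exam.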

Exam Reliability

The precision and accuracy of exam results determine the level of confidence in the certification decisions. Standardization enhances the consistency and fairness of exam scoring. There must be enough standardization to compare candidates, but also enough flexibility to allow evaluators to tailor the exam to each candidate. Oral exams and scoring are structured so that differences among candidates can be described in a standard and consistent way.

The role of the certification exam evaluator is to issue, assess, and score the exam. Resilience-Building Leader Program exam evaluators receive training and calibration on the use of the scoring rubric. The training includes practice with exam delivery and scoring. The purpose of the training is to ensure that the exam evaluators share a common understanding of each exam item and the exam rating categories. Exam evaluator calibration is achieved by reviewing and discussing recorded sample exams to develop consistency.

Pilot testing was conducted to ensure that the certification exams demonstrated acceptable psychometric properties. Pilot participants came from multiple regions of the country and included representation from various industries and occupations. Statistical analyses of the pilot data showed that the certification exam scoring rubrics were highly reliable, as evidenced by initial inter-rater reliability coefficients.
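The source does not name the specific reliability coefficient used, but for categorical rubric ratings a common choice is Cohen's kappa, which corrects observed agreement between two evaluators for agreement expected by chance. A minimal sketch with hypothetical ratings:

```python
# Illustrative Cohen's kappa for two evaluators scoring the same
# candidates against a pass/fail rubric (hypothetical data; the
# program's actual coefficient and data are not published).
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "pass", "fail", "pass", "pass", "pass", "pass", "fail"]

n = len(rater_a)

# Observed agreement: proportion of candidates scored identically.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement, from each rater's marginal category frequencies.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
p_expected = sum(
    (counts_a[c] / n) * (counts_b[c] / n)
    for c in set(rater_a) | set(rater_b)
)

# Kappa = 1 means perfect agreement; 0 means chance-level agreement.
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Observed agreement: {p_observed:.2f}, kappa: {kappa:.2f}")
```

Here the two raters agree on 7 of 8 candidates (0.88 observed agreement), but after the chance correction kappa is about 0.71, which is why kappa is preferred over raw agreement for rubric reliability.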

To ensure the quality and consistency of the certification exam rating process, 20% of certification exams are independently assessed by a second evaluator. Rating anomalies noted by lead evaluators during the review process are identified and resolved immediately. Calibration exercises are conducted routinely with all exam evaluators who actively assess Resilience-Building Leader Program certification exams.
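One straightforward way to implement the 20% double-scoring rule is a random sample of completed exams. The sketch below is an assumption about mechanics, not a description of the program's actual procedure; the exam IDs and seed are hypothetical.

```python
# Illustrative sketch (hypothetical): drawing a 20% random sample of
# completed exams for independent second scoring.
import random

# Hypothetical pool of 50 completed exams.
exam_ids = [f"EXAM-{i:03d}" for i in range(1, 51)]

# 20% of the pool, with at least one exam always selected.
sample_size = max(1, round(len(exam_ids) * 0.20))

# Fixed seed only so this sketch is reproducible.
rng = random.Random(42)
second_review = rng.sample(exam_ids, sample_size)

print(f"{sample_size} of {len(exam_ids)} exams queued for second scoring")
```

Random selection avoids any systematic bias in which exams receive a second look, which supports the consistency goal the review process is meant to serve.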

The certification exams, scoring rubrics, scoring sheets, and all other testing or test-related materials remain the sole and exclusive property of the Resilience-Building Leader Program. These materials are confidential and are not available for review by any person or agency for any reason.