Case Study
Preparing for Accreditation: Federal Cyber-Security Certification Exam
The Need: Conduct test validation and standard-setting of four (4) cyber security credential exams in preparation for American National Standards Institute (ANSI) accreditation.
The Solution: Test validation and determination of a cut score/passing score for each Federal IT Security Professional (FITSP) role exam, following a hybrid standard-setting approach and using a custom-developed solution to facilitate item review and scoring analysis.
The Result:
- Validation of each 150-item exam form
- Generation of a shortened 100-item exam form
- Formal establishment of a passing score, or minimum competency standard, for each exam
The ANSI accreditation mark is a symbol of excellence, recognized worldwide by employers, industry leaders, hiring managers, and credential holders. The ANSI mark signals to an employer that the credential holder has undergone a valid, fair, and reliable assessment verifying that he or she has the necessary competencies to practice. ANSI accreditation also provides an added layer of legal defensibility against challenges to the validity of credentialing decisions.
The Federal IT Security Institute (FITSI) is a non-profit organization that manages and administers the FITSP certification program. FITSP (Federal IT Security Professional) is an IT security certification program targeted at the Federal workforce (civilian personnel, military, and contractors). The program combines the knowledge covered by other security certifications with the standards and practices used by the United States Federal government. FITSP consists of four IT security roles: Manager, Designer, Operator, and Auditor.
In seeking ANSI accreditation, FITSI engaged Leverage Assessments to perform test validation and to establish a passing score, or minimum competency standard, for each role exam. To receive accreditation, a cyber security certification exam must provide validity evidence supporting its effectiveness as a tool for predicting successful performance in an IT security role.
Test Validation
To begin, psychometric analyses were conducted for each role exam to determine whether the 150-item form was functioning as intended. Descriptive statistics were computed to determine measures of central tendency. Item analyses determined the difficulty and discrimination of each item. Distractor analysis identified which incorrect response options were functioning effectively and which were chosen rarely or never. Test performance analysis computed the reliability and standard error of measurement. A domain analysis ensured that items sufficiently sampled role objectives across the domains identified in the Job Task Analysis.
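As a rough illustration of these analyses, the sketch below computes item difficulty (p-values), item discrimination (point-biserial correlations against the rest-of-test score), KR-20 reliability, and the standard error of measurement from a matrix of scored responses. The function name, data layout, and library choice are our own illustrative assumptions, not the tooling actually used in this study.

```python
import numpy as np

def item_analysis(responses):
    """responses: candidates x items matrix of dichotomously scored (0/1) answers."""
    X = np.asarray(responses, dtype=float)
    total = X.sum(axis=1)                  # raw score per candidate

    difficulty = X.mean(axis=0)            # p-value: proportion correct per item

    # Point-biserial discrimination: correlation of each item with the
    # rest-of-test score (total minus the item itself).
    rest = total[:, None] - X
    discrimination = np.array([
        np.corrcoef(X[:, j], rest[:, j])[0, 1] for j in range(X.shape[1])
    ])

    # KR-20 internal-consistency reliability and standard error of measurement.
    k = X.shape[1]
    kr20 = (k / (k - 1)) * (1 - (difficulty * (1 - difficulty)).sum() / total.var(ddof=1))
    sem = total.std(ddof=1) * np.sqrt(1 - kr20)
    return difficulty, discrimination, kr20, sem
```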
Standard Setting/Cut Score Study
The standard setting/cut score process followed a modified Angoff approach, employing a panel of Subject Matter Experts (SMEs) to determine a passing score. SMEs were required to be fluent in English, to possess five (5) years of information security experience supporting federal agencies in a significant security role, and to possess the knowledge, skills, and abilities required of certificants to perform the responsibilities and tasks within the scope of practice.
SME meetings were conducted remotely and included focus groups, item writing, and analysis of ratings to determine cut/pass scores.
The standard setting for each role exam used a panel of six (6) SMEs; across all four (4) role exams, 17 SMEs participated in the standard setting process. SMEs were situated across the United States, with one SME located in Asia for the duration of the meetings.
To meet our client's needs, make efficient use of the available data, and maintain objectives-referenced test development procedures linked to the domain concentrations identified in the Job Task Analysis, we followed a unique, hybrid approach to determining a passing score.
Brief outline of our hybrid standard-setting approach
- Pilot administration: the client collected candidate performance data
- Determination of the mean pilot score
- SME administration: the exam was administered to the SME panel
- SME definition of the 'minimally qualified candidate'
- Round 1 estimations: independent SME ratings
- Round 2 estimations: quality check of SME ratings
- Determination of the Angoff cut score (sketched below)
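To make the Round 1/Round 2 ratings concrete, here is a minimal sketch of how a modified Angoff cut score is typically aggregated: each SME rates, for every item, the probability that a minimally qualified candidate would answer correctly, and the panel cut score is the mean of each SME's summed ratings. The function name and data layout are illustrative assumptions rather than the actual workflow used in this study.

```python
import numpy as np

def angoff_cut_score(ratings):
    """ratings: SMEs x items matrix of probabilities (0-1) that a minimally
    qualified candidate answers each item correctly (e.g. final Round 2 ratings)."""
    r = np.asarray(ratings, dtype=float)
    per_sme_cut = r.sum(axis=1)    # each SME's implied raw cut score
    return per_sme_cut.mean()      # panel recommendation: mean across SMEs
```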
By integrating multiple perspectives (pilot candidate performance data, subject matter experts' ratings, and psychometric analysis), we documented multiple lines of evidence and followed established guidelines to support the validity and reliability of each exam.
Item Elimination: Because the test validation identified items requiring further review and revision based on their statistical estimates, the decision was made to reduce each exam from 150 to 100 items. Fifty (50) items were marked as unscored and eliminated based on the criteria below (illustrated in the sketch that follows this list):
Step 1 – Reference source support
Step 2 – Discrimination range: point-biserial index
Step 3 – Difficulty range: p-value
Generation of a shortened 100-item exam form
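A minimal sketch of Steps 2 and 3 is shown below: items are flagged when their difficulty or discrimination statistics fall outside an acceptable range. The specific thresholds and the function name are illustrative placeholders only; they are not the criteria actually applied to the FITSP exams.

```python
import numpy as np

def flag_items(difficulty, discrimination, p_range=(0.30, 0.90), min_rpb=0.15):
    """Flag items whose p-value falls outside p_range or whose point-biserial
    index falls below min_rpb. Thresholds here are illustrative assumptions."""
    p = np.asarray(difficulty, dtype=float)
    r = np.asarray(discrimination, dtype=float)
    return (p < p_range[0]) | (p > p_range[1]) | (r < min_rpb)
```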
The Beuk Compromise: determination of a passing score
The standard setting/cut score process is designed to transform subjective judgments into fair, standardized scores that can be used to objectively distinguish candidates who are qualified from those who are not. The Beuk method seeks a compromise between perceived difficulty (SME Angoff judgments) and observed difficulty (pilot performance).
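As a rough sketch of this compromise, assuming each SME supplies both a recommended cut score and an expected pass rate, and that pilot scores are on the same raw-score scale: draw a line through the panel means with a slope set by the relative spread of the two sets of judgments, and take the cut score where that line meets the observed score distribution. The function name and details below are our own illustrative assumptions, not the exact computation used in this study.

```python
import numpy as np

def beuk_compromise(sme_cut_scores, sme_pass_rates, pilot_scores):
    """Beuk (1984)-style compromise between judged and observed difficulty.
    sme_cut_scores: each SME's recommended raw cut score
    sme_pass_rates: each SME's expected pass rate (proportion, 0-1)
    pilot_scores:   observed raw scores from the pilot administration"""
    x = np.asarray(sme_cut_scores, dtype=float)
    y = np.asarray(sme_pass_rates, dtype=float)
    scores = np.asarray(pilot_scores, dtype=float)

    x_mean, y_mean = x.mean(), y.mean()
    # Slope weights whichever judgment the panel agrees on more tightly.
    slope = -(y.std(ddof=1) / x.std(ddof=1))

    # Observed pass rate at every candidate cut score on the raw-score scale.
    grid = np.arange(scores.min(), scores.max() + 1)
    observed = np.array([(scores >= c).mean() for c in grid])

    # Line through the panel means; the compromise cut score is where the
    # line and the observed pass-rate curve come closest to intersecting.
    line = y_mean + slope * (grid - x_mean)
    return grid[np.argmin(np.abs(observed - line))]
```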
Learn more about our services
Leverage Assessments
2501 Grand Concourse 3rd Floor
Bronx, NY, 10468
info@leverageassessments.com
+1 (646) 653-9329