The Rise of the Occupational Questionnaire

Occupational questionnaires replaced the old KSA process, but may have led to unintended consequences for agencies.

The 2010 Federal hiring reform eliminated essay-style questions from the initial application process. The intent was to reduce applicant burden, improve applicants’ experience with the hiring process, and make the process faster.

These are laudable goals, but the changes may have had unintended consequences for agencies’ ability to evaluate applicant qualifications.

The Old Hiring Method: KSAs

Prior to the reform, agencies rated and ranked applicants largely based on written descriptions that applicants provided of their knowledge, skills, and abilities (KSAs) in specific job-related areas.

For instance, a popular KSA asked the applicant to “describe your ability to communicate effectively in writing.”

Most often, applicants were required to write several narratives for each application, making the process extremely time-consuming, especially when compared to private sector practices.

Although these ratings often were not rigorously validated assessments, some believed that they provided valuable information for hiring managers while discouraging unqualified or casual applicants from applying. Others argued that the KSAs were so burdensome that they discouraged the best applicants from applying.

Occupational Questionnaires Replace Old KSA Process

To replace the old KSA process, agencies began relying on occupational questionnaires to evaluate applicant qualifications.

Occupational questionnaires are typically a series of multiple-choice questions that attempt to determine whether applicants meet the eligibility requirements for the job and to rate and rank applicants’ skills. These assessments generally ask applicants to rate their own level of expertise in specific areas.

Current Occupational Questionnaires

A typical eligibility question might read: 

Choose one answer that best describes your experience:

  • I possess at least 1 year of specialized experience equivalent to the GS-13 grade level performing work related to the duties of the position described in the job announcement
  • I do not meet the requirement as described above

A typical question to determine relative abilities might read: 

What best describes your level of proficiency in processing, manipulating, and analyzing large data sets?

  • I have not worked with such data sets
  • I have worked with these kinds of data sets under the direction of someone more experienced
  • I have worked with such data sets independently with minimal supervision

Because these assessments rely on self-reported evaluations, they are less accurate than assessments designed to measure expertise more directly, such as job tests or simulations.

Furthermore, in interviews conducted for our perspectives brief, Improving Federal Hiring Through Better Assessment, agency representatives expressed concerns that applicants rate themselves as experts in every category because they have learned that is the only way they will make it to the next phase of the hiring process.

These types of inflated ratings undermine an agency’s ability to make valid distinctions among candidates if sufficient controls are not in place to validate the self-reported ratings. Many agencies simply do not have the resources to commit to that validation effort, especially with the rise in the number of applications they have been receiving since the application process was streamlined.

Some agencies are striving to improve the quality of the occupational questionnaires they use.

For instance, the Defense Logistics Agency reported revamping its questionnaires, moving from default scales on which everyone rated themselves at the expert level to customized responses based on benchmarked levels of expertise. The Office of Personnel Management (OPM) is also providing training to help agencies develop good, benchmarked questionnaires.

However, developing good benchmarks is not an easy task and will require additional skill and expertise from human resources and assessment staffs.

A number of agencies also reported pairing occupational questionnaires with other assessments that have higher validity, such as structured interviews and reference checks.

Laura Shugrue is a Senior Research Analyst with the U.S. Merit Systems Protection Board and has worked in the Federal human resources field for over 20 years.

This column was originally published in the U.S. Merit Systems Protection Board’s newsletter, Issues of Merit, and has been re-posted here with permission from the author. Visit http://www.mspb.gov/studies to read more of MSPB’s newsletters and studies on topics related to Federal human capital management, particularly findings and recommendations from its independent research.