
Frequently Asked Questions

Use the index below to jump to a specific FAQ topic. At the end of each topic is a link to return to this index. When possible, links to more detailed treatments of issues in the Documents section are included.

CMQ's Design Goals

What Does CMQ Do?

Why Choose CMQ Over O*NET or DOT?

Job Analysis and Organizational Structure Terminology

 


What is Common-Metric Analysis?

The goal of common-metric job analysis is to use a single profile of general work activities (GWAs) to describe all jobs, thereby providing a common "yardstick" that allows meaningful comparisons to be made between jobs that perform different tasks.

In contrast, a task-based job analysis is useful for HR functions that require a detailed, technologically specific description of work, such as developing detailed training programs, work-sample tests, or licensing and promotional exams. However, task-based job analysis is of little value when one needs to make meaningful, level-sensitive comparisons between different jobs, or between the positions that share a job title.

Common-metric job analysis is designed for the numerous applications that require us to make meaningful cross-job comparisons, or to quantitatively describe work in terms of a job's scores on abstract work dimensions. Such uses include setting employee-selection standards, developing compensation systems, providing career guidance, developing job families, and deriving or applying "synthetic" or job-component validity (JCV) equations to link work activities to worker-trait requirements.

A key component of the common-metric approach is that a multiple-level description of work is provided, using GWAs to define more specific work activities, and more abstract work dimensions to describe jobs at a macro level.

This hierarchical approach to describing work is illustrated in the levels of work activity graphic. By taking a hierarchical view of work, the common-metric approach lets the practitioner match the level of work-descriptor specificity required in each specific situation.

What's The Problem?

A big problem facing practitioners is that past common-metric job analysis instruments have suffered from a range of what we find to be serious limitations: inadequate content coverage, imprecise rating scales, excessively high reading levels, and subjective, unverifiable items. These issues are discussed at length in:

  • The CMQ Research Monograph, which describes the initial development and design goals of CMQ.

  • Harvey, R. J. (1991). Job analysis. In M. D. Dunnette & L. Hough (Eds.), Handbook of industrial and organizational psychology (second edition). Palo Alto, CA: Consulting Psychologists Press.

  • Harvey, R. J. (2012). Chapter 8: Analyzing work analysis data. In Wilson, Bennett, Gibson, & Alliger (Eds.), The handbook of work analysis: The methods, systems, applications, and science of work measurement in organizations. Psychology Press/Routledge, Taylor and Francis Group (ISBN-10: 1848728700).

Regarding terminology, additional problems involving unnecessary confusion have been caused by those who advance what we find to be the mistaken view that the so-called "worker oriented" approach to job analysis first popularized in the 1960's by McCormick and colleagues — which we now label common-metric analysis — produces qualitatively different information describing work than the data provided by other job analysis methods. Some have even gone so far as to say that task- versus "worker-oriented" methods define totally different philosophies of job analysis.

How Does CMQ Help?

Regarding the psychometric limitations of earlier standardized job analysis instruments, CMQ offers practitioners major improvements in all of the areas of concern noted above:

  • Comprehensive content coverage: CMQ seeks to provide coverage of the work performed on both exempt and non-exempt jobs, from janitor to CEO.

  • Precise rating scales: CMQ rating scales are designed to make it easier for raters to describe the job, and for others to verify the accuracy of those ratings.

  • Lower reading level: CMQ items are written to have a much lower reading level than many earlier instruments.

  • Objective, verifiable items: CMQ work descriptor items are written to be as behaviorally specific and objective as possible to make them easier to rate and later verify.

Regarding the terminological confusion caused by those who claim that "worker-oriented" job analysis methods collect qualitatively different information from other job analysis methods, as we discuss in detail in the FAQ general work activities and work dimensions topics, the simple fact is that job analysis methods primarily differ in terms of the degree of behavioral abstraction of the items they rate.

They don't describe qualitatively different things — they simply use a higher or lower degree of technological specificity to describe the activities workers perform on jobs.

Of course, as a practical matter, it is clearly helpful to divide this continuum of behavioral specificity into a taxonomy that identifies a small number of discrete categories of work-descriptor types. The taxonomy we prefer breaks the behavioral specificity continuum into five general categories. These categories correspond roughly to the different general types of job analysis data we collect or use when making personnel decisions:

  • Level 1: Traditional task statements of the type seen in Fine's Functional Job Analysis (FJA) and task inventories that offer a highly technologically-specific view of work.

  • Level 2: Detailed general work activity (GWA) items of the type used in CMQ that are less technologically detailed than the typical task statement, but are still objective and detailed enough to produce verifiable ratings.

  • Level 3: More abstract GWA items of the type typically seen in first-generation "worker oriented" instruments, many of which are arguably too abstract to allow for objective rating and independent verification (e.g., a single 5-point scale rating Decision Making or Overall Responsibility).

  • Level 4: First-order work dimension scores produced via factor analysis of the Level 2/3 items rated in common-metric instruments.

  • Level 5: Second-order work dimension scores produced via higher-order factor analysis of first-order (Level 4) factor scores, as well as the rationally derived Data-People-Things constructs from Fine's Functional Job Analysis (FJA).

Ultimately, taxonomies are judged on their usefulness, and it must be stressed that both the number and the location of the dividing lines used to form our 5-category view of work specificity are inherently arbitrary in nature. Although we find the 5-category system noted above to be useful, we must stress that no clear lines-of-demarcation exist to indicate precisely where, for example, Level 1 task-type items stop and Level 2 detailed-GWA items begin.

One of the great benefits of CMQ is that it provides practitioners with an easy way to quantify work activity at all of Levels 2-5 of the levels of work activity hierarchy. By rating jobs in terms of their Level 2 activities — a level that is still behaviorally specific enough to allow for meaningful, independent review and validation of rating accuracy — CMQ lets you collect defensible job analysis data.

To obtain scores on the more abstract Levels 3-5 work activity constructs, scores can be empirically derived by combining the Level 2 ratings, giving practitioners a solid, defensible means for quantifying the abstract dimensions of work present in jobs.
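To make this empirical derivation concrete, here is a minimal Python sketch of the scoring idea. All item names, weights, and ratings below are hypothetical illustrations, not actual CMQ content; real CMQ scoring keys are derived empirically (e.g., via factor analysis of large samples of rated positions).

    # Hypothetical Level 2 frequency ratings for one position (0 = not performed).
    level2_ratings = {
        "supervise_clerical": 3,
        "negotiate_with_executives": 2,
        "operate_hand_tools": 0,
    }

    # Hypothetical scoring weights for one abstract (Level 4) work dimension.
    scoring_key = {
        "interpersonal_leadership": {
            "supervise_clerical": 0.60,
            "negotiate_with_executives": 0.75,
            "operate_hand_tools": 0.05,
        },
    }

    def dimension_score(ratings, weights):
        """Weighted sum of item ratings: the 'paper trail' runs from the
        dimension score straight back to the verifiable item ratings."""
        return sum(w * ratings.get(item, 0) for item, w in weights.items())

    for dim, weights in scoring_key.items():
        print(dim, round(dimension_score(level2_ratings, weights), 2))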

In our view, this data-based approach is far superior to the highly subjective approach taken by O*NET and others, which involves using single-item holistic rating scales to directly estimate a job's scores on highly abstract work activity constructs. Many research studies have shown the holistic approach to be fundamentally flawed (see the Documents section).

In comparison, the empirical approach taken by CMQ leaves a clear "paper trail" that directly links scores on the abstract work dimensions back to more detailed, verifiable, GWA item-level ratings of each position.

<>  Back to FAQ topic list


Is CMQ Similar to Fine's Functional Job Analysis (FJA)?

Functional Job Analysis (FJA) was developed by one of the true giants of I/O Psychology, Sidney A. Fine, who began working in this area in the 1930's on the initial version of the Dictionary of Occupational Titles (DOT) for the US Employment Service.

Although most I/O Psychologists typically think of FJA as being a "task oriented" job analysis approach, in fact FJA arguably represented the original common-metric approach to job analysis. That is, the key tenet of FJA is that all job activities can be characterized in terms of their standing on three abstract "worker function" dimensions: i.e., Data, People, and Things.

We gratefully acknowledge the groundbreaking contributions to job analysis made by our late friend and colleague Sidney Fine (seen below, right, showing CMQ author RJ Harvey his copy of the 1955 DOT).


What's The Problem?

One might well ask, if FJA is so good, then why did you need to develop a new instrument like CMQ?

Simply put, as excellent and pioneering a technique as FJA is, in our view there were still a number of areas in which FJA could be improved or extended.

  • FJA is typically used in a customized, task-based fashion. Although FJA's theoretical view that all work activities can be described in terms of three general worker-function dimensions represents the first true "common-metric" approach to job analysis, the fact remains that the way FJA is typically applied is by first developing a customized task listing for each job being analyzed, then making the common-metric ratings of each task on the Data-People-Things scales.

    Although that approach is appropriate when one needs to collect task-level data, its "custom" nature makes it highly labor-intensive. We saw a need for an instrument that could retain the common-metric logic of describing work activity on the worker functions seen in the Data-People-Things hierarchies, while applying it using a standardized list of general work activities (GWAs) to make the process less time- and labor-intensive.

  • Do the worker-function ratings form a unidimensional scale? The way FJA uses the Data-People-Things scales to rate tasks and summarize jobs as a whole is arguably based on several assumptions, including that the individual worker functions reflect a single underlying domain or dimension, and that they are arranged in a hierarchical order (e.g., that on the Data scale, "synthesizing" is higher than "copying").

    There are many reasons to question those assumptions. Our view is that although at a second-order factor level the Data-People-Things scales do represent general dimensions or factors underlying work, multiple first-order work dimensions can also be identified within each of those three major second-order domains. See the Harvey (2004) and Fine, Harvey, & Cronshaw (2004) research papers for additional details.

  • Do the worker-function anchors provide interval-level measurement? In addition to the question (above) as to whether the Data-People-Things domains are hierarchical, one can question whether the numerical anchors given to each function exhibit the equal-interval properties desired for interval-level measurement.

    There seems to be general agreement that the functions on the People scale may not even exhibit strong ordinal properties, and similar questions can be raised for the Data scale (e.g., is "analyzing" always a less-sophisticated function than "coordinating"?).

  • What if multiple worker-functions from one scale apply? Perhaps the biggest concern with FJA's worker-function rating scales involves the presumably frequent case in which multiple worker functions from a given domain may be applicable to a given work activity. Because FJA only applies a single worker-function rating from each of the three domains to an activity, by definition an incomplete picture is presented when multiple functions are present.

    For example, an activity may involve "negotiating," "persuading," and "instructing", but only one (presumably the highest, "negotiating") will be reported in the job analysis results. Clearly, it would be desirable to be able to regain that lost information, and allow for each work activity to be described in terms of all of the worker-functions that are involved in performing it.

  • How do we combine item-level function ratings to form job-level scores? At an aggregate position- or job-level of analysis, the only quantitative common-metric data produced in a typical FJA study involves the overall summary ratings on the Data-People-Things scales. For example, these are used in the DOT to form the middle three digits of the occupation's DOT code.

    Unfortunately, all methods for summarizing the Data-People-Things ratings of individual activities to form a job-level summary score are to some degree problematic, especially the practice of making a single holistic rating to summarize the job as a whole. Clearly, it would be desirable to have a clear "paper trail" extending from the abstract Data-People-Things work dimension scores for a job back to the item-level ratings made of individual work activities.

  • How do we get work dimension scores that are more detailed than Data-People-Things? The second-order Data-People-Things work dimensions are clearly useful in providing a macro-level summary of the work performed in a given job, as they lie at the "Level 5" degree of abstraction in terms of behavioral specificity (see the levels of work specificity figure).

    However, there are many uses of job analysis information that would benefit from having a somewhat more detailed (but still "big picture") profile for each job using work dimensions defined at the "Level 4" degree of specificity. For example, uses such as skills-transferability or disability claims adjudication and "synthetic" or job-component validation (JCV) require a much more detailed common-metric profile of work activity than is provided by the second-order Data-People-Things constructs.

How Does CMQ Help?

CMQ attempted to address all of the above issues, and find a way to combine the power of FJA's common-metric worker-functions with the administrative efficiencies that derive from rating a standardized listing of general work activities.

We have long been grateful for the influence that Fine's pioneering work exerted on the design of CMQ. For more of our thoughts on the past, present, and future of common-metric job analysis, see the Fine, Harvey, & Cronshaw (2004) symposium presentation we made at the SIOP conference.

In sum, although CMQ was clearly influenced by FJA, in many ways it is quite different. That is, the CMQ items are logically arranged in a matrix type of format, with the rows representing the "objects" or general categories of entities involved in the activity. Each matrix has a dominant focus involving one of the main FJA functional domains.

The columns represent the various worker functions that are performed in relation to the given row entity, taken from the FJA function that is most relevant to that domain (i.e., People for interpersonal entities, Things for physical and mechanical activities, and Data for activities involving intangibles).

For example, to form the work-activity items rated in the Internal Contacts matrix in CMQ, the row objects represent the general types of people who are contacted (e.g., laborers, clerical, technical specialists, first-line supervisors, executives), and the columns represent the FJA-type functions (activities) performed with respect to the given row object from the People domain (e.g., taking information or orders from them, exchanging information with them, supervising them, negotiating with them, being responsible for their personal safety).

Of course, the CMQ differs from FJA in the sense that unlike FJA, the CMQ does not view the Data-People-Things domains as a unidimensional hierarchy from which we pick the highest element that is applicable to describe each "row" object, or assume that the lower-ranking functions would also apply. Instead, CMQ views the levels of each of the worker-function domains as representing a range of possible functions that could be performed (singly or in combination with others) with a given entity.
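A minimal Python sketch of this matrix logic follows, loosely patterned on the Internal Contacts example discussed above; the row objects, functions, and markings are illustrative stand-ins for actual CMQ items. The point is that any combination of functions can be marked for a given row object, rather than a single "highest" function per domain.

    # Illustrative row objects (types of people contacted) and column functions.
    row_objects = ["laborers", "clerical", "technical specialists",
                   "first-line supervisors", "executives"]
    functions = ["take info/orders", "exchange info", "supervise",
                 "negotiate", "responsible for safety"]

    # One position's hypothetical ratings: a SET of applicable functions
    # per row object, not FJA's single highest function.
    ratings = {
        "clerical": {"exchange info", "supervise"},
        "executives": {"exchange info", "negotiate"},
    }

    for obj in row_objects:
        marked = ratings.get(obj, set())
        cells = ["X" if f in marked else "." for f in functions]
        print(f"{obj:24s}" + "  ".join(cells))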

<>  Back to FAQ topic list


How is CMQ Better Regarding Content Coverage?

One of the most important objectives of a common-metric job analysis is to be able to meaningfully compare all jobs, even ones that are highly dissimilar at the task-based level of analysis. This is accomplished by rating jobs on a common profile of general work activities (GWAs), allowing "apples to apples" comparisons to be made.

What's The Problem?

Unfortunately, early standardized "worker-oriented" job analysis instruments were content-deficient in terms of the GWAs they rated. Some instruments were slanted toward coverage of "blue collar" work activities (while lacking coverage of the managerial and related decision-making activities seen in "exempt" jobs). Others were slanted in the opposite direction, providing much more detail in terms of "white collar" jobs, but lacking coverage of the physical, mechanical, and non-managerial decision-making activities seen in non-exempt jobs.

How Does CMQ Help?

One of the main design objectives of CMQ was to comprehensively describe the GWAs seen in both exempt and non-exempt occupations.

As is described in the CMQ Research Monograph, this was accomplished by carefully reviewing the content measured in earlier standardized job analysis surveys, and ensuring that GWA items were included to capture all of the domains of activity seen in all prior surveys.

<>  Back to FAQ topic list


How is CMQ Better Regarding Rating Scales?

In order to achieve the common-metric job analysis goal of being able to meaningfully compare all jobs, in addition to describing a GWA item pool that is comprehensive, an instrument must also describe each GWA using rating scales that are objective and verifiable.

What's The Problem?

Unfortunately, most of the first-generation of common-metric or "worker oriented" job analysis instruments arguably suffered from significant limitations regarding their rating scales.

A major issue concerns the use of "relativistic" rating scales. By "relativistic," we refer to rating scales that ask the rater to judge the GWA being rated relative to other activities on that job, as opposed to rating it in terms of an absolute standard that retains a constant meaning across all jobs. For example, "relative time spent" and "relative importance" rating scales ask the rater to judge each item relative to the "average" activity performed on the job (e.g., "more time than the average task" or "much less important than the other activities").

Although this may seem to be a reasonable way to rate work activities, it has the very undesirable effect of producing what are termed ipsative ratings. Such ratings can only be meaningfully compared to other ratings made of the same job, and offer little or no ability to produce meaningful comparisons to ratings of different jobs.
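A small numeric illustration (hypothetical numbers) makes the ipsativity problem concrete: two jobs with very different absolute workloads become indistinguishable once each job's ratings are expressed relative to that job's own average.

    # Hours per week spent on the same three GWAs in two different jobs.
    job_a_hours = [20, 10, 10]
    job_b_hours = [4, 2, 2]      # far less of every activity in absolute terms

    def relative_profile(hours):
        # "Relative time spent": each rating scaled against the job's own average.
        mean = sum(hours) / len(hours)
        return [round(h / mean, 2) for h in hours]

    print(relative_profile(job_a_hours))  # [1.5, 0.75, 0.75]
    print(relative_profile(job_b_hours))  # [1.5, 0.75, 0.75] -- identical; cross-job info lost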

Additional problems arise from the subjective nature of the anchors used on the rating scales. For example, one first-generation instrument rated an item dealing with "responsibility for material assets" on a 5-point rating scale with anchors of "very limited", "limited", "intermediate", "substantial", and "very substantial".

The why not use O*NET? FAQ entry provides several examples of the O*NET holistic rating scales that clearly illustrate the fundamental problems regarding subjectivity and lack of verifiability inherent in any system that attempts to directly rate a highly abstract, hypothetical trait construct using a single-item BARS scale.

In sum, it doesn't take a rocket scientist to realize that a considerable amount of subjectivity is involved in making such ratings, and that serious questions can easily be raised regarding the quality and defensibility of any job analysis database that is collected using such scales.

How Does CMQ Help?

A key design goal of CMQ was that all rating scales used to collect the ratings from which scores on the abstract work dimensions are computed must be as objective as possible, and must retain a constant meaning regardless of the job being rated.

Accordingly, no "relativistic" rating scales are used, and the anchors for multipoint rating scales were designed to be as unambiguous as possible. For example, most CMQ items are rated on a Frequency scale, which has clearly defined anchors (e.g., an activity is performed "hourly to many times each hour", "every few hours to daily", "every few days to weekly", "every few weeks to monthly", or "every few months to yearly").

Although no rating scale can remove the need to use some judgment when applying it, it is obvious that ratings made using a Frequency scale of this type are far easier for raters to make — and for others to independently review and verify — than ratings made on "relativistic" scales, or ones using the types of highly subjective anchors cited above.

<>  Back to FAQ topic list


How is CMQ Better Regarding Reading Level?

It is self-evident that a high-quality common-metric job analysis instrument should be constructed using GWA items and rating scales that are comprehensible to the people making the ratings, verifying the ratings, or using the data to make applied decisions.

What's The Problem?

Unfortunately, many first-generation standardized job analysis instruments were constructed in a fashion that produced very high reading levels. For example, one instrument was estimated to require a reading level beyond that of a college graduate to effectively comprehend its items.

Using an instrument of that type could be problematic, especially if an organization decides to collect data by having incumbents self-rate their own positions. Obviously, if raters can't effectively comprehend the items that they're rating, the quality of the data — and the degree to which others are able to independently review and verify their ratings — is seriously in question.

How Does CMQ Help?

To address this issue, CMQ was designed to require a much lower reading level than earlier instruments.

As is described in the CMQ Research Monograph, this was accomplished by targeting a fourth-grade reading level for CMQ items and scales.

<>  Back to FAQ topic list


How is CMQ Better Regarding Data Verifiability?

Given that job analysis ratings often provide the foundation for constructing — and defending — mission-critical applied personnel decisions, it is self-evident that a job analysis instrument will be more effective and useful to the degree that it produces verifiable ratings of work.

What's The Problem?

Unfortunately, most of the other standardized, common-metric job analysis instruments on the market suffer from potentially serious concerns regarding inadequate content coverage, subjective and "relativistic" rating scales, and high reading-comprehension levels. Singly or in combination, these limitations raise serious concerns regarding the degree to which they allow organizations to collect high-quality, independently verifiable ratings of jobs.

How Does CMQ Help?

The net effect of the CMQ design decisions to improve content coverage, increase rating scale objectivity, and lower required reading level is to produce a job analysis instrument that makes it much easier:

  • for raters to use in the first place to make accurate ratings of work activities, and

  • for others to subsequently review and verify the accuracy of those ratings.

<>  Back to FAQ topic list


Is Within-Title Variability or Aggregation Bias a Problem?

One of the truisms about job analysis is that the multiple workers or positions that hold each job in an organization often vary in terms of the activities they perform, or the ways in which they are performed. In many cases, this within-title heterogeneity can reach quite significant levels.

One of the major limitations of other approaches to large-scale occupational analysis — especially the Dictionary of Occupational Titles (DOT) and its successor, the O*NET — is that they essentially ignore the true within-title variability that exists in jobs and occupations. That is, they report a single summary description of the activities performed in an occupation, and offer only limited additional guidance as to which activities "may" be performed (DOT) or which are considered "core" tasks (O*NET).

What's The Problem?

The key problem caused by the presence of within-title variability in work activities concerns aggregation bias. Simply put, when different workers in each job — or the different jobs in a given occupational cluster — truly differ in terms of what work they perform or how they perform it, the summary description formed by averaging across diverse positions and jobs "averages away" the true within-title variability that exists.

Aggregation bias occurs when this overall average profile fails to provide an accurate description of the work activities performed by the diverse positions or jobs that were combined to form it. For example, if in a given job 50% of the workers are required to lift 100-pound objects from the floor to an overhead position, yet the other 50% never are required to do so, it is obviously highly misleading to say that "on average" the job requires workers to perform overhead lifting of 50-pound objects.
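The lifting example reduces to a few lines of Python (hypothetical data), showing how the "average" requirement describes none of the positions that were averaged:

    # Overhead-lifting requirement (lbs) across ten positions in one job title:
    # half must lift 100 lbs overhead, half never do.
    overhead_lift_lbs = [100] * 5 + [0] * 5

    mean_lift = sum(overhead_lift_lbs) / len(overhead_lift_lbs)
    print(mean_lift)                        # 50.0 -- the misleading "average" requirement
    print(sorted(set(overhead_lift_lbs)))   # [0, 100] -- the requirements that actually exist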

How Does CMQ Help?

CMQ addresses this problem in two main ways. First, CMQ items and rating scales were designed to describe reasonably detailed, objective, verifiable characteristics and demands of work. This allows you to determine if the within-job variability you see is due to true differences between positions, or whether it reflects erroneous ratings of the positions.

Second, the CMQ reporting function allows you to assess the degree, and type, of within-title variability that exists among the different positions that make up each job. By making it easy to spot items on which different raters disagree, you can easily review the ratings to determine if the disagreement is valid or erroneous.
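A minimal sketch of this kind of variability report follows; the item names, ratings, and flagging threshold are hypothetical. The idea is simply to compute the spread of ratings across the positions holding a title and flag high-disagreement items for review:

    from statistics import pstdev

    # Hypothetical ratings of two GWA items across four positions in one title.
    ratings_by_item = {
        "operate forklift":       [4, 4, 4, 4],   # consensus
        "negotiate with vendors": [0, 3, 0, 4],   # disagreement: true variability or error?
    }

    for item, ratings in ratings_by_item.items():
        if pstdev(ratings) > 1.0:                 # arbitrary flagging threshold
            print(f"REVIEW: {item} ratings={ratings}")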

<>  Back to FAQ topic list


Is CMQ Useful for SSA Disability Claims Adjudication?

Each year, the US Social Security Administration (SSA) processes 2-3 million new claims for disability status, and denies a sizable percentage of them. If a disability claim is denied, the claimant may pursue an appeals process.

What's The Problem?

Amazingly, given that the DOT has not been updated since the early 1990's, SSA relies on obsolete DOT descriptions of occupations when making disability decisions, and when defending claims denials in the appeals process. Even if the DOT descriptions had been updated, ample reason exists to question the validity of the worker-trait requirements reported in DOT on abstract constructs such as Strength or SVP (see the what's wrong with the DOT FAQ topic).

How Does CMQ Help?

CMQ provides tools that help vocational experts and attorneys who work with individuals whose SSA disability claims were denied to better pursue an appeal:

  • Analyze the Claimant's Actual Job. It is often the case that the way a claimant performed his/her past job does not closely match the way the occupation is "typically" performed (i.e., as described in the overall summary listed in O*NET or the DOT).

    Using CMQ, you can describe the way your client performed his/her past work, identify the areas of similarity and difference in comparison to the "typical" way it is performed (if the occupation is in our national database), and determine how the job scores on abstract work dimensions (including ones similar to the DOT Strength and SVP scales).

  • Which Occupation Best Matches the Claimant's Past Work? One of the parts of the claims adjudication process that is often surprisingly difficult is determining which occupational title provides the closest match to the claimant's past work.

    If you complete a CMQ describing the claimant's past work, you can use the profile-matching component of the CMQ job description function to identify the O*NET-SOC occupational titles that are most similar to it.

  • Assessing Skills Transferability. Although in many cases SSA's decision to approve or deny a disability claim is driven largely by what the "grids" say — a process that involves a relatively simplistic analysis based on Strength, SVP, age, and education — in some cases a more detailed skills-transferability analysis may be performed.

    The processes typically used to assess skills-transferability involve either comparisons of the DOT narrative descriptions, or quantitative profile-matching based on the numerical DOT ratings for the occupations in question. Both of these approaches can be criticized: in the former case, for subjectivity, and in the latter, for relying on long-obsolete DOT descriptions that were of questionable quality even when new, and that arguably do not provide adequate detail in describing the work activities required of employees.

    The profile-matching component of the CMQ job description function can also be used to identify the O*NET-SOC occupations that best match the claimant's residual functional capacity (a sketch of this kind of matching follows this list). Simply administer a CMQ and rate all of the work activities the individual can still perform given their medical limitations, and then see if any occupations provide a close match to that skills profile.

  • Within-Title Variability. One of the major limitations of DOT and O*NET is that they don't provide useful information to indicate the degree to which a given occupation is performed differently across settings, or across the different jobs that are clustered to form one of their occupational groupings.

    With CMQ, you can search our national database for a given occupation to see how much variability across rated positions exists in that title on any specified work activity, or on overall work dimension scores similar to the DOT Strength and SVP scales. (Not yet available, coming soon)
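As noted above, here is a minimal sketch of the profile-matching idea: rank occupations by the distance between the claimant's work-dimension profile and each occupation's profile. All titles and profiles are hypothetical placeholders, and the actual CMQ matching statistics may differ.

    from math import dist

    claimant = [2.0, 3.5, 1.0]      # hypothetical scores on three work dimensions

    occupations = {                 # hypothetical occupational profiles
        "order clerk":       [2.2, 3.3, 1.1],
        "warehouse laborer": [0.5, 1.0, 4.5],
        "dispatcher":        [2.5, 3.0, 0.8],
    }

    # Smaller Euclidean distance = closer match to the claimant's profile.
    for title, profile in sorted(occupations.items(), key=lambda kv: dist(claimant, kv[1])):
        print(f"{title:18s} distance={dist(claimant, profile):.2f}")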

<>  Back to FAQ topic list


How Can CMQ Help for Career Guidance and Exploration?

People seek information in the areas of career guidance and exploration for a range of reasons:

  • Students: Whether selecting a major in college, pursuing technical training, or figuring out what kinds of jobs to seek after completing school, students have a strong need to obtain useful information that can help them make career choices.

  • Unemployed Workers: Thanks to outsourcing, downsizing, or unfavorable economic conditions, many individuals who want to work may suddenly find themselves out of a job. In such cases, being able to identify different occupations that perform similar general work activities and require similar skills-sets can be helpful in broadening the list of job openings one pursues.

  • Dissatisfied Workers: Many individuals find themselves in the position of having a job, but wishing they could have a different one that would be a better fit to their personality and abilities. Being able to identify a range of occupations whose predicted personality and ability profiles are similar to your own could be very helpful in letting you investigate jobs in which you might feel more comfortable.

What's The Problem?

Existing career guidance tools are useful to a point, but few offer a solution that focuses on:

  • Quantitatively matching a person's own unique personality and ability profile against the expected profiles of hundreds of different occupations in the economy.

  • Determining what kinds of general work activities a person would like to perform, and then quantitatively matching that unique profile with the actual work-activity profiles of hundreds of different occupations.

How Does CMQ Help?

When using CMQ for career guidance purposes, we take two main approaches (a sketch follows the list below):

  • Personal-Trait Matching: After a person completes our online test battery, we can quantify his or her unique personality and ability profile. We then compute profile-similarity statistics to find the best matches to that profile, using the JCV-based predicted ability and personality profiles of hundreds of different occupations.

  • General-Work-Activity Matching: After a person completes the CMQ, indicating the general work activities he or she would like to perform on an ideal job (or listing all of the activities he or she can perform based on past work, education, and training), we can then match that unique work-dimension profile against the actual profiles of hundreds of different occupations.
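The sketch below illustrates the two-pronged idea under simple assumptions: Pearson correlation serves as a crude shape-similarity index, and all trait and activity profiles are hypothetical placeholders rather than actual JCV-predicted values.

    from statistics import correlation   # Python 3.10+

    person_traits = [0.8, -0.3, 1.2, 0.1]    # hypothetical measured trait scores
    wanted_gwas   = [3, 0, 4, 4, 1]           # hypothetical desired work activities

    occupations = {                           # hypothetical occupational profiles
        "technical writer": {"traits": [0.9, -0.1, 1.0, 0.2], "gwas": [3, 1, 4, 3, 1]},
        "field mechanic":   {"traits": [-0.5, 0.8, -0.2, 0.9], "gwas": [0, 4, 1, 0, 4]},
    }

    for title, prof in occupations.items():
        t = correlation(person_traits, prof["traits"])   # trait-profile similarity
        g = correlation(wanted_gwas, prof["gwas"])       # activity-profile similarity
        print(f"{title:16s} trait-match={t:+.2f} activity-match={g:+.2f}")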

Although we obviously cannot guarantee that an individual would indeed actually be successful at — or enjoy performing — the occupations that provide the closest match to their personality/ability or desired work activity profiles, we think that our two-pronged quantitative approach offers much useful information to those seeking career guidance assistance.

That is, the key aspect of common-metric job analysis is that it offers a tool that can identify jobs and occupations that may differ dramatically in terms of their technologically detailed tasks, but that are actually quite similar in terms of their underlying general work activities. People are often quite surprised to find that the skills-set of GWAs they have acquired in their past jobs matches the requirements of different occupations that they would never have thought were similar.

<>  Back to FAQ topic list


How is CMQ useful for "synthetic" or job-component validation (JCV)?

One of the biggest challenges in the areas of employee selection and disability concerns the need to determine what levels of various person-side traits are required to be able to successfully perform a job or occupation.

Employers need to identify the worker requirements of jobs when screening applicants. Vocational experts and others involved in the disability claims adjudication process need such information to identify occupations that a disabled worker could still perform, given their residual functional capacity.

If the job-requirements process is performed at a behaviorally specific, skills-based level of analysis (e.g., using the Level 2 GWA items measured in CMQ), then this is actually a relatively simple task (a sketch follows the list below):

  • The "required skill" profile for a job is defined as the list of Level 2 GWA items from the CMQ that are rated as being performed on the job, and for which competence is required at time of hiring.

  • The "desirable skill" profile is defined as the Level 2 GWA items performed on the job, but for which competence is acquired after hiring (i.e., via training or experience).

  • Individual applicants are assessed in terms of their demonstrated capacity to perform the GWAs in the required- and desirable-skill lists.

  • Person-job matching for selection purposes is performed by ranking applicants based on the closeness of their match to the two GWA skills profiles.
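Here is the sketch referenced above: a minimal, hypothetical illustration of ranking applicants by how fully they cover the required-skill profile, with desirable skills as a secondary criterion.

    required  = {"draft correspondence", "schedule meetings", "reconcile accounts"}
    desirable = {"train new staff"}

    applicants = {   # hypothetical demonstrated-GWA sets
        "A": {"draft correspondence", "schedule meetings", "reconcile accounts", "train new staff"},
        "B": {"draft correspondence", "schedule meetings"},
        "C": {"reconcile accounts", "train new staff"},
    }

    def match(skills):
        # (proportion of required skills covered, proportion of desirable skills covered)
        return (len(skills & required) / len(required),
                len(skills & desirable) / len(desirable))

    for name, skills in sorted(applicants.items(), key=lambda kv: match(kv[1]), reverse=True):
        req, des = match(skills)
        print(f"applicant {name}: required={req:.0%} desirable={des:.0%}")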

What's The Problem?

In contrast, when setting requirements on abstract person-side trait constructs in the physical ability, cognitive ability, or personality domains, there is simply no easy way to determine the minimum levels of those traits a person must possess before being able to perform the work. That is especially true when (as is often the case):

  • Multiple person-side traits are required for successful performance.

  • Standards are set for occupations that show nontrivial true within-title variability in the work activities performed across the different jobs subsumed under them, or for jobs that exhibit high within-title variability across the multiple positions that perform them.

  • Compensatory relationships exist between the multiple predictor traits (i.e., being high on one trait may compensate for having a lower score on another). If compensatory relationships exist among the multiple predictors, it may well be impossible to identify "the" true minimum-required profile across the traits.

In our assessment, the holistic approach used in O*NET and the DOT represents the worst-possible approach to setting worker-trait requirements. In what can only be described as well-meaning guesswork, in the holistic approach raters who claim to be knowledgeable about the job use vague single-item rating scales to directly rate the amount of a hypothetical, latent, personal trait needed to perform the work.

Not surprisingly, numerous research studies (e.g., see our Documents section) have documented the fact that single-item holistic ratings produce data of highly questionable psychometric quality, and totally indeterminate validity.

How Does CMQ Help?

One strategy that has been advanced to deal with the challenge of linking job-side descriptions of work demands to person-side trait requirements is job-component validation, or JCV. The Documents page contains a section on using "synthetic" validity methods such as JCV to predict person-side trait requirements from profiles of scores on common-metric work dimensions.

Our position on JCV is that it is a highly useful tool for some HR applications, but that it is not a panacea, and that as always, practitioners must carefully assess the legal-defensibility status of any use they make of job analysis data to guide applied personnel decision-making. A more detailed discussion of these issues can be found in:

  • Harvey, R. J. (2010). Motor oil or snake oil: Synthetic validity is a tool, not a panacea. Industrial and Organizational Psychology, 3, 351-355.

  • Harvey, R. J. (2012). Chapter 8: Analyzing work analysis data. In Wilson, Bennett, Gibson, & Alliger (Eds.), The handbook of work analysis: The methods, systems, applications, and science of work measurement in organizations. Psychology Press/Routledge, Taylor and Francis Group (ISBN-10: 1848728700).

  • Harvey, R. J. (2011, April). Deriving synthetic validity models: Is R = .80 large enough? Paper presented at the Annual Conference of the Society for Industrial and Organizational Psychology, Chicago.

In the CMQ system, we use JCV for several purposes, including:

  • Career guidance: using a method for finding occupational matches that compares an individual's profile of scores on ability and personality traits against the trait profiles for occupations predicted using JCV.

  • Employee selection: the person-side trait profile for a given job analyzed using CMQ is predicted using JCV equations.

  • Compensation rates: the expected level of pay for a job analyzed using CMQ is predicted using a JCV-type policy-capturing equation that predicts national market pay rates from CMQ work dimension scores.

  • Disability and rehabilitation: the expected scores on the DOT Strength and SVP scales for a job analyzed using CMQ are predicted using JCV-type policy-capturing equations derived by predicting those DOT scales from CMQ work dimension scores (see the sketch below).
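The sketch below shows the general form of deriving and applying such an equation via ordinary least squares. The data are random placeholders; actual CMQ JCV equations are estimated from real occupational databases and may use different models.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                      # work-dimension scores, 200 jobs
    true_b = np.array([0.8, -0.2, 0.5, 0.0, 0.3])      # the "policy" being captured
    y = X @ true_b + rng.normal(scale=0.5, size=200)   # criterion, e.g., a Strength rating

    X1 = np.column_stack([np.ones(len(X)), X])         # add intercept column
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)         # least-squares JCV equation

    # Apply the fitted equation to a new job's dimension scores (intercept first).
    new_job = np.array([1.0, 0.2, -1.1, 0.4, 0.0, 0.9])
    print("predicted criterion:", round(float(new_job @ b), 2))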

<>  Back to FAQ topic list


Why Not Use O*NET Instead?

The US Department of Labor has spent many tens (if not hundreds, by this time) of millions of taxpayer dollars in an effort to develop a replacement for the long-obsolete Dictionary of Occupational Titles (DOT). Their new system is called the Occupational Information Network, or O*NET.

O*NET was supposed to satisfy the needs of former DOT users, and do much, much more to help public- and private-sector employers make critical HR decisions, including staffing and setting selection requirements.

What's The Problem?

Simply put, in the opinion of many I/O Psychologists and others, O*NET has been an unmitigated failure, especially with respect to meeting its initial goals of providing high-quality occupational data that would be useful for setting worker trait requirements of jobs.

In particular, the data reported by O*NET describing both work activities and required worker traits has been criticized as being highly subjective, exhibiting poor psychometric properties, and lacking any convincing evidence regarding the validity of its ratings of abstract work-activities and worker-trait requirements. See the O*NET Problems topic in the Documents section for research studies examining the psychometric issues.

Is O*NET Really That Bad?

Yes.

To illustrate the degree to which the single-item holistic rating approach used by O*NET is so problematic and unsuitable for use in a high-stakes HR application, consider a few of the actual O*NET rating scales used to rate the person-side ability traits needed to perform a job.

See for yourself — the examples selected below don't even pass the "laugh test." Unfortunately, the scales shown below typify the highly subjective holistic data-collection philosophy that lies at the core of O*NET.

 

[Figure: O*NET single-item holistic rating scales (Importance and Level) for the ability Oral Expression, with the O*NET database ratings for the occupation Industrial-Organizational Psychologist marked.]

In the example above, the actual O*NET ratings of the occupation Industrial-Organizational Psychologist are shown by the green and red diamonds (using the original, and updated, ratings). Several obvious problems with this holistic rating scale can be seen:

  • In addition to being highly subjective (e.g., precisely what is the difference between "Very Important" and "Extremely Important"?), the Importance rating scale is within-job relativistic. That is, the rater is not asked to describe in an absolute sense how important this trait is, but only how important it is relative to performing the job in question.

  • The Level rating (which is the one used to set employee requirement cutoffs) only uses three rating anchors (at 2, 4, and 6), providing absolutely no guidance as to what kinds of behaviors reflect the 1, 3, 5, and 7 points.

  • The location of the anchors is questionable. For example, is the difference in the amount of Oral Expression needed to "cancel newspaper delivery by phone" versus "give instructions to a lost motorist" really the same amount (2 units) as the difference between "give instructions to a lost motorist" and "explain advanced genetics to college freshmen"? The latter would seem to be a far greater difference.

  • The content of the anchors is problematic, given that other than by pure chance, the behavioral anchors for each scale will describe behaviors that are not performed on the job being rated!

  • The validity of the Level rating is highly questionable, given the degree to which I/O Psychologists are required to engage in high levels of oral expression (e.g., when testifying in court as an expert witness, when lecturing to undergraduates and graduate students, when explaining the results of complicated statistical analyses or procedures to clients).

 

[Figure: O*NET single-item holistic rating scale for the ability Wrist-Finger Speed, with the O*NET database ratings for Industrial-Organizational Psychologist marked.]

Lest one think the above-noted problems with the Oral Expression scale are isolated examples, consider a very straightforward trait in the physical domain shown above, Wrist-Finger Speed. Obvious problems include:

  • The definition of the trait provided is somewhat complex, raising questions regarding the degree to which different raters would interpret it identically.

  • The anchors are problematic, only anchoring 3 points of the 7-point scale, and not coming close to being equally spaced along the scale.

  • The location of the anchors is highly questionable. That is, is the amount of Wrist-Finger Speed required to "carve roast beef in a cafeteria" versus "type 90 words per minute" really four times the amount of the difference between "use a manual pencil sharpener" versus "carve roast beef in a cafeteria"?

  • The validity of the displayed ratings of I/O Psychologist is highly questionable. Given that work activities involving Wrist-Finger Speed obviously include using a keyboard on a computer, and that I/O Psychologists typically spend a considerable amount of time using a keyboard to perform word processing, presentation preparation, and data-analysis tasks, rating this trait as being effectively irrelevant to that occupation is fundamentally inaccurate.

 

[Figure: O*NET single-item holistic rating scale for the ability Trunk Strength, with the O*NET database ratings for Industrial-Organizational Psychologist marked.]

Likewise, consider another very straightforward trait in the physical domain, Trunk Strength. Serious questions again include:

  • The location of the anchors: is the difference in the amount of Trunk Strength needed to "sit in an office chair" versus "shovel snow for half an hour" really the same as the amount of difference between "shovel snow for half an hour" versus "do 100 sit-ups"?

  • The ordering of the anchors is even questionable, as one might argue that "shovel snow for half an hour" (especially deep, wet snow) is a more demanding task than "do 100 sit-ups" (especially given the lack of a specified time limit for doing so).

  • The validity of the rating for I/O Psychologist is questionable. Although this is one of the relatively rare cases in which an O*NET anchor — "sit up in an office chair" — actually reflects something a worker in the rated job would do, the rating that was given is lower than the anchor for an activity that is obviously performed!

 

[Figure: O*NET single-item holistic rating scale for the psychomotor ability Response Orientation, with the O*NET database ratings for Industrial-Organizational Psychologist marked.]

Finally, in a classic illustration of how fundamentally flawed the O*NET holistic rating approach actually is, consider a trait in the psychomotor domain, Response Orientation. Problems include:

  • Highly complex and confusing definition of the trait, which is written in such a fashion that even trained job analysts might have considerable difficulty interpreting it consistently.

  • Absurd anchors, especially the "out of control spacecraft" one. How many people, including trained job analysts, have any idea what is involved in restoring control in an out-of-control spacecraft? Perhaps all an astronaut needs to do is push a large button labeled "automatic stability control", and no complicated choices or movements are required at all?

  • Fundamental validity questions. Despite the fact that this is another of the rare cases in which an O*NET anchor actually corresponds to a job activity — I/O Psychologists may well have to deal with cases in which their phone rings at the same time someone knocks on their door — the ratings reported in the O*NET database are lower than an obviously-applicable anchor.

How Does CMQ Help?

CMQ was designed to remedy many of the serious design flaws and limitations seen in most first-generation "worker-oriented" job analysis surveys, as well as the holistic rating problems seen in O*NET and the DOT (which holistically rated Strength, SVP, and other abstract constructs).

In particular, CMQ was designed to:

  • Provide comprehensive content coverage with respect to the general work activities performed in both managerial (exempt) and non-managerial jobs.

  • Use rating scales that are specific, verifiable, and non-relativistic.

  • Use items written at a reading level that most job incumbents can comprehend.

  • Use GWA items that are behaviorally specific enough to allow each rating to be independently reviewed to verify accuracy.

<>  Back to FAQ topic list


Why Not Use DOT Descriptions?

Many organizations continue to rely on the DOT, including the US Social Security Administration when adjudicating disability claims.

What's The Problem?

There are many serious limitations with the DOT, and in our assessment, it's nearly as flawed as O*NET in terms of providing high-quality data that HR, vocational, disability, and other practitioners can use in high-stakes applications.

  • Woefully obsolete. The DOT was last updated in the early 1990's, and many of its descriptions were last updated decades earlier.

  • Aggregation bias. Although the approximately 13,000 DOT occupational titles are more fine-grained and detailed than the much smaller number of occupational clusters rated in O*NET (1,200) and SOC (800), the summary DOT descriptions offer little guidance in terms of identifying how much true within-title variability in work actually occurs in an occupation.

  • Questionable validity. Even its own developers (DOT, 1955) made it clear that the DOT's listings of worker-trait requirements for occupations were highly subjective in nature, and that they should only be seen as providing rough guidance as to required traits.

Regarding the latter issue, it is important to note that the 1955 Estimates of Worker Trait Requirements for 4,000 Jobs as Defined in the Dictionary of Occupational Titles was the first version of the DOT to report large-scale listings of “trait requirements” for occupations (see a picture of Drs. Fine and Harvey perusing the 1955 DOT).

It contained a telling section entitled Cautions in Use of Data:

"The user is again reminded that the trait information contained in this volume represents the considered judgments of experienced occupational analysts. In addition, such judgments were based on job information contained in the [DOT] definitions which frequently do not completely reflect the type of trait information presented here. This is primarily due to the fact that [DOT] definitions were not intended to serve that purpose. The data, therefore, should be used critically as a general guide and as a source of suggested trait information" (pp. viii – ix, emphasis added).

In other words, the inferences reported in the DOT regarding such abstract requirements as Strength and SVP are highly speculative, and lack any evidence regarding validity. This fact was duly noted by the DOT's developers, but subsequently ignored by decades of DOT users.

All of the problems with "holistic" rating strategies that have been raised regarding the O*NET are equally applicable to the holistic overall ratings reported in DOT, especially for key DOT constructs like Strength and SVP that play such a crucial role in the SSA disability claims adjudication process.

How Does CMQ Help?

CMQ addresses the limitations of the DOT in the same fashion that it addresses the flaws of O*NET.

Simply put, the only major flaw of the O*NET system that is not also present in the DOT system involves the occupational title taxonomies they use. To its credit, the detailed DOT title taxonomy tends to produce occupational clusters that have much less true within-title variability in work activities and worker requirements than the simplistic SOC-based system used in O*NET.

<>  Back to FAQ topic list


What are the "Two Worlds of Work"?

Dunnette (1976) identified “two worlds of human behavioral taxonomies” that were the primary focus of I/O Psychologists.

  • Person-Side: This domain consists of the abilities, traits, skills, and other personal characteristics that are relevant to job performance.

  • Job-Side: This domain consists of the activities and contextual characteristics and demands that define what people do when performing a job.

As the figure below indicates, both the person- and job-side domains can be thought of as having a hierarchical organization, and varying across a continuum of behavioral specificity. Thus, detailed, easy-to-observe characteristics lie toward the bottom of each hierarchy, with abstract, theoretical, hypothetical constructs that are not amenable to direct measurement or observation at the top.

[Figure: the person-side and job-side hierarchies of work, each arranged along a continuum of behavioral specificity.]

What's The Problem?

Earlier in the history of I/O Psychology, there seemed to be no difficulty in grasping the fact that fundamental differences exist between these "two worlds" of characteristics. In particular,

  • Person-side elements are properties of people whereas job-side elements are activities that jobs require people to perform.

  • Job analysis is the process of describing the job-side activities that are required of workers, whereas psychological measurement is the process involved in assessing the person-side traits possessed by individuals.

Unfortunately, in recent decades, a number of I/O Psychologists have engaged in what we find to be a highly counterproductive effort to blur the distinction between the person- versus job-side domains of content. Some have gone so far as to attempt to fundamentally re-define the term "job analysis" from its correct definition — i.e., the description of the job-side activities that workers are required to perform on a job — to a fundamentally incorrect definition stating that job analysis involves both describing the required job-side activities as well as inferring the person-side traits required to be successful in a job.

Presumably, the motivation behind this effort to fundamentally re-define the term "job analysis" is a somewhat cynical attempt to reduce the burden that exists on users of job analysis data to make applied personnel decisions (e.g., setting minimum trait requirements for jobs for employee screening) to offer convincing evidence of the validity of such inferences and decisions.

How Does CMQ Help?

CMQ was designed to focus solely on the descriptive process of identifying the job-side activities, contextual characteristics, and other demands that jobs require of workers. It does this by focusing strictly on rating Level 2 items that are sufficiently objective and behaviorally specific to allow easy rating, and independent verification for accuracy.

Nowhere in CMQ are raters asked to speculate about or infer any types of abstract or hypothetical characteristics, on either the job-side (as some earlier "worker-oriented" instruments did, and the O*NET continues to do) or on the person-side (which O*NET does as well, by directly rating the human ability requirements of occupations on single-item holistic scales).

As publishers of the CMQ, we stress that all responsibility for using CMQ data to make applied personnel decisions falls entirely on the practitioner. Our goal is to give practitioners the tools they need to collect accurate data describing the work activities required by their jobs. However, the responsibility for actually collecting accurate ratings of job-side activities, for reviewing and verifying the accuracy of CMQ ratings, and then using such data to make sound inferences regarding applied HR and personnel decisions, is solely the practitioner's.

<>  Back to FAQ topic list


What is the "Third World of Work"?

Dunnette (1976) identified the "two worlds of human behavioral taxonomies" — i.e., job-side work activities and person-side personal traits — that many see as being the primary focus of I/O Psychologists. However, there is actually a "third world of work" that is at least as important as those two.

This third hierarchical domain is highly relevant to the job analysis process. Here, the question being addressed is "how are job-side activities combined by employers to form the 'work' that employees are hired to perform?" In other words, the question of job classification or job family formation.

As the figure below indicates, this third hierarchy is also organized vertically based on behavioral specificity, with highly detailed collections of activity performed by individual positions at the bottom, and clusters of similar positions grouped to form jobs at the next level.

Note that both of these levels exist within an individual organization. In terms of which of these organizational structure entities are "real," only the position has a direct tie to objective reality — all higher levels of aggregation come into existence as the result of a judgment process that determines which lower-level entities are deemed "similar enough" to one another to be combined to form a cluster.

The three higher levels of the hierarchy exist across organizations, with clusters of similar jobs being grouped to form DOT-type occupations, which can be further aggregated to form SOC-level occupations and the "occupational units" (OUs) of the O*NET. At the highest level of abstraction, abstract job families of the type seen in validity-generalization (VG) applications can be formed as clusters of similar SOC-level occupations.

[Figure: the organizational-structure hierarchy, from positions and jobs within organizations up to DOT-type occupations, SOC-level occupations and O*NET occupational units, and VG-type job families.]

Clearly, some advantages are gained in the areas of simplicity and parsimony by creating higher-level clusters to define the "work which exists in the economy" (a phrase that is key to the way the Social Security Administration adjudicates disability claims). For some purposes, such as describing macro-level economic and employment trends, taking an abstract view of work may clearly be useful.

What's The Problem?

Unfortunately, an increasingly serious price is paid as we move to higher and higher levels of behavioral abstraction to describe how "work" exists in the economy. Namely, the further up the hierarchy one goes, the greater the inevitable increase in true within-title variability in the resulting occupational entities.

For example, even when considering relatively specific DOT-type occupations (the DOT title taxonomy defined approximately 13,000 distinct titles in its last revision), considerable true variability in the way work is performed may still exist across different organizations, geographic regions, industries, etc., in which the work is performed.

However, when DOT-type occupations are grouped to form SOC occupational titles — or the similarly abstract "OUs" described in O*NET — massive levels of true within-title variability are often present.

The listing below shows the SOC titles that contain 50 or more titles that are considered to be distinct occupations in the DOT — 16 of these SOC titles contain 100 or more DOT titles, and 4 contain 500 or more.

[Listing: SOC titles containing 50 or more distinct DOT occupational titles]
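To make the aggregation concrete, here is a hedged Python sketch of how such a tally could be produced from a DOT-to-SOC crosswalk file. The file name and column names are assumptions for illustration, not part of CMQ or of any official crosswalk format:

    import csv
    from collections import Counter

    counts = Counter()
    with open("dot_to_soc_crosswalk.csv", newline="") as f:
        for row in csv.DictReader(f):       # assumed columns: dot_code, soc_code
            counts[row["soc_code"]] += 1

    # Report the heavily aggregated SOC titles, largest first.
    for soc, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        if n >= 50:                         # the threshold used in the listing above
            print(f"{soc}: {n} distinct DOT titles")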

The "all other" titles (e.g., "Managers, All Other") represent an especially eggregious construction. By their very name, they make it clear that the developers of the title taxonomy have failed in the task of identifying a coherent, meaningful cluster of jobs or DOT occupations. Instead, the "all other" SOC and ONET-SOC titles comprise a hodgepodge of heterogeneous titles, with the primary thing they have in common being the simple fact that they don't meaingfully fit in any of the other clusters.

How Does CMQ Help?

CMQ was designed to give practitioners a tool they can use to help manage the serious problems caused by true within-title variability and aggregation bias.

The key to detecting and reducing aggregation bias is to rate all positions within each existing job title when you plan your job analysis data collection project. For large jobs that are performed in different departments, organizational units, geographic areas, etc., make sure that positions are sampled and rated from all of the subgrouping factors that might be associated with different ways of performing the job.

Yes, it's quicker and easier to simply complete one CMQ per job title in an "aggregate" fashion than it is to rate the multiple positions that make up each job.

However, if you don't describe at least a subset of the different positions holding a job, it is impossible to determine the amount of true within-title variability that is present, and the degree to which your existing job title system needs to be revised to form more homogeneous titles.

And if you don't describe all positions within each job, then when significant true within-title variability is present, you won't be able to effectively split up an overly heterogeneous existing job title to form homogeneous new titles. That is, if you reassign titles, everyone holding the old title must be reassigned to one of the new titles; for positions you haven't rated, you won't know which new title they should be assigned to.
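As a concrete illustration, here is a minimal Python sketch of the kind of per-item review that rating all positions makes possible. The item names, ratings, and flagging threshold are entirely hypothetical:

    import statistics

    # position -> {item: rating}; 0 = "does not apply" (hypothetical data)
    ratings = {
        "position 1": {"negotiate with external executives": 4, "operate forklift": 0},
        "position 2": {"negotiate with external executives": 4, "operate forklift": 3},
        "position 3": {"negotiate with external executives": 5, "operate forklift": 0},
    }

    for item in next(iter(ratings.values())):
        values = [r[item] for r in ratings.values()]
        sd = statistics.pstdev(values)                      # within-title spread
        pct = sum(v > 0 for v in values) / len(values)      # share performing it
        flag = "  <-- review: positions disagree" if sd > 1.0 else ""
        print(f"{item}: SD={sd:.2f}, performed by {pct:.0%}{flag}")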

There are many reasons why it is important to be able to assess within-title variability. For example:

  • ADA compliance: When identifying the essential functions of a job, one straightforward way to determine what is or is not essential is to determine whether all positions in the job indeed perform a given activity. If a nontrivial percentage of position incumbents do not perform a given activity, it's difficult to consider it "essential" to the job.

    Likewise, when developing strategies for reasonable accommodation, examining the activities that only a subset of positions perform — especially ones that might otherwise be considered critical or essential — may provide insights into why some workers can perform their jobs without doing the activity.

  • Developing minimum requirements: The always difficult task of determining where to set cutoff scores on various person-side traits used for employee screening is made much more complicated when significant true within-title heterogeneity is present. If meaningfully different "sub-jobs" are present within your job titles, they may very well have different requirements in terms of the personal traits needed to perform them.

  • Compensation: Likewise, if meaningfully different "sub-jobs" are present within your job titles, this has important implications for where that job should be slotted in your compensation system grade structure. If differences are large enough, the "sub-jobs" in each title might actually belong in different paygrades.

  • Job descriptions: One of the key problems caused by within-title variability is aggregation bias — i.e., the situation in which a job-aggregate summary profile of work activities fails to accurately describe the activities performed on the individual positions in the job. Obviously, aggregation bias is highly undesirable when forming accurate job descriptions, and if significant within-title variability is present, the job should be split and the "sub-jobs" within it given different titles (a small numeric illustration of the bias follows this list).
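The following toy example shows aggregation bias in miniature: with two "sub-jobs" hiding inside one title, the job-aggregate "average" profile describes none of the positions that produced it. All numbers are invented:

    # Four positions rated on four activities (hypothetical 0-5 ratings).
    profiles = {
        "position 1": [5, 5, 0, 0],   # sub-job A: heavy on the first two activities
        "position 2": [5, 4, 0, 1],
        "position 3": [0, 0, 5, 5],   # sub-job B: heavy on the last two
        "position 4": [1, 0, 4, 5],
    }

    n_items = 4
    mean_profile = [sum(p[i] for p in profiles.values()) / len(profiles)
                    for i in range(n_items)]
    print("job-aggregate profile:", mean_profile)   # roughly "moderate on everything"

    # Yet no position actually performs any activity at a moderate level:
    for name, p in profiles.items():
        err = max(abs(a - b) for a, b in zip(p, mean_profile))
        print(f"{name}: worst item-level error vs. aggregate = {err:.2f}")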

<>  Back to FAQ topic list


What is a Position?

A position is the only "real" organizational structure entity; it consists of the work activities and responsibilities that are assigned to a single worker. If no positions are vacant, the number of positions in an organization equals the number of employees.

What's The Problem?

People often misuse the term "position" to refer to a job, which is a collection of similar positions that share a job title and job description.

Of greater importance, when multiple positions perform a given job, it is usually the case that true within-title variability will exist in terms of how the work is performed, or which activities are performed by different workers in the job.

How Does CMQ Help?

We strongly recommend that when you plan your job analysis project, you administer a CMQ to rate all positions that perform each job being analyzed.

If you rate each of the positions within a job, the CMQ job description reporting function lets you quantify the amount of cross-position, within-title variability present in your jobs. By examining the within-title variability in CMQ ratings across positions, you can determine whether rating disagreements reflect (a simple review heuristic is sketched after this list):

  • true differences in how the job is performed (in which case, you may want to revise your job title system to reduce within-title heterogeneity), or

  • rating errors that should be fixed before you use your job analysis database.
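One hedged, purely illustrative heuristic for that review step: disagreement that lines up across many items suggests a true "sub-job," while an isolated disagreement on a single item is a candidate rating error to verify with the incumbent. The data and thresholds below are hypothetical:

    import statistics

    ratings = {                     # position -> item ratings (hypothetical)
        "pos 1": [4, 4, 0, 0, 3],
        "pos 2": [4, 5, 0, 0, 3],
        "pos 3": [0, 0, 5, 5, 3],   # deviates on many items
        "pos 4": [4, 4, 0, 0, 0],   # deviates on one item only
    }

    n_items = 5
    medians = [statistics.median(p[i] for p in ratings.values())
               for i in range(n_items)]

    for name, p in ratings.items():
        deviant = [i for i in range(n_items) if abs(p[i] - medians[i]) >= 2]
        if len(deviant) >= 3:
            print(f"{name}: broad disagreement on items {deviant} -> possible true sub-job")
        elif deviant:
            print(f"{name}: isolated disagreement on items {deviant} -> verify for rating error")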

<>  Back to FAQ topic list


What is a Job?

Unlike a position, which represents the set of objectively verifiable work activities assigned to one actual employee, a job is not a "real" organizational structure entity. Rather, a job is a hypothetical entity developed for our administrative convenience, defined as the collection of positions that are deemed to be similar enough to one another to be able to share a common job description.

What's The Problem?

People often misuse the term "job" to refer to a position (a more specific organizational entity), or to an occupation (a more abstract organizational entity).

Of greater importance, when multiple positions perform a given job, it is usually the case that true within-title variability will exist in terms of how the work is performed, or which activities are performed by different workers in the job.

How Does CMQ Help?

We strongly recommend that when you plan your job analysis project, you administer a CMQ to rate all positions that perform each job being analyzed. This will allow you to use the job description reporting function to examine the amount of within-title variability present in each job.

If, after reviewing the ratings and fixing any rating errors, you still find that significant within-title variability is present, you should revise your job title system (i.e., the way in which positions are assigned to job titles) to reduce true within-title heterogeneity.

This may require creating additional job titles, or splitting existing titles into more cohesive clusters. You can perform these actions using the CMQ database manager.
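To show how such a split might be derived, here is a sketch that clusters position-level rating profiles with SciPy's standard hierarchical clustering. The profiles and the choice of two clusters are hypothetical, and this is one generic approach, not a description of the CMQ database manager's internals:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows = positions holding one over-heterogeneous title (hypothetical ratings).
    profiles = np.array([
        [5, 5, 0, 0],
        [5, 4, 1, 0],
        [0, 0, 5, 5],
        [0, 1, 4, 5],
    ])
    positions = ["pos 1", "pos 2", "pos 3", "pos 4"]

    tree = linkage(profiles, method="average")     # average linkage, Euclidean distance
    labels = fcluster(tree, t=2, criterion="maxclust")

    for pos, lab in zip(positions, labels):
        print(f"{pos} -> proposed new title {lab}")
    # The first two positions land in one cluster and the last two in the
    # other, suggesting the old title should be split in two.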

<>  Back to FAQ topic list


What is an Occupation?

Like a job, an occupation is a hypothetical entity developed for our administrative convenience, defined as the collection of jobs that are deemed to be similar enough to one another to be able to share a common occupational title. Occupations are more abstract than jobs.

A pictorial view of the hierarchical organization of units of work can be seen in the levels of work figure.

There are 840 titles in the Standard Occupational Classification (SOC) system, which is a widely used taxonomy for describing the overall structure of occupations in the economy. The O*NET-SOC system is a somewhat more specific taxonomy of occupational titles, which splits the 840 SOC titles into 1,110 O*NET-SOC occupations. The wizard that you use when you define each of the jobs in your CMQ database uses the 2010 version of the O*NET-SOC occupations.

What's The Problem?

People often misuse the term "occupation" to refer to a job, which is a more specific organizational structure entity than an occupation.

Of greater importance, it is often the case that very significant levels of true within-title variability exist within occupations that are defined at the SOC or O*NET-SOC level of abstraction. Each individual SOC or O*NET-SOC occupation may consist of multiple occupations that were considered to be separate in the older 13,000-title Dictionary of Occupational Titles (DOT) taxonomy.

Indeed, dozens, hundreds, and in some cases thousands of occupations that were given distinct DOT titles were clustered together to form a single SOC or O*NET-SOC occupation.

How Does CMQ Help?

Although the CMQ system asks you to select the best match in the O*NET-SOC occupational title system for each of your jobs when you define them in the CMQ database, we realize that it is often difficult to make a confident match to the abstract titles in SOC or O*NET-SOC.

One of the major advantages CMQ provides you is the ability to collect your own on-site job analysis data to describe the work performed on the jobs in your organization. By doing so, you do not have to rely on the occupation-level summaries provided by the O*NET database or the now-obsolete DOT, which suffer from numerous concerns:

  • It is often difficult to obtain a high-confidence match between your own jobs and the occupational titles in SOC or O*NET-SOC.

  • Even if you can make a high-confidence match to an occupational title, serious questions have been raised by researchers regarding the psychometric quality and accuracy of the data reported by O*NET and the DOT.

  • The substantial true within-title variability in many O*NET-SOC and SOC titles causes aggregation bias, the situation in which the "average" profile reported for each occupation does a poor job of describing the work requirements of the jobs and DOT-level occupations clustered together to form the SOC title. As the amount of true within-title variability increases, aggregation bias unavoidably makes the SOC or O*NET-SOC occupational summary an increasingly inaccurate description of the work performed on the heterogeneous jobs within it.

Clearly, when you need to make mission-critical applied decisions regarding work requirements or employee-selection standards, you need to base those decisions on high-quality, verifiable job analysis ratings. In our assessment, there is no substitute for collecting your own job analysis data, and subjecting the ratings to a rigorous accuracy review.

<>  Back to FAQ topic list


What is a Task?

At its simplest, a task is a behaviorally and technologically specific description of a work activity that produces a meaningful outcome on the job in question. Typically, task statements contain an explicit action verb and the object of the action, and may also contain information regarding the results of the action, the types of tools or equipment used in performing it, and the degree of discretion the worker has when performing the task.
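Those elements can be pictured as a simple record. The sketch below assembles them into a conventional task statement; the field names and example values are ours, not a CMQ format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TaskStatement:
        action_verb: str                       # explicit action
        object_of_action: str                  # what the action is performed on
        result: Optional[str] = None           # outcome produced
        tools_equipment: Optional[str] = None  # technology used
        discretion: Optional[str] = None       # degree of worker latitude

    task = TaskStatement(
        action_verb="calibrates",
        object_of_action="pressure gauges",
        result="to keep readings within tolerance",
        tools_equipment="using a deadweight tester",
        discretion="following a fixed maintenance schedule",
    )
    print(f"{task.action_verb} {task.object_of_action} {task.result} "
          f"{task.tools_equipment}, {task.discretion}")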

Task-based job analysis is useful for some HR functions, such as developing a detailed training program, developing a work-sample test, composing a licensing or certification exam, etc. Tasks form the bottom of the hierarchical view of describing work illustrated in the levels of work activity figure.

What's The Problem?

Task-based job analyses have two main problems:

  • They are very time- and labor-intensive, often requiring expensive job analysts and extensive interviews with job incumbents or SMEs.

  • They offer only a very limited ability to make meaningful comparisons between jobs that differ in terms of their specific tasks. For many HR functions, the ability to make such comparisons is critical (e.g., developing compensation systems, forming job families, using "synthetic" validity equations to predict worker trait requirements, providing career guidance, etc.).

How Does CMQ Help?

CMQ helps practitioners who do not require a Level 1, task-based analysis by providing an easy way to quantify work activity at Levels 2-5 of the levels of work activity hierarchy. By rating jobs in terms of their Level 2 activities — a level that is still behaviorally specific enough to allow for meaningful, independent review and validation of rating accuracy — CMQ lets you collect defensible job analysis data. Scores on the more abstract Levels 3-5 work activity constructs can then be empirically derived from those defensible Level 2 ratings.

CMQ is also helpful for those who do need a task-based job analysis. That is, rather than having to sit down with a blank sheet of paper and try to get job incumbents to describe all of their job's tasks to a job analyst, organizations can first administer CMQs to the positions holding the job in question.

Once the CMQs are completed, you can then use the CMQ job description (in particular, the list of all GWA items that were rated as being applicable) as the basis for organizing the task-writing process. For example, if a GWA such as "engaging in external contacts with executives outside your organization for the purpose of negotiating" is rated as being performed, the analyst can then ask the incumbent to list the specific tasks involved in performing that GWA (this probe-generation step is sketched after the list below).

Although this approach does not remove much of the labor-intensiveness from the task authoring process, it offers the benefits of:

  • Providing a conceptual organization to the task-writing process, allowing the analyst to focus the discussion on general activity areas already known to be applicable.

  • Offering further assurances regarding the content coverage adequacy of the job analysis, by ensuring that all of the relevant GWA categories have been identified, and using that information to probe the incumbent to list the more detailed tasks corresponding to each.
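A minimal sketch of the probe-generation step mentioned above: take the GWA items rated applicable and turn each into an interview prompt for the incumbent. The item texts and the applicable/not-applicable flags are hypothetical:

    gwa_ratings = {   # GWA item -> rated applicable? (hypothetical)
        "engaging in external contacts with executives outside your "
        "organization for the purpose of negotiating": True,
        "operating heavy construction equipment": False,
        "supervising the work of clerical employees": True,
    }

    probes = [
        f"You indicated that you perform: '{gwa}'. Please list the specific "
        f"tasks involved in performing this activity."
        for gwa, applicable in gwa_ratings.items() if applicable
    ]

    for p in probes:
        print("-", p)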

<>  Back to FAQ topic list


What is a General Work Activity (GWA)?

A general work activity (GWA) is a descriptor of work that occupies an intermediate level of behavioral specificity, being more abstract than a task, but less abstract than a work dimension.

The so-called "worker oriented" approach to job analysis popularized in the 1960's by McCormick and colleagues — which is more correctly termed common-metric analysis — was developed to address the need to be able to make meaningful comparisons between task-dissimilar jobs. Many use the term GWA to refer to the types of items rated in standardized job analysis questionnaires that are designed to describe most, or all, jobs in the economy.

What's The Problem?

People often mistakenly view the task-based versus common-metric approaches to job analysis as producing qualitatively different types of information describing work.

How Does CMQ Help?

In fact, tasks, GWAs, and work dimensions do not describe qualitatively different things at all. Rather, they represent three different levels of behavioral abstraction that one can take when describing what kinds of activities are required of workers on jobs.

The figure below illustrates the hierarchical nature of the job side of Dunnette's (1976) "two worlds of work." On the job side, the continuum ranges from highly technologically specific tasks at the bottom through highly abstract work activity constructs (work dimensions) at the top.

GWAs define the content in the middle range of this continuum.

For convenience, the job-side continuum has been logically divided into five general categories: Level 1 corresponds to task-level data, Levels 4-5 represent abstract work dimensions, and Levels 2-3 correspond to what most people mean when they use the term "GWA" to describe work.

[Figure: the five-level job-side continuum, from technologically specific tasks (Level 1) through GWAs (Levels 2-3) to abstract work dimensions (Levels 4-5)]

<>  Back to FAQ topic list


What is a Work Dimension?

A work dimension is a hypothetical construct that describes required work activity on a job at a high level of behavioral abstraction. Work dimensions are often identified via factor analysis of more detailed GWA item ratings in common-metric job analysis instruments.

Perhaps the most widely known work dimensions are the Data, People, and Things (DPT) constructs that form the core of Sidney Fine's Functional Job Analysis theory. As can be seen in the levels of work activity figure, the DPT constructs form the top of the pyramid of content.

Interestingly, although they were developed via rational means by Fine, ample research evidence exists (e.g., Harvey, 2004) to support the presence of these three constructs in higher-order factor analyses.

What's The Problem?

A big problem facing job analysis practitioners has been the question of identifying an accurate, valid, psychometrically defensible method for measuring the level of these abstract work dimensions in a given job.

Many work analysis systems — including the O*NET and DOT — have used holistic rating methods to describe the levels of work dimensions present in jobs. All of the abstract O*NET ratings, and most of the more critical DOT ratings such as Strength and SVP, rely on the holistic rating method to produce their work dimension scores.

Unfortunately, ample research (see the Resources section) has shown that it is effectively impossible to collect high-quality, reliable, and demonstrably valid or accurate ratings of abstract work dimensions using single-item holistic rating scales.

How Does CMQ Help?

CMQ uses a "decomposed judgment" approach to measuring work dimensions, based on combining ratings of more specific, verifiable Level 2 GWA items using factor-analytic scoring methods to estimate each work dimension. Literally hundreds of individual item ratings are combined when computing the factor-score estimate of each work dimension.

Using this approach, as long as you have taken the time to review and verify your CMQ item ratings in the CMQ job description function and correct rating errors as needed, you have a clear "paper trail" directly linking the abstract work dimension scores back to detailed, verifiable CMQ item ratings.

The 23- and 71-factor CMQ work dimension scoring systems represent the current way in which CMQ combines item-level responses to estimate Level 4 work dimensions.
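In outline, a decomposed-judgment score is simply a weighted combination of many standardized item ratings. The sketch below shows the mechanics with a handful of items and two dimensions; the weights, item means, and SDs are invented, whereas CMQ's actual scoring uses factor-score weights derived from its normative factor analyses and combines hundreds of items per dimension:

    import numpy as np

    item_ratings = np.array([3.0, 0.0, 4.0, 2.0])  # one job's verified item ratings
    item_means   = np.array([2.0, 1.0, 2.5, 2.0])  # normative means (hypothetical)
    item_sds     = np.array([1.0, 0.8, 1.2, 0.9])  # normative SDs (hypothetical)

    # Factor-score weight matrix: rows = items, columns = work dimensions.
    weights = np.array([
        [0.40, 0.05],
        [0.02, 0.50],
        [0.35, 0.10],
        [0.10, 0.30],
    ])

    z = (item_ratings - item_means) / item_sds   # standardize each item rating
    dimension_scores = z @ weights               # estimate each work dimension
    print("estimated work-dimension scores:", dimension_scores)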

<>  Back to FAQ topic list