Two Tools for Managing Applicant Flow

There are literally hundreds, if not thousands, of products in the marketplace promising solutions for managing applicant flow. But how do you know which ones to use, and which ones to trust? Let’s start with the trust part.

Wrong-Headed Testing

I’ve found that many people think they can author a screening test by sitting down, making a list of questions, and adding up the answers. Sorry: that approach is likely to deliver unstable results, have nothing to do with job performance (either overall or specific), and will probably deliver false information. Professionals always start with a good theory of job performance.

For example, consider something like job fit. This theory predicts that, among people of equal skills, those who like their job will outperform those who do not. A weak theory, on the other hand, would be that handwriting predicts job performance. Good theories of job performance are always based on expert evidence, not personal opinion. Weak theories hide behind claims that, while they don’t actually predict job performance, can “help” make a hiring decision. Can you spell verbal shell game? Ask yourself: how can a test “help” improve the quality of hire if it has nothing to do with job performance? Hmmm?

In general you want to look for tests that predict elements of job fit, job skills, and job attitude. Job fit tells you whether the person will like the job or not. Job skills tell you whether the person has the right skills to do the job. And, job attitude tells you whether the person wants to do a good job. Remember, though, the actual elements of fit, skills, and attitude will change from one job to another.

Once a developer has a good theory, a test needs to be developed and proven. This requires a lot of trial and error, edits, re-edits, internal analysis, and finally multiple studies showing test scores predict some aspect of job performance. Do you really want to make $10,000 decisions without having proof a test works? If you make it a habit to believe unsupported claims, I have a deal for you. First, give your candidate a cup of tea, save both the cup and leaves, pack them carefully, and send them to me with a cashier’s check for $1,000 (to cover shipping and handling). I’ll send you a free report that won’t actually predict job performance but will help you make a better hiring decision.

OK. Now that you have tossed out 90% of the bogus tests, here are two strategies for handling a large applicant flow and reducing early turnover. In later articles I’ll talk about improving organizational bench strength and improving sales.

Managing Expectations

Some organizations are fortunate enough to have a long queue of applicants. Hoping to reduce the applicant flow, they often purchase resume-analysis software. While this kind of software may reduce labor, would someone please tell me how analyzing thousands of documents filled with misleading information can produce anything better than dozens of documents filled with misleading information? Organizations need tools that actually screen for job-related performance.

Screening applicants is a good place to use web applications such as realistic job previews and smart application blanks. Realistic job previews are gut-honest descriptions (as opposed to sweet-sounding invitations) of what the job is like. They can be video, audio, or written, but should always include things the employee will encounter such as work rules, organizational climate, and job-related competencies. Their purpose is to let people know what they will be selected for and what the job will be like. If applicants do not like the RJP, they can opt out early in the process; about 5% usually do.

I once worked with an organization that used an automated phone screen. The only problem was applicants were hanging up before finishing. We implemented a brief RJP so candidates could listen to descriptions of different jobs and apply for the ones they liked (or at least, did not dislike). Completed application calls immediately became the norm. The biggest problem with developing an RJP is managers who say, “OMG! We can’t tell them that!” My reply is, “So, you would rather surprise them?” RJPs are very effective ways to reduce early turnover and screen out blatantly disinterested applicants, but, not as good as our next tool: smart application blanks.

Get Smart

Smart application blanks are a specific type of test that contains items known (i.e., proven) to predict an aspect of job performance such as early turnover, job fit, or job attitude. They work on the premise that “highly successful” employees have specific things in common … and, this next part is critical … things that unsuccessful employees do not. SABs usually produce a three-box result: 1) the candidate probably fits the good profile; 2) the candidate probably fits the bad profile; or 3) not really sure. Don’t get tricked into the idea of testing a candidate against a group average. That is bad science.
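To make the three-box idea concrete, here is a minimal sketch in Python. Everything in it (the item names, the weights, and the threshold) is invented for illustration; a real SAB derives its items and weights from validation studies, not guesswork.

```python
# Hypothetical three-box SAB scorer. Item names and weights are made up
# for illustration; a validated SAB would derive them from criterion studies.

GOOD_PROFILE = {"prefers_routine": 1.5, "team_oriented": 1.0, "night_shift_ok": 2.0}
BAD_PROFILE = {"job_hopper": 2.0, "dislikes_phones": 1.0}

def three_box(answers, threshold=2.0):
    """Classify a dict of yes/no answers into one of three boxes."""
    good = sum(w for item, w in GOOD_PROFILE.items() if answers.get(item))
    bad = sum(w for item, w in BAD_PROFILE.items() if answers.get(item))
    if good - bad >= threshold:
        return "good fit"
    if bad - good >= threshold:
        return "poor fit"
    return "not sure"

print(three_box({"prefers_routine": True, "night_shift_ok": True}))  # good fit
```

Note that the scorer compares the candidate against both profiles, not against a group average, and is allowed to say “not sure” when the evidence is thin.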

Now, let’s look deeper at what’s performance and what’s not; what’s different about employees compared to applicants; and, what kinds of things an SAB should measure.

Defining performance is slippery. Supervisor ratings are usually filled with personal bias; performance appraisal forms are usually one-size-fits-none; and completing a task usually combines many different skills at many different times. Accuracy depends on using performance numbers that are clear and as close to the employee as possible.

There is another problem when we use employees as our base. It’s called restriction of range. Think of RR as determining golf-skill differences between the best and the worst members of the PGA. In other words, the difference between the worst and the best PGA golfer is considerably less than the difference between the worst and the best tryout. We have to be very careful when we have restriction of range. Otherwise, we risk finding differences that are untrue and misleading (not a good thing if we plan to use the results for hiring).


Once we have determined clear performance standards and identified critical job elements, our next task is to develop a magic formula that ties them all together. There is seldom a straight-line relationship between fit-skills-attitude and performance. Sometimes they have different weights, sometimes they combine together, and sometimes the formula changes depending on what we want to predict. This is where I usually drag out the trusty old artificial intelligence engine.

Artificial intelligence analysis basically requires a very fast, very powerful computer. It first gives every element a value of 1, tests a basic formula using performance as the goal, and calculates the error between its predicted value and the actual value. Then it re-combines elements and re-tests the equation a bazillion times. The AI engine stops when it finds the best fit (i.e., least error) between the prediction and the performance measure. Since AI is a force-fit process, accuracy depends on doing a considerable amount of homework, specifically, having clear relationships between fit-skill-attitude and performance, having plenty of subjects in the study, and knowing when to quit.
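A toy version of that search loop might look like the following, with invented candidate data and a small brute-force grid standing in for the “bazillion” recombinations a real engine would try.

```python
# Toy stand-in for the "AI engine": search for fit/skill/attitude weights
# that minimize squared prediction error. All data here is invented.
import itertools

# Each row: (fit, skill, attitude) scores; performance is the known outcome.
candidates = [(3, 5, 4), (1, 2, 2), (4, 4, 5), (2, 5, 1), (5, 3, 4)]
performance = [8.2, 3.1, 9.0, 5.4, 8.0]

def error(weights):
    """Sum of squared differences between predicted and actual performance."""
    return sum(
        (sum(w * x for w, x in zip(weights, row)) - actual) ** 2
        for row, actual in zip(candidates, performance)
    )

# Every weight starts as a candidate value of 1; the grid search re-tests
# combinations and keeps the one with the least error, then quits.
grid = itertools.product([0.0, 0.5, 1.0, 1.5, 2.0], repeat=3)
best = min(grid, key=error)
print(best, round(error(best), 2))
```

A production system would use a far richer search and, critically, a much larger sample; with only five subjects a formula like this is guaranteed to over-fit, which is the “knowing when to quit” warning in practice.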

The final product is an innocent-looking short test that accurately predicts things like turnover, new accounts, customer service ratings, and so forth. Naturally, when you have a large pool and the luxury of picking and choosing applicants, this kind of data can be very helpful. For example, if your goal is reducing turnover, picking among applicants with an 80% predicted retention score will yield better results than picking applicants with a 60% predicted retention score.

In conclusion, every method used to separate qualified from unqualified applicants is a test. And every kind of testing contains patches of quicksand. Garden-variety interviews might satisfy an inner need to get to know the applicant, but validated tests and RJPs actually deliver results you can measure. Reducing turnover and increasing individual job performance using RJPs and SABs could be the single most effective organizational intervention you could make.

  • Mike Salet

    Dr. Williams,
    I found this article interesting. You are right, most candidate selection processes are, at best, ‘hit and miss’ propositions.

    I didn’t feel that the methods you discussed using were detailed clearly enough to be useable.

    Since there were no concrete tools recommended, I expect that your next suggestion would be for a company’s internal recruiting or HR management to hire you, or someone similar, to define the standardized candidate selection processes.

    In my opinion, this article doesn’t address the essence of effective candidate selection and the major problem which most HR groups face – a broad lack of established credibility and relationships between HR and the company’s hiring managers. In other words, HR generalists being asked to find specifically skilled candidates.

    Before a quality candidate selection tool can be used effectively, HR must be able to communicate with the hiring managers to clearly define what the qualified applicant looks like and gather the screening criteria to be used by the selection tool. Unfortunately, managers normally don’t find very well informed HR resources who already understand their basic requirements and who only need to be educated about any unique hiring criteria.

    It’s been my experience that the best internal recruiters are those who have worked in their company’s field (in the trenches) positions and have then been selected to perform related recruiting functions.

    Giving an experienced recruiter, with field related previous experience, a quality, flexible and re-usable candidate selection tool is my idea of a practical solution.

    Thank you for listening, I look forward to a reply.

    Regards,
    Michael Salet

  • Dave Pollock

    The second paragraph lost me… and rather quickly. Screening by use of minimum qualifications – Yes, “sitting down, making a list of questions, and adding up the answers…” can reduce applicant flow by significant amounts, depending upon job class.

    In general, the lower the job class, the higher the volume and therefore, the more valuable a minimum qualification screening tool becomes. The higher the job class, the less likely I am to even want to screen since experience often trumps education at these levels. And this is the level where minimum qualifications are most often waived for the right applicant.

    If you really think you can manage applicant flow by “look[ing] for tests that predict elements of job fit, job skills, and job attitude…” good luck with that. Just how would you stem the flow if they’ve all got to be tested anyway?


  • http://www.ScientificSelection.com Dr. Wendell Williams

    Mike…You are correct…these tools are not for the external recruiter…They take considerable time and effort to develop, validate and install. In addition, managers and recruiters alike seldom have an operating language to describe human performance. Sure, they can talk about results, but they get lost when defining the candidate skills (etc.) to achieve them. This hampers the ability to define, communicate and evaluate applicant skills.

    Dave…everyone does not need to be tested (i.e., standardized tests) any more than everyone does not need to be interviewed (i.e., verbal test). Testing should be a series of pass/advance steps. My comment about the kinds of tests to look for is a recommendation to avoid any test that is not directly job related. As to experience…yes, it is another important factor to consider…but, taken alone, OTJ experience leaves a significant amount of information undiscovered…that’s what additional testing (both verbal and standardized) is for.

  • Dave Pollock

    Dr. Williams,

    Thanks for clarifying.

    Perhaps it’s a semantics issue with the title of the article. When I see “Managing Applicant Flow” I assume volume is the variable to be managed. That said, too many or too few applicants becomes the target of the management effort. Given the recommended use of testing (of whatever sort) I imagined either a log-jam of applicants waiting to be tested or a pool of too few people to make testing discriminating enough to be of value.

    IMHO “Two Tools for Managing Progress Through Your Employment Process” might be a bit more illustrative. In other words, these tools are to be used for people who pass whatever wheat/chaff process you’ve initially used.

  • Mike Salet

    Dr. Williams,
    My comment concerned only company internal HR and recruiting group circumstances.
    Again, my point was that your client companies can implement all the referral and performance selection processes you recommend BUT without the internal recruiter’s ability to clearly understand the hiring manager’s selection criteria and then define/describe the requirements clearly for internal evaluation purposes, any selection related processes being used will not work effectively.
    20+ years of experience watching futile hiring processes “hit and miss” and evaluating the tremendous cost of making bad hires is the basis of my opinion.

  • http://hrtests.blogspot.com Bryan Baldwin

    Two questions:

    1) Do you have a source for the 5% figure of applicants dropping out b/c of RJPs? Just curious, I’d be interested in research on that.

    2) Do you have an example you can point us to of what a smart application blank looks like?

    Thanks!

  • http://www.ScientificSelection.com Dr. Wendell Williams

    Hi Bryan…the 5% figure comes from analyzing multiple client experiences with RJPs. I also remember some research numbers being published a few years back, but the citation eludes me.

    Think of an SAB as a combination of BEI and bio-data items that are highly correlated with one or more specific criterion measures. The ones I have built vary considerably with the position and the client, and they are confidential. However, here is a simple example:

    A temporary placement client of mine was using a VRS to screen candidates for call center jobs. They were frustrated because a majority of candidates seldom completed the call and the ones who did often failed to meet qualifications. When we isolated items associated with various job criteria, we redesigned the system to become an SAB and developed a weighted scoring system to rank-order applicants. The results included cutting the screening time in half, doubling the completion rate, and screening out more unqualified candidates.

    As to the S&H meta-analysis and its subsequent offspring…In my experience, people who quote these numbers act as if they were Newton’s three laws of motion. They seldom understand a meta-analysis is not actual data, but statistically adjusted numbers that do not actually apply to a specific study. From a practical standpoint, I have seen the closer the assessment is to the job, the more accurate it becomes. For example, there are some very good reasons why astronauts, military, police, and pilots are evaluated using job simulators and not BEIs. However, if one accepted the most current meta-analysis as hard fact, we could retire all that hardware and save taxpayer money. Not a good idea.

  • Carl Greenberg

    The biggest problem with defending and promoting quality assessments in the marketplace is applying a macro solution when the market is evaluating it on a micro level. That is, tests and assessments are developed based on large-sample statistical models, which basically say that on average they will correctly identify people who will be successful in a job. But end-users oftentimes evaluate these tests on a case-by-case basis: a particular person was not predicted correctly. This single or small-sample result is then overly weighted in the biased, negative perception of the end-user. Many analogies apply here, e.g., using macroeconomic data to select a winning stock, or using climate data to predict tomorrow’s weather in your neighborhood.

    Tests and assessments really pay off when they are applied across large numbers of hires (usually over months or years). Quality tests will then prove to make better hiring decisions than tests that are poorly constructed or that measure irrelevant job requirements.