What if you knew when a job candidate would quit before you offered them the job? While the tech’s yet to predict the time new hires will spend at a job down to the minute, two startups are selling pattern matching that helps employers predict whether prospective employees will make it longer than a year. If the tech works, it could save employers billions. But is the payoff worth the trade-offs?
First, there’s Pymetrics, a data science startup that runs job applicants through a series of 12 online games. But “they’re not really just games,” CEO and co-founder Dr. Frida Polli explains. “They are scientific exercises that have been developed by the cognitive neuroscience community globally to look at different cognitive and emotional traits.”
People applying for positions at Accenture or Unilever, Pymetrics’ lead clients, play solitaire online, stack virtual rings in piles, and participate in other exercises designed to decipher the inner workings of an applicant’s brain: Are they an altruistic person? Do they work more quickly or more slowly when dealing with distraction? The technology measures more than 90 cognitive, emotional, and social traits, all of which, Polli says, “come to the surface for different roles at different companies.”
Once the tests have gauged how a candidate’s mind works, Pymetrics’ predictive modeling goes to work. “For every client we work with, we have their top performing individuals go through the games and that’s how we establish the traits that are important for good performance in that role at that company,” Polli explains. The models pattern-match a candidate’s results against that data set from successful employees, and bam: Pymetrics returns a percentage-based match reflecting how likely the test-taker is to remain in that job for more than a year.
What counts as successful, Polli says, is determined by each client, but the company encourages employers to use quantifiable metrics, such as the number of quarters in which a salesperson hit their goal. Matches range from one to 100 percent and are categorised as strong, medium, or low. A medium or low match signals that an applicant is less likely to mesh well and more likely to quit.
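Pymetrics hasn’t published how its matching works, but the mechanics Polli describes resemble a standard similarity calculation: represent each candidate as a vector of trait scores, compare it against a profile built from a client’s top performers, and bucket the result. Here’s a minimal sketch in Python; the trait names, scores, and strong/medium/low cut-offs are all hypothetical.

```python
from math import sqrt

# Hypothetical trait scores on a 0-1 scale; the real model measures 90+ traits.
TOP_PERFORMERS = [
    {"risk_preference": 0.8, "planning_speed": 0.6, "altruism": 0.4},
    {"risk_preference": 0.7, "planning_speed": 0.7, "altruism": 0.5},
]

def centroid(profiles):
    """Average each trait across a client's top performers."""
    keys = profiles[0].keys()
    return {k: sum(p[k] for p in profiles) / len(profiles) for k in keys}

def cosine_similarity(a, b):
    """Similarity between two trait vectors (in [0, 1] for non-negative scores)."""
    keys = a.keys()
    dot = sum(a[k] * b[k] for k in keys)
    norm = sqrt(sum(a[k] ** 2 for k in keys)) * sqrt(sum(b[k] ** 2 for k in keys))
    return dot / norm if norm else 0.0

def match(candidate, profiles):
    """Return a 1-100 match score and a strong/medium/low bucket (cut-offs assumed)."""
    score = round(100 * cosine_similarity(candidate, centroid(profiles)))
    bucket = "strong" if score >= 70 else "medium" if score >= 40 else "low"
    return score, bucket

print(match({"risk_preference": 0.9, "planning_speed": 0.3, "altruism": 0.5}, TOP_PERFORMERS))
```

The real product presumably weights traits differently per role and per client; this only illustrates the shape of the comparison.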
AI retention predictions: A question of accuracy
But Danny Nelms, president at Work Institute, a workforce research company, says predictions like these are problematic: “75 percent [of turnover] is more controllable versus less controllable,” he says, explaining how internal, company-driven factors are most often why employees leave. Some, like substandard pay, can be controlled on a business-wide level, but other times, great employees are simply paired with bad bosses who push them to leave.
Take a workplace with 40 different managers, Nelms suggests: “Well, all 40 of those managers manage slightly different, right? Am I going to get my AI to be so specific to be able to understand exactly how this person wants to be managed and that’s exactly the person they’re going to be managed by?”
Pymetrics doesn’t look at matches on an individual team level, but it does acknowledge that a job with the same title can be radically different from one company to the next. Take sales, for example: Some businesses need aggressive hunters while others seek relationship builders. Polli says, “Maybe 50 years ago, jobs were more similar across different companies. Potentially, the world was less complex. But I think nowadays there’s just so much variability in what someone would call any given role that I think it’s hard to just say, ‘Oh, look for these three things and you’re all set.’”
But looking for key traits is exactly what her company does. When this reporter took Pymetrics’ tests, I scored high in “risk preference for high risks,” “risk preference for low risks,” and “planning speed.” My results listed these as negative traits for an entrepreneur, predicting I had only a six percent chance of lasting more than a year as one. I’m a two-time tech founder who sold her first company for a multiple of revenue after running it for nine years, and speaking as a tech founder, those so-called negative traits helped me do my job.
Questions of accuracy aside, the use case for AI retention predictions remains compelling. Work Institute reports that turnover costs American companies $536 billion a year with 34 percent of employees leaving during their first year. It costs “about 33 percent of base pay to lose an employee,” Nelms says, explaining there are recruitment costs and training costs, plus the productivity hit from not having someone in the job.
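Those Work Institute figures make the back-of-envelope maths easy to run. The sketch below applies the 33-percent-of-base-pay estimate to a hypothetical company; the headcount, average salary, and turnover rate are invented purely for illustration.

```python
def annual_turnover_cost(headcount, avg_salary, turnover_rate, cost_per_exit=0.33):
    """Estimate yearly turnover cost: number of leavers x 33% of base pay per exit."""
    leavers = headcount * turnover_rate
    return leavers * avg_salary * cost_per_exit

# Hypothetical: a 500-person company, $80,000 average salary, 20 percent annual turnover.
print(f"${annual_turnover_cost(500, 80_000, 0.20):,.0f}")  # -> $2,640,000
```

Even a modest cut in turnover at that scale frees up hundreds of thousands of dollars a year, which is the business case both startups are selling.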
The predictive power of references
Of the 12 cognitive tests Pymetrics runs applicants through, none measures actual job skills. For pattern matching tied to job performance, there’s SkillSurvey, a Philadelphia-based company that analyses feedback from the references candidates provide.
This feedback, according to President and CEO Ray Bixler, comes in two forms. First, references rate 30 workplace behaviours on a scale of one to seven. The company also collects verbatims, which he describes as “a free field area that allows the reference to further explain … three areas of strength and three areas of development.” SkillSurvey uses the verbatims to add context to the numerical ratings, then pattern-matches a candidate’s ratings against those of others who applied for similar roles.
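SkillSurvey hasn’t detailed its scoring either, but the ratings side of what Bixler describes maps onto a simple aggregation: average each reference’s one-to-seven behaviour ratings, then see where that average falls against applicants for similar roles. A rough sketch, with invented numbers and far fewer than the 30 behaviours the real survey covers:

```python
from statistics import mean

def average_rating(reference_scores):
    """Flatten each reference's 1-7 behaviour ratings into one mean score."""
    return mean(score for ref in reference_scores for score in ref)

def percentile_vs_benchmark(candidate_avg, benchmark_avgs):
    """Share of comparable applicants whose average rating this candidate beats."""
    below = sum(1 for b in benchmark_avgs if b < candidate_avg)
    return 100 * below / len(benchmark_avgs)

# Two references, each rating a handful of behaviours (hypothetical values).
refs = [[6, 5, 7, 6, 5], [5, 6, 6, 7, 6]]
benchmark = [4.8, 5.2, 5.9, 6.1, 5.5, 6.4]  # assumed averages for similar roles
avg = average_rating(refs)
print(round(avg, 2), f"{percentile_vs_benchmark(avg, benchmark):.0f}th percentile")
```

How the verbatims are folded in, and how heavily they count, is the part of the system this sketch can’t capture.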
“The No. 1 most important data set to look at that predicts turnover more than any other time and again is reference response rate,” Bixler says. “The data conclusively continues to come back in that how the references rate the applicant will in fact help reduce turnover and predict performance one year later.” The average SkillSurvey client sees a 35 percent reduction in turnover.
For pattern matching to predict this, though, you have to use good patterns. At DocuSign, Senior Director of Recruiting Susan Ross says references must respond to SkillSurvey within two days or an applicant won’t get the job; any later, and internal matching shows they’re not the best hire. Never mind that there are lots of reasons references might not respond that quickly that have nothing to do with the candidate. The two-day cutoff is DocuSign specific, though, and Ross says this and the company’s other matching works: After deploying SkillSurvey internationally, DocuSign opted not to hire 11 individuals whose reference results were poor, saving more than $1 million in possible turnover costs.
Back at Pymetrics, Polli says one-year attrition improves by 14 to 60 percent after deployment. “We’ve also seen a correlation with or an association with increased job performance metrics like increased sales,” she shares.
For applicants who are used to overcoming the odds, pattern matching might not be a good thing. And by not hiring people who can rise above their statistical chances, employers miss the insights those hires would have supplied. But that doesn’t make turnover any less costly, or the need to manage it any less real. The real answer lies in how individual recruiters use the tech’s predictions.