A few years ago, Jason Freedman was confronted by a classic hiring challenge. The 10-person startup he had founded, an online commercial real estate service called 42Floors.com, was growing and needed to staff up rapidly. Suddenly, it seemed, Freedman, who had myriad other duties as CEO, was spending hours at a time sifting through towering stacks of résumés. He was overwhelmed.
The solution appeared in the form of artificial intelligence software from a young company called Interviewed. It speeds the vetting process by providing online simulations of what applicants might do on their first day as an employee. The software does much more than grade multiple-choice questions. It can capture not only so-called book knowledge but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using “please” and “thank you,” for example, suggests empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations while still paying attention to detail. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” claims Freedman, whose company now employs 45 people. “Forty-eight hours later, we’ve hired someone.” It’s not perfect, he says, but it’s faster and better than the human way.
It isn’t just startups using such software; corporate behemoths are implementing it too. Artificial intelligence has come to hiring.
Predictive algorithms and machine learning are fast emerging as tools to identify the best candidates. Companies are using AI to assess human qualities, drawing on research to analyze everything from word choice and microgestures to psycho-emotional traits and the tone of social media posts. The software tends to be used in the earlier part of the process, when companies are narrowing a pool of applicants, rather than in the later stages, when employers place a premium on face-to-face interaction and human judgment.
A wave of startups is offering a profusion of services. San Francisco–based Entelo mines the Internet and social profiles to predict which applicants are likely to switch jobs. Another California startup, Talent Sonar, offers machine-learning algorithms that write job descriptions aimed at improving gender diversity; the software even hides applicants’ names, gender, and personal identifiers in hopes of overcoming the unconscious biases of hiring managers. Utah-based HireVue uses video interviews to examine candidates’ word choice, voice inflection, and microgestures for subtle clues, such as whether their facial expressions contradict their words.
Google has also entered the hiring software fray. Last fall, it released a new program called Cloud Jobs to some customers. Behemoths such as Johnson & Johnson and FedEx use it on their job-listings sites to communicate better with potential applicants. To build its software, Google scanned millions of job openings to uncover connections between certain attributes and job performance, then applied analytics and machine-learning models. In theory that allows J&J’s career page to present search results more likely to match the intent of job seekers. The software also makes J&J’s postings more visible to people doing searches on the Internet.
AI for hiring is “hot, and it’s competitive,” says Josh Bersin, principal at Bersin by Deloitte, the HR arm of the consulting giant. Some 75 startups are now scrambling for a piece of the $100 billion HR assessment market. “I get emails every day from someone who decides they’re going to fix the recruiting market through artificial intelligence,” Bersin says. Can algorithms learn to probe one of the most mysterious of all human endeavors—matching a person to a job—better than actual humans can? And will solving some old problems end up creating new ones?
Five Insights on Hiring That AI Is Building On
Forget About Grades
GPAs and test scores are worthless as hiring criteria, according to research Google did on its own workforce. The proportion of Googlers without any college education has increased over time; on some teams, as many as 14% never went to college.
Grit Matters More Than IQ
University of Pennsylvania professor Angela Duckworth studied military cadets, rookie teachers in tough neighborhoods, and new salespeople to determine who would endure and succeed. The common thread wasn’t IQ, social intelligence, looks, or health. It was passion and persistence.
Experience Isn’t Everything
A study by the American Association of Inside Sales Professionals and AI startup Koru concluded that experience didn’t predict sales success. Another found that grads with mid-level roles in extracurriculars outperformed club presidents, because companies need team players more than stars.
Their Star May Not Be Your Star
A person who excelled at a rival may fizzle at your firm. Some 75% of Koru’s predictors vary even among similar roles at similar companies. At one company, the number of hours a candidate worked during college might be a predictor; at another, it’s whether she took psychology courses, an indicator of teamwork. The match is crucial.
Ignore That Facebook Photo
A study by AI company Fama found that pictures of drinking on social accounts don’t imply bad job performance. Such photos are so common that screening for them means eliminating huge swaths of people. By contrast, bigoted comments or posts about drugs were linked to subpar performance.
People prefer to make judgments about other people, of course. But it turns out they’re not very good at it. Yale School of Management professor Jason Dana, who has studied hiring for years, recently made waves with a high-profile article in the New York Times that excoriated job interviews as useless. “They can be harmful,” Dana wrote, “undercutting the impact of other, more valuable information about interviewees.” Among other things, he noted the tendency by hiring managers to turn impressions from a conversation into a coherent—but often incorrect—narrative.
A Google veteran agrees. “Most interviews are a waste of time because 99.4% of the time is spent trying to confirm whatever impression the interviewer formed in the first 10 seconds,” says Laszlo Bock, the company’s former HR chief. Bock authored the book Work Rules! after revamping the company’s hiring strategy.
Google began reviewing its approach in 2008. In its early years, the company had recruited from elite schools like Stanford and MIT. But when Google examined its internal evidence, it found that grades, test scores, and a school’s pedigree weren’t a good predictor of job success. A significant number of executives had graduated from state schools or hadn’t completed college at all.
This led Google to rethink how it hires and set it on a path to using algorithms that help identify the traits its research shows are actually relevant: cognitive abilities, intellectual humility, and the ability to learn. Google created a program called qDroid, which parses the data an applicant provides and drafts interview questions for assessing the qualities Google emphasizes.
Data is crucial here. It would be hard to imagine the proliferation of AI for hiring without the dramatic increase in job-related information. Not that long ago, companies would receive a paper résumé, and software would scan it to compare skills and experience and give it a score. But LinkedIn changed that, offering troves of résumés presented with extensive information about a person’s relationships. AI’s strength is the ability to comb through such data, examine multiple variables, and find patterns that humans might not see.
Most of the software available today doesn’t use the kind of AI that eventually starts thinking on its own. It relies on what’s called “supervised” learning: HR managers and data scientists together establish and tweak the variables to be weighted, based on the qualities of known high performers.
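In rough terms, supervised screening means a model learns feature weights from labeled examples of past hires, and people can then inspect and adjust those weights. Here is a minimal, purely illustrative sketch (the features, numbers, and labels are invented, not any vendor’s actual model):

```python
# Toy "supervised learning" sketch: learn one weight per candidate feature
# from HR-supplied labels of which past hires were high performers.

def train_weights(examples, labels, lr=0.1, epochs=200):
    """Perceptron-style updates: nudge weights toward correct labels."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = y - pred                      # 0 if correct, +/-1 if not
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def score(w, b, x):
    """Raw score for a new candidate; higher suggests a better match."""
    return b + sum(wi * xi for wi, xi in zip(w, x))

# Invented features per past hire: [grit_signal, teamwork_signal, gpa]
past_hires = [[0.9, 0.8, 3.9], [0.2, 0.3, 3.8], [0.8, 0.9, 2.9], [0.1, 0.4, 3.5]]
high_performer = [1, 0, 1, 0]   # the human "supervision"
w, b = train_weights(past_hires, high_performer)
```

In this toy data the model learns that the grit signal, not GPA, separates the high performers — and because the weights are explicit, an HR team can read and second-guess them, which is the point of the supervised approach described above.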
The software is far from foolproof. People can program their biases into algorithms. Says Bock: “Trying to understand people using computers is far more complicated than trying to understand transactions or commerce.”
Many of the AI startups specialize in what you’d expect: using computing power to process huge quantities of data. One startup, Fama, automates the analysis of a job candidate’s identity, seeking online clues about her character or world view. Ben Mones says he started the Los Angeles–based company after hiring a man who seemed great on paper and in an interview but turned out to be a misogynist and racist. Mones says he would have known that if he had seen the man’s social media posts. But doing that kind of searching has the potential for bias—and legal risk.
Scanning a candidate’s social media for information about race, religion, sexual orientation, or political affiliation is illegal and can spark complaints of hiring discrimination. “It’s hard to unring the bell and prove that you didn’t use that information in an employment decision,” says Pamela Devata, a partner at employment law firm Seyfarth Shaw. “The Equal Employment Opportunity Commission assumes that if you accessed it, you used it.”
Mones decided that AI is the only solution to the problem. It can quickly mine thousands of social media posts and web articles and analyze them while shielding employers from liability. But executing that meant teaching computers to read text, image, and video just like a person. Says Mones, “That’s tough AI to build.”
Fama constructed its data set by asking tens of thousands of students to label the same set of text, photos, and videos. It developed methods to get groups of people to agree that a certain post reflected, for example, bigotry. Fama then trained the algorithm to identify those traits in other posts. The software uses natural-language processing and image recognition to read text, images, and video like a person. It combs through seven years of data and uses comparative analytics, similar to Amazon’s “customers who bought this also bought” feature, so users can see how candidates stack up.
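The labeling step — keeping only posts that many annotators agree on — can be sketched in a few lines. This is a hypothetical simplification of the consensus process described above, not Fama’s actual pipeline:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.75):
    """Return the majority label only if enough annotators agree, else None."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count / len(annotations) >= min_agreement else None

# Invented example: four annotators tag each post.
raw = {
    "post_1": ["bigotry", "bigotry", "bigotry", "ok"],   # 3/4 agree
    "post_2": ["ok", "bigotry", "drugs", "ok"],          # no consensus
}
training_set = {
    post: label
    for post, label in ((p, consensus_label(a)) for p, a in raw.items())
    if label is not None
}
# Only post_1 survives as a training example; post_2 is discarded.
```

Filtering out ambiguous posts before training is what lets the eventual classifier learn from labels that groups of people, not lone raters, stand behind.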
Mones says HR managers still hesitate to sign up, given the legal uncertainty and questions about privacy: “There is a healthy fear out there.” Indeed, for job seekers, the notion that an algorithm that they don’t know exists—based on input from strangers they will never meet—is making judgments about their character might sound like a high-tech version of being subject to the whims of human bias.
Five Ways Companies Are Using AI Outside of Hiring
U.K. grocery chain Morrisons is using AI from Blue Yonder of Germany to tailor daily prices for each product at each store as well as refill inventory based on advertising, weather, and holidays.
K&L Gates uses AI from ROSS Intelligence that combines machine learning, natural-language processing, and IBM Watson tech to process millions of pages, understand a query’s context, and draft a memo on its findings.
Progressive Insurance, Wells Fargo, and Hilton Hotels employ AI that analyzes callers’ tone, tempo, keywords, and grammar to route calls to agents with appropriate skills. The software, from Mattersight, reduces call time by 23%.
TripAdvisor uses software from Flyr that lets customers lock in prices for two to seven days before booking. And Thomson, the U.K.-based seller of travel packages, offers an AI travel assistant powered by Watson.
Employers are increasingly deploying AI to intuit subtler issues, including whether an applicant will mesh with a company’s culture or stay with the organization for a significant time. Adidas, HealthSouth, Keurig, and Reebok use an AI service called SkillSurvey. It predicts individuals’ turnover and performance based on words used by the people listed as references, who are presented with an online series of behavioral-science-based questions tailored to the specific job. The input is then graded and averaged. The results can be compared with a database of thousands of candidates for the same position—providing insight into how the candidate compares with others. HealthSouth, which employs 24,000 people, reported a 17% decrease in employee terminations, a 10% drop in people quitting, and 92% less time spent checking references after one year of using SkillSurvey.
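The grade-and-average step, plus the comparison against a database of prior candidates, reduces to simple arithmetic. A sketch with invented numbers (SkillSurvey’s actual scoring rubric is not public):

```python
# References answer behavioral questions on a numeric scale; scores are
# averaged, then placed against prior candidates for the same role.

def average_rating(reference_scores):
    """Flatten all references' answers and average them."""
    flat = [s for ref in reference_scores for s in ref]
    return sum(flat) / len(flat)

def percentile(candidate_avg, database):
    """Share of prior candidates who scored below this candidate."""
    below = sum(1 for d in database if d < candidate_avg)
    return 100 * below / len(database)

refs = [[4, 5, 4], [5, 5, 3], [4, 4, 4]]   # three references, three questions each
avg = average_rating(refs)
prior_candidates = [3.1, 3.8, 4.0, 4.5, 3.5, 4.1, 3.9, 2.8]
pct = percentile(avg, prior_candidates)     # how this candidate stacks up
```

The percentile is what gives a hiring manager the “how does this candidate compare with thousands of others” view the article describes.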
Citigroup is using AI to predict which new college grads to hire as investment bankers. The company wants to ensure diversity and make sure the new crop fits its culture and stays with the company. “We needed a more efficient and more effective screening process,” says Courtney Storz, the banking giant’s head of global campus recruitment.
Citigroup is rolling out software from a Seattle startup called Koru. It’s a two-step process. Koru first seeks to decode Citigroup’s culture and the traits of existing employees using a 20-minute survey. Then hiring managers work with Koru to come up with a separate survey for job candidates that looks for key characteristics that would increase the likelihood of a good match.
Former McKinsey consultant Josh Jarrett and tech entrepreneur Kristen Hamilton started Koru four years ago. After culling dozens of research studies on predictors of success, the duo launched Koru’s predictive analytics software 15 months ago. The software focuses on candidates in the first seven years of their career because recruiters have little to assess other than grades and the prestige of their college, Jarrett says. “GPA is easy for humans to grab onto and understand and assign too much weight to,” he says. “But AI can look across variables, see patterns in between the data.”
Those variables, he says, can reveal telltale signs of crucial qualities, such as persistence. Jarrett says the software uses algorithms that search for signs of grit in past behavior. It’s less a matter of any individual sign than the accumulation of them. Maybe a candidate was on the volleyball team. But what really matters is how long the person persisted—while, say, holding down a full-time job—as well as the leadership role she attained and the solo projects she completed. The software can suggest follow-up interview questions that let employers dig deeper.
Koru’s software can also identify a company’s past tendencies, such as a history of recruiting from certain colleges, and can adjust to look more broadly. The more data the AI software collects on hiring, retention, and performance, the more it learns.
Some AI programs are now venturing into the most intangible qualities: emotions that a job candidate herself may be unaware of. HireVue, for example, uses its algorithm to assess applicants’ video interviews. Data scientists teach the software to spot tens of thousands of hints about intents, habits, personality, and qualities. The software assesses whether a candidate uses active verbs, such as “can” and “will,” or relies on negative constructions like “can’t” or “have to.” It also checks for voice inflections and thousands of microexpressions that convey a range of emotion. The latter are based on a taxonomy by renowned psychologist Paul Ekman, who created an “atlas of emotions” with 10,000 facial expressions, which can flit by in 1/25 of a second. It’s easier for software to identify and correlate the emotions than it is for humans.
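The word-choice piece of this is the easiest to picture. A deliberately naive sketch — the word lists here are illustrative, not HireVue’s — counts active versus negative phrasing in a transcript:

```python
# Crude word-choice scoring: active language raises the score, negative
# or obligation-laden language ("have to", "can't") lowers it.

ACTIVE = {"can", "will"}
NEGATIVE = {"can't", "won't"}
NEGATIVE_BIGRAMS = {("have", "to"), ("had", "to")}

def word_choice_score(transcript):
    words = [w.strip(".,;!?") for w in transcript.lower().split()]
    active = sum(w in ACTIVE for w in words)
    negative = sum(w in NEGATIVE for w in words)
    negative += sum(pair in NEGATIVE_BIGRAMS for pair in zip(words, words[1:]))
    return active - negative

word_choice_score("I can ship this and I will own it.")      # positive score
word_choice_score("I have to check; I can't decide alone.")  # negative score
```

Real systems use natural-language processing over far richer features, but the underlying idea — turning a candidate’s phrasing into a signal that can be correlated with later job performance — is the same.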
HireVue uses a two-part process. The client company will record hundreds of job interviews and then chart the performance and retention of those who are hired. The software looks for links between traits found in those interviews and the eventual job performance. It aims to predict, say, whether a person will stay in a call-center job for more than two months or perhaps spot animosity toward past employers. “A person may say they loved their boss, but when they say the word ‘boss,’ a flash of contempt crosses their face,” says Loren Larsen, HireVue’s CTO. A single frame catches a sneer on one side of their face. That expression is tallied along with thousands of others.
AI that powerful sounds impressive—and a bit chilling. Among other things, it suggests that something as personal as psychotherapy could someday be handed over to an algorithm. Still, these are very early days. “No one has found the magic bullet yet,” says Bersin of Deloitte. If someone even comes close to that, the payoff could be huge.
A version of this article appears in the June 1, 2017 issue of Fortune with the headline “Where Does the Algorithm See You in 10 Years?”