A.I. hiring software faces a regulatory reckoning

November 23, 2021, 11:53 PM UTC

Legislation recently passed in New York City that would regulate the use of artificial intelligence in the hiring process could have major repercussions nationwide.

In early November, New York’s city council passed a bill that would require companies selling A.I.-powered hiring software to conduct third-party audits of their technology to ensure it doesn’t discriminate against women and people of color, among other bias-related problems.

Customers using the software would also be required to notify New York City job recruits about any use of A.I. in hiring and to disclose the kinds of personal information used to help the technology make decisions.

New York City Mayor Bill de Blasio, who has previously voiced support for the bill, has yet to sign it.

Julia Stoyanovich, an associate professor of computer science at New York University and a founding director of the school’s Center for Responsible AI, tells Fortune that the bill is a “big deal” because it represents the first attempt by any U.S. government to create regulations over “automated decision systems and hiring.” “There is a really tremendous need to be regulating the use of these tools, because whether you know it or not, essentially anybody who is on the job market is going to be screened by these tools at some point,” she says. 

Currently, little is known about how widely used A.I. hiring tools are. “At least in New York City, it will give us that information,” Stoyanovich says.

Last year, Illinois enacted a bill focused solely on the use of A.I.-powered video analysis software in hiring. Such technology often attempts to read people’s personality traits by analyzing their facial movements, voices, or word choices, under the theory that doing so helps companies find recruits who better fit their corporate cultures.

There is no current federal legislation regarding A.I. and hiring. However, the Equal Employment Opportunity Commission recently began an initiative to ensure that A.I. hiring tools comply with federal civil rights laws.

Stoyanovich supports using A.I. hiring tools to quickly filter enormous numbers of job applications. Advances in natural language processing could lead to better resume screening, such as identifying certain skills in candidates even when they haven’t listed those skills on their resumes. For instance, the technology could learn to associate military experience with having “crisis management skills,” she notes.

Stoyanovich sees problems, however, with companies using certain A.I. tools in hiring, such as video analysis during job interviews. She questions the tech’s capabilities, saying, “We don’t know if it’s predictive of performance on the job.”

Stoyanovich is encouraged that companies selling A.I. hiring software will, if the New York bill is enacted, be required to disclose more about their technology. At the very least, it would give business customers more transparency into how the hiring software they buy actually works, as opposed to what the tech’s developers claim. “I think we should also just stop and wonder whether these tools work at all,” Stoyanovich says, noting that the software may be less sophisticated than vendors claim. “Are they just giving us self-fulfilling prophecies and random noise and yet somebody is calling them A.I. so that they can charge a million bucks?”

Stoyanovich wishes New York City’s bill had required more thorough audits, which could expose potential bias problems involving job applicants who are elderly or have disabilities. But she says people must work together to strengthen the law over time once it’s enacted. “I think if we just kind of idly and passively sit by with this law on the books, not much will change,” Stoyanovich says.

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

A.I.’s security problems. Japanese tech conglomerate Fujitsu and Israel's Ben-Gurion University signed a three-year partnership to create a research center focusing on countering security threats to A.I. and machine-learning systems. “Hostile entities use increasingly sophisticated techniques to threaten critical infrastructure and systems by stealing and leaking confidential information contained in AI datasets,” the groups said. 

Here come the delivery robots. DoorDash revealed details about its DoorDash Labs research unit that is intended to help the online-delivery company experiment with robotics and related automation technologies. In one experiment dubbed the “hub-to-hub model,” DoorDash said it “would aggregate orders in an area with high merchant density, like a shopping mall or DashMart, and have a robot ferry the order to a consumer hub–thus moving the Dasher pickup location closer to the delivery point.”

A.I. meets healthcare. The University of Pittsburgh Medical Center’s UPMC Presbyterian hospital is using machine-learning software to monitor and track the spread of infections at the facility, according to a Wall Street Journal report. The technology relies on simpler machine-learning techniques rather than neural networks, the cutting-edge software designed to loosely mimic how the human brain learns. A Carnegie Mellon professor involved with the system explained that because it doesn’t use neural networks, it’s easier for healthcare experts to understand how it makes its decisions, the report said.

The problem of predicting crimes. A startup called Voyager Labs has created software that analyzes social media postings on Facebook and Instagram to help law enforcement predict if certain people have committed or are planning to commit crimes, according to a critical report by The Guardian. The report probes the startup’s “ethically questionable strategies to access user information, including enabling police to use fake personas to gain access to groups or private social media profiles,” among other concerns. A Voyager spokesperson told the publication that the company will “follow the laws of all the countries in which we do business.”

EYE ON A.I. TALENT

Microsoft reorganized its cloud and A.I. business unit, making Azure executive vice president Jason Zander the head of a new team, called Strategic Missions and Technologies, according to a report by tech publication ZDNet. Zander will report to Microsoft CEO Satya Nadella. 

Guard Dog Solutions hired Alexander Morrise to be the security firm’s chief data scientist. Morrise was previously the chief technology officer and cofounder of hospitality firm Stay Open.

EYE ON A.I. RESEARCH

Sound the language model alarms. A group of researchers published a policy document on behalf of the Association for Computational Linguistics voicing concerns about the amount of energy required to train large language models, which have grown in popularity in recent years due to their ability to generate more realistic text.

The researchers, who hail from universities including Stanford and Carnegie Mellon, and companies like Amazon and Facebook, said their report is intended to identify ways to “mitigate” some of the environmental concerns that occur with training large language models.

From the paper:

To address these concerns, we recommend putting more thought into the conditions in which an expensive experiment is required, by increasing the alignment between experiments and research hypotheses. Our goal is for both authors and reviewers to justify the link between the experiments run (and those that were not run) and the research questions in the relevant paper. We would like to encourage researchers to think about whether certain experiments are necessary or not, which will then lead to lower energy cost, to a more inclusive environment, and to increased scientific rigour.

FORTUNE ON A.I.

Tips to build a diverse A.I. team—By Jonathan Vanian 

Apple presses the gas on its self-driving car plans—By Jacob Carpenter

Walmart is getting closer to delivering cough drops 50 miles away by drone—By Jessica Mathews

Medical professionals consider A.I. to address chronic health conditions in the midst of COVID-19—By Kylie Logan

BRAIN FOOD

The A.I. Criterion Collection. Acclaimed filmmaker John Carpenter dislikes superhero movies, while director Alex Cox thinks that “they’re fun.” The directors’ opinions on the superhero genre were generated by an A.I.-powered video game called AI Dungeon, which produces human-like text when users submit prompts.

Adi Robertson, a journalist for the tech publication The Verge, shared an amusing anecdote on Twitter involving AI Dungeon. Robertson created a “scene” in the game that prompted the app—powered by the GPT-3 language model—to generate dialogue mimicking what it believed certain filmmakers think about the superhero genre. The results were predictably silly.

Here’s the A.I.-version of Cox expanding his thoughts on the genre: “They’re like spectacles. You take something as boring as a superpowered, invulnerable man and then you add some fantastical elements to the mix and you get…a story.”

“If you want to call it a story, that is,” the A.I. director added. “I don’t think movies like this qualify as stories.”

