
Can an A.I. algorithm help end unfair lending? This company says yes

October 20, 2020, 1:00 PM UTC

Jenny Vipperman, the chief lending officer at VyStar Credit Union, among the largest U.S. credit unions, wanted to lend more money to minorities and other disadvantaged groups that historically have struggled to obtain credit. But finding a way to lend more inclusively without taking on potentially dangerous amounts of risk was difficult using standard credit scoring and the tried-and-true lending rules of thumb that VyStar had been relying on.

“We had been approaching lending as an art, not a science,” she says. “But it is 2020 now, and we have access to much better data. We should be able to leverage that data to make more scientific decisions.”

She heard about a Los Angeles–based startup called Zest AI that was using artificial intelligence to help banks and credit unions lend more inclusively and decided to give it a try. But she wasn’t expecting what happened after Zest trained its A.I. system on several years’ worth of VyStar’s lending records: The A.I. figured out how to increase VyStar’s approvals for credit cards by 22% while keeping VyStar’s risk constant.

“That is thousands of people who otherwise would not have had access to a credit card,” Vipperman says.

Results like this are helping Zest build a growing stable of customers among financial institutions both large and small. The company says that on average it can increase loan approvals by 20% with no additional risk, and that it can help banks cut charge-offs—debts that cannot be collected—by 50%.

Today the company announced that it has received $15 million in additional funding from Insight Partners, a New York venture capital firm. Zest’s valuation from the financing was not disclosed. The latest round brings the amount the company has raised since its founding in 2009 to more than $230 million. As part of the fundraising, Lonne Jaffe, an Insight managing director, will be joining Zest’s board.

Zest’s previous backers have included Baidu, the Chinese search giant, as well as venture capital firms including Lightspeed Venture Partners, Matrix Partners, and Upfront Ventures. It also received $150 million in “venture debt” from investment management firm Fortress.

Deven Parekh, Insight’s managing partner, said the venture firm, which invests only in fast-growing software businesses aimed at serving large organizations, was drawn to Zest’s ability to help banks improve financial equality through A.I. and the way it could help banks manage A.I. systems over time.

This “model management” function—which involves not just creating an A.I. model but also running it on an ongoing basis, monitoring it to make sure the data it is being fed is not deviating substantially from the data it was trained on, and periodically retraining it—has become increasingly important to companies as the number of A.I. systems they use has begun to proliferate.
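One common way to monitor for that kind of deviation—a hypothetical sketch, not anything Zest has disclosed about its own software—is the population stability index (PSI), which compares the distribution a model input had at training time with its live distribution:

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training-time distribution ('expected') with
    its live distribution ('actual') over equal-width buckets."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def distribution(values):
        # Clamp out-of-range live values into the first/last bucket.
        counts = Counter(min(max(int((v - lo) / width), 0), bins - 1)
                         for v in values)
        # Small floor keeps empty buckets from producing log(0).
        return [max(counts.get(b, 0) / len(values), 1e-6)
                for b in range(bins)]
    e, a = distribution(expected), distribution(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i % 100 for i in range(1000)]  # stand-in for training-time values
live = [v + 50 for v in train]          # live data that has shifted upward
drift = population_stability_index(train, live)
```

A PSI near zero means the live data still looks like the training data; values above roughly 0.25 are conventionally read as a signal to investigate or retrain.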

Zest was founded as ZestCash by Douglas Merrill, a former chief information officer at Google. It began life as a financial technology company that would underwrite short-term loans directly to consumers. But over time the company switched to selling fairer credit modeling software to other financial institutions and rebranded itself as Zest Finance. In October last year, Merrill departed the company, and it rebranded once again, to Zest AI.

Mike de Vere, who became Zest’s chief executive in December, says that the company is now solely focused on selling software to banks and credit unions to help them create and manage A.I. lending models. The company makes money by charging lenders a subscription to use its software.

The company is one of several startups using A.I. to improve lending; others include Upstart, and some banks are building similar systems in-house. Zest has found traction with some big lenders, including Turkey’s Akbank, and has done work for Discover and France’s BNP Paribas.

One problem with using A.I. software, particularly in a highly regulated area like lending, is that some powerful A.I. models are also opaque. It can be difficult even for the data scientists who helped create them to understand exactly how they have arrived at any particular decision.

De Vere says Zest’s A.I. systems are far more transparent. “We have broken the black box and fully explain the machine-learning model,” he says. Zest has gone through regulatory audits of its systems, he says, including an audit by the U.S. Treasury Department’s Office of the Comptroller of the Currency of a model it helped build for a large U.S. private bank, and has been able to satisfy regulators that the correct reasons are being given for any adverse action, such as a credit denial.

Vipperman says this transparency was critical to VyStar’s becoming comfortable with handing lending decisions to Zest’s models. “We have the ability to review those factors it is weighting most heavily in its decisions and make sure it is not learning something that we don’t want it to be learning,” she says.

Zest’s fairness and de-biasing system is based on a type of machine-learning technique called a generative adversarial network, or GAN, the same technology that makes deepfakes—highly believable photos and videos created by A.I. software—possible. GANs work by yoking together two neural networks, a kind of machine-learning software loosely based on how the human brain works: one network generates a model, and the second acts as a “critic” that pushes the first to improve.

In Zest’s case, the first network creates a lending model without having access to data about the applicants’ race or any information, such as a postal code or a person’s name, that can often serve as a proxy for race. The second network does have access to the applicant’s race and computes how far away from perfect fairness the first network’s lending model is. It then feeds that difference back to the first model, prodding it to adjust how it weighs various pieces of data in order to create a fairer model.
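Zest has not published its model code, but the two-sided feedback structure described above can be illustrated with a deliberately simplified toy (every name, weight, and data point below is invented): a scoring model that never sees group membership, and a “critic” that does see it and measures how unequal the approval rates are, steering the model toward a fairer weighting of a race proxy.

```python
import random
random.seed(0)

# Invented toy data: income drives repayment ability, while zip_score
# correlates with group membership -- i.e., it acts as a proxy.
def make_applicant():
    group = random.random() < 0.5                      # protected attribute
    income = random.gauss(1.0 if group else 0.0, 1.0)
    zip_score = (1.0 if group else 0.0) + random.gauss(0.0, 0.3)
    return group, income, zip_score

DATA = [make_applicant() for _ in range(2000)]

def approval_rate_gap(w_income, w_zip, threshold=0.5):
    """Plays the 'critic' role: it alone sees the protected attribute and
    reports how unequally the scoring model treats the two groups.
    (Zest's actual critic is a neural network inside a GAN.)"""
    approved = {True: [], False: []}
    for group, income, zip_score in DATA:
        score = w_income * income + w_zip * zip_score
        approved[group].append(score > threshold)
    def rate(g):
        return sum(approved[g]) / len(approved[g])
    return abs(rate(True) - rate(False))

# The 'generator' side: hold the income weight fixed and search for the
# proxy weight that the critic scores as fairest.
candidates = [w / 10 for w in range(11)]               # 0.0 .. 1.0
best_w_zip = min(candidates, key=lambda w: approval_rate_gap(1.0, w))
```

In the real GAN setup both sides are neural networks trained jointly by gradient descent; here the critic is just an approval-rate-gap measurement and the generator a grid search, which keeps the feedback structure visible without the machinery.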

Sean Kamkar, Zest’s head of data science, says the company’s GAN-based system can almost always find a way to improve fairness while keeping risk levels constant by making small adjustments in how different pieces of data are weighed. And some of Zest’s customers are willing to go further, sacrificing tiny amounts of potential profit for huge gains in inclusivity and fairness. De Vere says one auto lender it worked with found that by giving up just $2 in profit, or 0.125% of a loan with an average profit of $1,600, it was able to increase the number of auto loans it was offering to Black customers by 4%.

De Vere says Zest’s technique for improving the fairness of a lending model is far superior to the existing industry standard in banking, which is a technique called “drop one.” In this method, a financial institution takes its lending model and then simply eliminates—or drops—one of the variables from consideration completely and sees how the model performs on both risk and fairness. It then repeats this in turn for each of the variables in the model. Kamkar compares this to “taking a hammer” to the model, and not surprisingly, most models perform much worse when one variable is eliminated.
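A toy version of a “drop one” audit—with invented weights and applicants, and model performance reduced to plain accuracy for brevity—looks like this:

```python
# Hypothetical linear lending model: weights and records are illustrative.
WEIGHTS = {"income": 0.8, "debt_ratio": -0.5, "zip_score": 0.3}

APPLICANTS = [
    # (features, actually_repaid)
    ({"income": 1.2, "debt_ratio": 0.2, "zip_score": 0.9}, True),
    ({"income": 0.3, "debt_ratio": 0.8, "zip_score": 0.1}, False),
    ({"income": 0.9, "debt_ratio": 0.4, "zip_score": 0.5}, True),
    ({"income": 0.2, "debt_ratio": 0.9, "zip_score": 0.7}, False),
]

def accuracy(dropped=None):
    """Score every applicant with one variable removed entirely,
    without refitting the remaining weights -- the 'drop one' method."""
    hits = 0
    for features, repaid in APPLICANTS:
        score = sum(w * features[name]
                    for name, w in WEIGHTS.items() if name != dropped)
        hits += (score > 0.4) == repaid
    return hits / len(APPLICANTS)

baseline = accuracy()
# How much accuracy is lost when each variable is dropped in turn.
impact = {name: baseline - accuracy(dropped=name) for name in WEIGHTS}
```

Because the remaining weights are never refit, removing an important variable such as income craters accuracy—the result Kamkar likens to “taking a hammer” to the model—while a bank can point to the unchanged accuracy without the proxy-like variable as evidence that nothing needs to change.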

The problem, De Vere says, is that banks will often cynically use “drop one” analysis as an excuse to avoid altering their lending practices to be more inclusive. Most regulators will allow the “drop one” analysis as evidence that the bank has a valid business rationale for sticking with the existing model. De Vere says he has been trying to persuade lawmakers and regulators to compel financial institutions to move beyond “drop one.”

This story has been corrected. A previous version erroneously stated that Zest had worked with Citigroup instead of Discover.