A.I. and tackling the risk of “digital redlining”

February 11, 2020, 12:28 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter on artificial intelligence and machine learning. To get it delivered weekly to your in-box, sign up here.

Last week, a Dutch court ordered the government in the Netherlands to stop using a machine-learning algorithm for detecting welfare fraud, citing human rights violations.

The system, called System Risk Indicator (SyRI) in English, was being used by four Dutch cities to spot individuals whose benefits applications should receive extra scrutiny. It gathered information from 17 different government data sources, including tax records, vehicle registrations and land registries.

But the cities using SyRI did not run every application through the system—they only deployed it in poor neighborhoods where many residents are immigrants, often from Muslim countries.

The court ruled that SyRI violated the “right to private life” enshrined in European human rights law. The application of SyRI, it said, could lead to discrimination against individuals based on their socio-economic status, ethnicity or religion. It also said SyRI did not seem consistent with the requirements of Europe’s stringent data privacy law, GDPR.

Although the judgment only came from a district court and is subject to possible appeal, the decision is likely to set an important precedent within the European Union—and it ought to reverberate elsewhere too, as societies around the world come to grips with how to apply fairness in a world of A.I.-driven risk models.

Nowhere is this more relevant than in the insurance sector, which is increasingly turning to machine-learning algorithms to improve underwriting. Last week, I had a fascinating conversation with Daniel Schreiber, the co-founder and CEO of the New York-based insurance startup Lemonade. He shares the concern, voiced by some consumer and privacy rights advocates, that the increased use of machine-learning algorithms, if mishandled, could lead to “digital redlining.”

But done right—and with the right measure of fairness—he thinks machine learning has the potential to increase access to financial services and lower costs.

To ensure that an A.I.-led underwriting process is fair, Schreiber promotes the use of a “uniform loss ratio.” If a company is engaging in fair underwriting practices, its loss ratio, or the amount it pays out in claims divided by the amount it collects in premiums, should be constant across race, gender, sexual orientation, religion and ethnicity.
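To make the idea concrete, here is a minimal sketch of that check in Python. The policy data and column names are made up for illustration (nothing below comes from Lemonade): group the book of business by a demographic attribute, compute each group's loss ratio, and see whether the ratios are roughly equal.

```python
# A minimal sketch of the uniform-loss-ratio check Schreiber describes.
# The data and column names are invented for illustration; a real audit
# would run over an insurer's actual claims and premium records.
import pandas as pd

policies = pd.DataFrame({
    "group":       ["A", "A", "B", "B", "C", "C"],   # demographic label, used only for auditing
    "premiums":    [1200.0, 900.0, 1100.0, 950.0, 1000.0, 1050.0],
    "claims_paid": [780.0, 585.0, 715.0, 617.5, 650.0, 682.5],
})

totals = policies.groupby("group")[["claims_paid", "premiums"]].sum()
totals["loss_ratio"] = totals["claims_paid"] / totals["premiums"]
print(totals["loss_ratio"])
# If underwriting is fair in Schreiber's sense, these ratios should come out
# roughly equal; a group with a markedly lower ratio is, on average, being
# charged more than its actual risk warrants.
```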

He admits that this means it is entirely possible that some categories of people—Schreiber, who is Jewish, uses the example of Jews—could be charged more on average for property insurance, because, for instance, their religious practice involves lighting candles in the home for certain holidays, and lighting candles might be correlated with a higher risk of house fire.

But, he says, no individual should be charged more because he or she is Jewish. It might turn out that a particular customer isn’t religious and doesn’t light candles. That’s why it is important not to ask people about their religious affiliation—that would be discriminatory. The key is for the insurance company to gather data that actually equates to risk: Do you light candles in your home?

For this approach to work properly, insurance companies will need to gather more data about customers, not less. Right now, Schreiber admits, the regulatory winds seem to be blowing in the opposite direction (especially in Europe, as the SyRI case shows). Most insurance regulators don’t understand machine learning. “That creates a fear of the unknown,” he says. What’s more, scandals such as the Cambridge Analytica affair make people reluctant to share more data.

But Schreiber says customers might be willing to share more information if the insurers were transparent about why they needed to collect this data, how it was being used, and that it might result in customers paying a lower premium.

I wasn’t entirely convinced by Schreiber’s argument. If insurers become that much better at pricing risk, won’t many more people simply become uninsurable? (This is what happens in health insurance if companies are allowed to cherry-pick customers, excluding those with pre-existing conditions.)

Also, won’t people who live in impoverished neighborhoods still be forced to pay more for coverage, even though they may have little choice over where they can afford to live? Many poorer areas have higher risk of crime and fire, leading to higher home insurance premiums. (In fact, U.S. law prohibits policies that have a “disparate impact” on a protected class of people, unless a company can prove a legitimate business necessity for the policy.)

Schreiber told me that governments could mandate that insurers charge those who live in wealthy areas, or who have high household incomes, slightly more in premiums, and then use the excess to subsidize the premiums of those who live in poorer neighborhoods. But, he said, this was a discussion separate from the one about whether the underwriting model itself is fair.

What do you think? Feel free to write in and let us know your views.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com

A.I. in the news

More and more people are worried about being unfairly profiled by predictive algorithms. In addition to the SyRI example mentioned above, The New York Times examined governments' use of predictive algorithms in the U.S. and Europe, where these systems are increasingly being used to advise on everything from parole and bail decisions to child services' selection of cases. It found growing alarm among community and civil rights groups. In many cases, those whose lives were impacted had no idea they had been assessed by a computer-driven statistical model. “You mean to tell me I’m dealing with all this because of a computer?” one Philadelphia parolee asked when a reporter told him for the first time that the conditions of his release were based on a machine-learning algorithm's judgment that he was "high risk."

Twitter bans deepfakes. The social media company updated its policies to prohibit users from posting "synthetic or manipulated media that are likely to cause harm." The company is the latest to change its policies in response to concern over deepfakes, videos that are either manipulated using A.I. algorithms or entirely created by them. Twitter's policy also applies to still images and audio that have been manipulated or fabricated using a variety of other techniques, including over-dubbing. While there has been little evidence so far that deepfakes have been used for political disinformation, many security experts are concerned about their potential abuse, especially in the run-up to the 2020 U.S. presidential election.

Arm debuts two new A.I. chips. Arm, the U.K.-based semiconductor company now owned by Japan's SoftBank Group, unveiled two new computer chips designed to run A.I. applications. The new chips, called the Cortex-M55 and the Ethos-U55 NPU, extend machine-learning capabilities to small, relatively inexpensive electronic components, the company says, enabling applications in everything from healthcare to agriculture. Arm's new chips, which can be used separately or yoked together for better speed and computing power, are among a growing number of specialized components designed for "A.I. on the edge," meaning machine learning performed on a device itself without the need to communicate with a cloud-based datacenter.

Barnes & Noble, Penguin Random House cancel insensitive A.I.-generated "diversity editions" covers.  The bookseller and the publishing house canned a joint project they'd planned for Black History Month that would publish classic novels with new covers in which the main characters were depicted as non-white. The 12 books were selected for the project using an A.I. algorithm that analyzed the text of 100 famous novels, searching for cases in which the authors had not identified the race of the primary character. But critics accused the two companies of engaging in "literary blackface" and perpetuating the exclusion of diverse authors from the canon.

Facial recognition comes to schools. A New York school district has become the first in the country to install facial recognition technology, and many others are considering doing so too, The New York Times reports. The schools say the technology will help them monitor who is on school property for student safety. But civil rights groups and some parents are not happy about the development. “Subjecting 5-year-olds to this technology will not make anyone safer, and we can’t allow invasive surveillance to become the norm in our public spaces,” Stefanie Coyle, deputy director of the Education Policy Center for the New York Civil Liberties Union, told the paper.

Clearview continues to court controversy

Some law enforcement agencies in the U.S. and Canada are hailing the New York-based startup's facial recognition technology, telling The New York Times that Clearview's technology has made it easier for them to locate the victims of child sexual exploitation. But, the paper says, Clearview's handling of such sensitive images raises questions about how the startup is safeguarding the information as well as concerns about how accurate its technology really is, since the consequences of a false match are particularly grave. Clearview has already been criticized for harvesting images from social media sites to train its A.I., sometimes in violation of those sites' terms and conditions, and also for potentially misrepresenting how accurate its technology is and which law enforcement agencies are using its app. Google, YouTube, LinkedIn, Twitter, Venmo and Facebook have all sent Clearview cease-and-desist letters, threatening to sue the firm if it doesn't stop using images gathered from their platforms.

Eye on A.I. talent

  • Cheryl Ingstad has been sworn in as the U.S. Department of Energy's first director of the Artificial Intelligence & Technology Office (AITO). The office was established in September 2019 to be the central coordinating body for the development and application of A.I. within the department. Previously, Ingstad led A.I. and machine learning research and development at the 3M Company. Before that, she held leadership roles within the Defense Intelligence Agency's Information Operations Branch.
  • Okta Inc., a San Francisco-based company that specializes in secure identification and access control systems, has hired Craig Weissman as Chief Architect. Previously, Weissman was the chief technology officer at Salesforce and co-founded Duetto, which provides revenue management software for the hospitality industry.

Eye on A.I. research

Language models keep getting bigger—but to exactly what end?

Microsoft has unveiled the largest pre-trained language generation model to date. Its Turing Natural Language Generation model (T-NLG for short), announced this week, has 17 billion parameters. This means it can encode the relationship between words and sentences over much longer stretches of text than previous models.

It is more than twice as big as the next largest language model, Nvidia's MegatronLM, which has 8.3 billion parameters, and eleven times larger than OpenAI's GPT-2, which, with its 1.5 billion parameters, helped spawn the race for ultra-massive language models. Microsoft says its new heavyweight champion is better at answering questions—such as search engine queries—succinctly and accurately. It says T-NLG can often do "zero-shot" question answering, since it was pre-trained on such a large amount of text that it may have encountered the correct answer to a question in multiple different sources during that training. And the company says T-NLG can do better abstractive summarization than previous language models.

All of these are important potential commercial uses of the technology. But, as I mentioned in the "Brain food" section of this newsletter two weeks ago, there's not much evidence that these ultra-massive language models actually "understand" anything the way a human does. Nor is it clear that, for all of its many more billions of parameters, T-NLG is that much better than GPT-2 or even Google's BERT, which has only about 340 million parameters (and was considered enormous at the time it was released in 2018). GPT-2 was already so big that a lot of people who want to use it are struggling to do so—it is breaking servers, according to Caleb Kaiser in Towards Data Science.

Which brings us to what the real point of T-NLG may be: One gets the distinct impression from Microsoft's publicity push around this new massive language model that it was created simply to demonstrate Microsoft's expertise in training something that big. (Doing so requires coordinating parallel training across many different processing chips.) In conjunction with T-NLG, the company unveiled a new open-source and free-to-use library of deep learning optimization tools called DeepSpeed. It includes a tool, called the Zero Redundancy Optimizer (or ZeRO for short), that the company used to train T-NLG and that it says can coordinate the training of models with up to 100 billion parameters.
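For the curious, here is a minimal sketch of what training with DeepSpeed and ZeRO looks like. It is not Microsoft's T-NLG setup: the tiny model, batch size, and config values are placeholders, and the exact initialize arguments can vary between DeepSpeed versions.

```python
# A minimal sketch of wrapping a PyTorch model with DeepSpeed so that ZeRO can
# partition optimizer state across workers. Not Microsoft's actual T-NLG recipe;
# the model and all config values here are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a much larger transformer

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},  # ZeRO: shard optimizer state across GPUs
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

# DeepSpeed returns an engine that handles parallelism and mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
# A training loop then calls engine.backward(loss) and engine.step()
# in place of the usual loss.backward() and optimizer.step().
```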

Fortune on A.I.

Startup uses A.I. to identify molecules that could fight coronavirus—by Jeremy Kahn

Click here to oust the board—Inside the A.I. startup that’s transforming activist investing—by Adrian Croft

What you need to know about new IBM CEO Arvind Krishna—by David Z. Morris

Patient or prisoner? Governments deploy surveillance tech to track coronavirus victims—by Eamon Barrett

Brain food

One of the more interesting uses of today's computer vision algorithms may be in the restoration and enhancement of archival and classic film footage.

Last week, Denis Shiryaev showed off what's possible. He used several publicly available, neural-network-based programs to transform one of cinema's most famous films—the Lumiere brothers' 1896 L’Arrivée d’un train en gare de La Ciotat (or in English, The Arrival of a Train at La Ciotat Station)—from its slightly blurry and flickering original (the Lumieres' film camera shot only about 15 frames per second) to an ultra-high-definition 4k, 60-frames-per-second version. The video, posted to YouTube, went viral. Shiryaev even added realistic sound effects to the originally silent film.

One journalist, Ars Technica's Timothy B. Lee, noted that commercially available machine-learning apps could also be used to colorize old film footage.

While the results are striking, and the technique suggests an interesting avenue to make classic films "come alive" for today's audiences, one has to be careful to distinguish between enhancement, which is what Shiryaev performed, and restoration. After all, what 1896 film-goers saw (and were reportedly terrorized by on first viewing) was not something with the sharpness and fluidity of 4k, 60 fps, but rather that slightly blurry and jerky camera-work.

The technique Shiryaev used, known as "upscaling," does not restore information missing from the original; rather, it invents new detail and slots it into the vastly expanded pixel space of modern ultra-high-definition video. Doing so can create strange visual artifacts, such as warped images or outlines that melt away.
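As a rough illustration, rather than Shiryaev's actual pipeline, a single digitized frame can be run through an off-the-shelf super-resolution network; the model file and frame filenames below are assumptions.

```python
# A minimal sketch of neural-network upscaling on one frame, assuming the
# opencv-contrib-python package and a pre-trained ESPCN model file downloaded
# separately. It only illustrates the idea that the network synthesizes
# plausible detail rather than recovering detail that was never captured.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")   # hypothetical path to a pre-trained 4x ESPCN model
sr.setModel("espcn", 4)       # tell OpenCV which architecture and scale factor to use

frame = cv2.imread("frame_0001.png")   # one digitized frame from the film (assumed file)
upscaled = sr.upsample(frame)          # 4x larger, with invented detail filled in
cv2.imwrite("frame_0001_4x.png", upscaled)
```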

It would be possible to use a different, but similar, machine-learning technique to actually restore old films and photographs—although here too, the algorithm is taking a best guess at what information is missing from the image, based on the closest surrounding pixels it can analyze. If an image is badly deteriorated, there is less certainty that the restoration produced by the A.I. will be accurate.

 
