It only looks simple: the complex human decisions behind an “easy” A.I. use case

March 2, 2021, 4:35 PM UTC
Chanel's new Lipscanner iPhone app uses A.I. to match any color to one of the luxury brand's lipsticks and then allows the customer to virtually "try on" that shade.
Photo courtesy of Chanel

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

As we’ve noted in this newsletter before, sometimes even simple applications of A.I. can be transformative. But it is worth remembering that simple things can take a lot of thought, planning, and skill to do right.

A case in point: The luxury fashion brand Chanel a few weeks ago debuted Lipscanner. It’s an iPhone app that allows a user to take a photo of any color and find the lipstick shade from Chanel’s collection that most closely matches it. Then the user can “try that lipstick on” virtually, using augmented reality on their camera phone.
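
At its core, the matching step can be framed as a nearest-neighbor search: sample a color from the photo, then find the catalogue shade closest to it. The sketch below illustrates that framing with invented shade names and values; it is not Chanel’s implementation, which also has to cope with lighting, texture, and finish.

```python
import math

# Hypothetical shade catalogue: the names and RGB values are invented for illustration.
SHADES = {
    "Rouge 99": (165, 28, 48),
    "Corail 43": (233, 105, 90),
    "Nude 174": (196, 132, 123),
}

def closest_shade(sampled_rgb):
    """Return the catalogue shade nearest to the color sampled from the user's photo.

    A production system would match in a perceptual color space such as CIELAB and
    correct for lighting first; plain RGB distance keeps the sketch short."""
    return min(SHADES, key=lambda name: math.dist(SHADES[name], sampled_rgb))

print(closest_shade((210, 60, 70)))  # picks whichever made-up shade is closest
```

The hard part, as it turns out, is everything around this step.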

Sounds simple, right? But as Cedric Begon, the director of the Connected Experience Innovation Lab at Chanel, which built Lipscanner, says, it isn’t. “This wasn’t easy at all,” he tells me.

The team that built the product worked on it for more than 18 months. It required tens of thousands of images—many of them annotated by the company’s own fashion experts—to train the algorithm to color match. Luckily, the luxury brand has a huge library of its own photographic imagery from past marketing campaigns and product development tests, allowing it to sidestep some of the ethical problems that have arisen when companies have tried to train computer vision A.I. software to work with human faces. In many cases, those companies have resorted to scraping training data from the Internet, which has created problems around what’s known as “non-consensual images.”

But this data was not enough to ensure that Lipscanner would work equally well for the lips of all its potential customers. Begon says the company was well aware that its database contained far more images of white faces and lips than of dark-skinned ones. So, to ensure its algorithm would work equally well for people of all races, the company turned to synthetic data. In other words, it used a different A.I. system to generate new, fictional images with the characteristics it wanted represented in the dataset. This was particularly important, Begon says, for training the algorithm to deal with all the ways in which lighting can alter the perception of color and texture.

That’s an approach more companies should consider. It won’t solve the problem of bias in every case—for one thing, you need to make sure the fictional data you generate is a good representation of the real world. But it is a way to overcome some of the problems that arise when a dataset doesn’t contain enough examples of, say, people of color, women in certain roles, or any other demographic that might be undersampled.
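
For a sense of what that rebalancing amounts to in practice, here is a minimal sketch of the idea: count how each group is represented, then generate synthetic examples for underrepresented groups until they reach parity. The `generate_synthetic_image` function and the group labels are hypothetical stand-ins, not Chanel’s actual pipeline, which the company has not described in that level of detail.

```python
import random
from collections import Counter

def generate_synthetic_image(group):
    """Hypothetical stand-in for a generative model (e.g. a GAN or a graphics-based
    renderer) that produces a labeled synthetic image for the requested group."""
    return {"group": group, "pixels": f"synthetic-{random.random():.6f}"}

def rebalance(dataset, group_key="group"):
    """Top up every underrepresented group with synthetic examples until all
    groups match the size of the largest one."""
    counts = Counter(example[group_key] for example in dataset)
    target = max(counts.values())
    synthetic = []
    for group, count in counts.items():
        synthetic.extend(generate_synthetic_image(group) for _ in range(target - count))
    return dataset + synthetic

# Toy dataset skewed toward one group, as described above.
data = [{"group": "light", "pixels": "…"}] * 90 + [{"group": "dark", "pixels": "…"}] * 10
balanced = rebalance(data)
print(Counter(example["group"] for example in balanced))  # both groups now equal
```

The hard part in practice is the step this sketch glosses over: making sure the generator produces images that are actually faithful to the group being topped up.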

Getting it right, though, requires having the right team in place, and Begon emphasized that creating Lipscanner was too important an initiative to leave entirely to Chanel's team of machine learning experts, talented as they are. "This cross-functional way of working is in the DNA of the house," Begon says of Chanel. "It's a cultural strength of the company." He said the team behind the A.I.-enabled tool included designers and product managers, data scientists and machine learning engineers, IT experts, marketing experts, and lawyers. "This product goes to the heart of the nature of the relationship between the customer and the product and that requires a sophisticated integration of many points of view," he says.

You’ll notice Begon mentioned lawyers there. One of the biggest issues with a product like Lipscanner is not only the data used to train the A.I. system and where it comes from, but also what happens to the data the system handles when making the lipstick suggestions and virtual try-ons: the images users capture on their phones. That can be thorny legal terrain, especially in Europe, where a company might easily run afoul of the European Union’s stringent data protection law (the General Data Protection Regulation, or GDPR).

Begon says Chanel decided to take a radical approach to dealing with that issue: It doesn’t handle data generated by the user at all. Instead, all of the A.I. “inference,” the actual running of the A.I. model to find the right color match, takes place inside the app on the user’s phone, with no images or camera data ever being transmitted to Chanel. “We don’t collect a single piece of personal data,” Begon says. (This is also where all that synthetic data helps, he says. Chanel doesn’t need more images of real customers to further refine the system.) This choice made designing the software more difficult from a technical standpoint. But it made it much easier from a data privacy and legal perspective.
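
To make that design choice concrete: one common route to fully on-device inference, sketched below, is to convert a trained network into Apple's Core ML format so it ships inside the app and runs on the phone's own hardware. This is an illustration only (Chanel has not said which toolchain it uses), the MobileNet stand-in for its color-matching model is an assumption, and the code assumes recent versions of torch, torchvision, and coremltools.

```python
import torch
import torchvision
import coremltools as ct

# A small vision network standing in for the real color-matching model (assumption).
model = torchvision.models.mobilenet_v3_small(weights=None).eval()

# Core ML's converter works from a traced (or scripted) PyTorch model.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert to a Core ML package so all inference runs inside the app on the phone;
# no image or camera data ever needs to be sent to a server.
mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=(1, 3, 224, 224))],
    convert_to="mlprogram",
)
mlmodel.save("ColorMatcher.mlpackage")
```

The trade-off Begon describes shows up here too: the model has to be small and efficient enough to run on a phone, which constrains the architectures the team can use.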

See, nothing simple about it. And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

A.I. IN THE NEWS

Google's damage control over its A.I. research policies continues. In the wake of the firing of two prominent A.I. ethics researchers who had balked at Google's efforts to quash a paper they had written raising ethical concerns about natural language models that Google itself helped pioneer, the company has run into further complaints from researchers about censorship. According to an email leaked to Reuters, researcher Nicholas Carlini complained to colleagues about Google lawyers making "deeply insidious" edits to research papers to tone down language that could be viewed as critical of technology the company makes. The issue was apparently also raised at a meeting of Google A.I. researchers last week meant to help resolve tensions between the scientists and company managers over the firing of Timnit Gebru and Margaret Mitchell and the reorganization of responsible A.I. research across the company. Jeff Dean, Google’s senior vice president overseeing the A.I. research division, told staff that the policy surrounding these "sensitive topics" reviews by company lawyers and public relations executives "is and was confusing" and that he had tasked Zoubin Ghahramani, a well-known University of Cambridge machine learning professor who joined Google from Uber in September, with clarifying the policies.

Virginia votes to ban police use of facial recognition technology. The prohibition on police departments using facial recognition technology unless specifically authorized by the state legislature comes after The Virginian-Pilot newspaper exposed how police in Norfolk were using the controversial facial recognition app created by Clearview AI. It turned out members of the city's gang crime unit had downloaded and used the app without authorization from the mayor or local city council. You can read more about it here.

Microsoft is bringing automatic sentence completion to Microsoft Word. Beginning this month, the ubiquitous word processor will be able to suggest completions for entire sentences as users type, matching a feature already available in Google's competing Google Docs product, according to a report in the tech publication Neowin. The feature had been available to some Microsoft Word beta testers since September. The autocompletion technology is driven by new A.I. large language models, which are trained to predict masked words in sentences. But Microsoft, which recently announced that it had licensed OpenAI's GPT-3 for its own commercial use, has not said whether GPT-3 powers the new feature or some other, perhaps slightly smaller, large language model.
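
To see what "trained to predict masked words" looks like in practice, the sketch below uses a small open-source masked language model via the Hugging Face transformers library; it illustrates the technique, not whatever model Microsoft has actually deployed in Word.

```python
from transformers import pipeline

# A small open-source masked language model; Word's production model is not public.
fill = pipeline("fill-mask", model="distilroberta-base")

# The model ranks candidate words for the masked position based on the surrounding context.
for candidate in fill("The quarterly report is due next <mask>.")[:3]:
    print(f"{candidate['token_str'].strip():>10}  score={candidate['score']:.3f}")
```

Sentence completion extends the same idea, predicting the next several words in sequence rather than a single masked one.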

James Bond is using A.I. now, natch. The director of Britain's signals intelligence agency, GCHQ, has said that A.I. could have a profound effect on the way the organization operates. Jeremy Fleming, in a series of public statements in the British press, said that machine learning could detect patterns in vast troves of data that human analysts might miss, enabling the agency to uncover foreign hackers lurking in networks or patterns that might signal an imminent terrorist attack. It also might be used to uncover disinformation campaigns and those trafficking in child pornography. But he also acknowledged that use of the technology came with important ethical concerns and considerations. The Guardian said that Fleming's rare public engagement came as the agency also published a paper defending its use of machine learning against critics who charged that the agency collects far too much information on average British citizens. The spooks, though, seem to think the public could be more accepting of this mass surveillance if it is algorithms trawling through this data for patterns, rather than humans who might have access to their personal data.

Deepfake Tom Cruise takes the Internet by storm... Last week brought a flurry of news around slightly creepy uses of computer vision A.I. algorithms. First, there was a trio of superbly executed and mysterious deepfakes—highly plausible fake videos in which one person's head is swapped onto another person's body, created with an A.I. technique—of the actor Tom Cruise that went viral after being uploaded to TikTok. I wrote about how video forensics experts were able to tell they were deepfakes and why Cruise is such a popular target for these fake videos here, and I broke the news that Belgian visual effects maestro Chris Ume actually created the deepfakes here. While Ume has sought to downplay any concerns that such amazing visual effects are a game-changer for political disinformation, other security experts are indeed concerned.

...while apps powered by similar A.I. techniques that can "reanimate the dead" go viral too. Last week also saw the debut of a couple of apps that allow users to take any still photograph and animate the face of the person who appears in it. These are not exactly deepfakes—although they do use similar machine learning methods. These apps went viral too. One of them, Deep Nostalgia, marketed itself specifically to people who wanted to animate photographs of deceased relatives as a way of essentially resurrecting the dead. I was not the only person to find this use case somewhat creepy. Another, called tokkingheads, created by startup Rosebud AI, racked up hundreds of downloads per minute, according to the company.

EYE ON A.I. TALENT

Reprise, a New York-based performance marketing agency that is part of IPG, has appointed Vincent Spruyt as global chief AI officer, according to a company press release. He will lead a newly formed artificial intelligence (AI) group as well as the agency’s established team of analytics experts, the company said. Spruyt was previously chief innovation officer at A.I. startup Sentiance.

Science Applications International Corp. (SAIC), the Reston, Virginia-based defense contractor and Fortune 500 company, has hired Michael Scruggs as senior vice president of artificial intelligence, according to a company announcement in Virginia Business. Scruggs previously worked on A.I., cloud computing, and data science at IBM.

Intersect360 Research, a Sunnyvale, California-based technology market intelligence and consulting firm that specializes in high performance computing and A.I., has named Dan Olds as its new chief research officer, according to a company release published on HPCwire.com. Olds had previously been a partner and analyst at the research firm OrionX.

EYE ON A.I. RESEARCH

A breakthrough in getting A.I. to master Atari games may have plenty of real-world applications too. Machine learning scientists at San Francisco A.I. research company OpenAI announced that they had created an algorithm that could use reinforcement learning to achieve superhuman results on all 55 classic Atari 2600 games. The system also outperformed previous reinforcement learning algorithms, making especially big leaps in performance on some tough games, such as Montezuma's Revenge and Pitfall, that had continued to stymie most A.I. systems years after DeepMind first demonstrated that such algorithms could achieve superhuman results on games. The learning strategy the software uses to master these difficult games is worth noting here for its potential real-world applications, particularly in robotics and in cases where a simulator can be used to teach the A.I. agent. That is probably the reason the research paper managed to land a coveted publication spot in the scientific journal Nature. (A free version can be found here on the research repository arxiv.org.)

The algorithm, which OpenAI calls Go-Explore, tackles a dilemma in reinforcement learning: an A.I. agent has to decide how much to seek out new ways of achieving its goal, in the hope of finding the optimal one, and how much to keep following a path that initially seemed promising but might not ultimately be the best. This is particularly problematic for most reinforcement learning algorithms because they lack long-term memory and recent rewards tend to obliterate previous successes, so they can easily get distracted exploring new ways of doing something and completely disregard earlier decisions that showed promise.

The OpenAI team solved this by essentially giving the A.I. agent a kind of memory. It builds an archive of all the previous actions and places it has explored. It then selects one of these states to return to and explore further, using rules of thumb about which states are likely to be most promising, such as returning to places where the game score was highest, or drawing on expert knowledge of the game. For the hardest Atari games, which generally offer very few interim rewards, this method was also combined with what the researchers called a "robustification phase," in which the algorithm learns specific trajectories through the environment that have gotten it closest to its goal in the past.
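
Below is a highly simplified sketch of that archive-and-return loop on a toy environment (a corridor with a reward only at the far end). The cell representation, the selection heuristic, and the environment itself are invented for illustration; the real Go-Explore uses downscaled game frames as cells and far more careful selection rules.

```python
import random

class Corridor:
    """Toy hard-exploration environment: the only reward sits at the far right end."""

    actions = ("left", "right")

    def __init__(self, length=50):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def restore(self, cell):
        # Go-Explore assumes the agent can return to a saved state, e.g. in a simulator.
        self.pos = cell
        return self.pos

    def cell(self, state):
        # A "cell" is a compact representation of a state; here, simply the position.
        return state

    def step(self, action):
        self.pos += 1 if action == "right" else -1
        self.pos = max(0, min(self.length, self.pos))
        reward = 1.0 if self.pos == self.length else 0.0
        return self.pos, reward, reward > 0


def go_explore(env, iterations=200, rollout_len=10):
    """Simplified exploration phase: archive every cell reached, keep returning to
    archived cells, and explore onward from them with random actions."""
    archive = {}  # cell -> (best score so far, trajectory that achieved it)
    start = env.reset()
    archive[env.cell(start)] = (0.0, [])

    for _ in range(iterations):
        # Pick an archived cell to return to. A crude "further right looks more
        # promising" heuristic stands in for the paper's selection rules.
        cell = max(archive, key=lambda c: c + random.random())
        score, trajectory = archive[cell]
        env.restore(cell)

        for _ in range(rollout_len):
            action = random.choice(env.actions)
            state, reward, done = env.step(action)
            score += reward
            trajectory = trajectory + [action]
            new_cell = env.cell(state)
            # Archive any new cell, or a better-scoring route to a known one.
            if new_cell not in archive or score > archive[new_cell][0]:
                archive[new_cell] = (score, trajectory)
            if done:
                break
    return archive


archive = go_explore(Corridor())
print(max(score for score, _ in archive.values()))  # prints 1.0 once the end is found
```

The point the toy preserves is the memory: because every cell reached is archived and can be returned to, a promising discovery is never forgotten, no matter what the agent explores next.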

FORTUNE ON A.I.

Startup says A.I. helped it find treatment for rare lung disease in record time—by Jeremy Kahn

This A.I. company has discovered how to unlock creativity—even with employees stuck at home—by S. Mitra Kalita

A.I. drone maker Skydio takes off—by Aaron Pressman

Here’s who created those viral Tom Cruise deepfake videos—by Jeremy Kahn

BRAIN FOOD

I got a press release the other day from the Vatican. It seems it has been a year since the Pontifical Academy for Life, an official think tank of the Catholic Church dedicated to Catholic thinking and theology around bioethics, joined with Microsoft, IBM, the Italian Ministry for Technological Innovation, and the United Nations' Food and Agriculture Organization to issue the Rome Call for Artificial Intelligence. I covered the Call when it was originally promulgated and endorsed by Pope Francis. It consists of six broad principles designed to ensure A.I. is developed for the benefit of humanity and the environment in which people live. But at the time, I wondered how much impact these broad principles would really have.

Sure enough, in the one-year-update press release, the Pontifical Academy for Life noted that "the family of signatories has grown" (although it wasn't clear who exactly had joined; the only new name on the Academy's Rome Call website is Sapienza University of Rome) and that "a channel of dialogue with monotheistic religions is open, in order to converge on a common vision of technology at the service of all humanity." But again, it wasn't clear what that channel was or what exactly was happening.

There are a lot of these broad statements of A.I. principles out there now. Companies generally love to sign up to them (in fact, I was surprised that the Pontifical Academy hadn't managed to rope in more tech companies over the past 12 months). And maybe that's because they mostly amount to ethics washing. The companies get to tout their righteousness and the mere act of signing the compacts makes it seem like they are doing something. But, in reality, they can just go about their business as before. They haven't actually committed to anything concrete.

Without firm goals and commitments and ways of at least naming and shaming those who fail to follow these guidelines, A.I. ethics charters such as this one can wind up being worse than useless. They can actually impede real progress on making sure A.I. is used responsibly. 

