
A.I. conquers bridge, one of the last games to fall to computer brains

April 5, 2022, 6:05 PM UTC

Chess, Go, shogi, and poker are among the classic games that have, over the past two decades, fallen to A.I.’s dominance. Now you can add another to the list: bridge. Last week, in a special tournament in Paris, an A.I. system roundly defeated eight human world champions at the card game—or, at least, an important phase of it—as my colleague Jonathan Vanian reported.

Games have served as important yardsticks for measuring A.I.’s progress because they mimic aspects of real-world problems that we would like A.I. to help us with, but they are contained environments. They provide a safe space in which A.I. researchers can experiment with new methods. They also tend to have clear quantitative metrics—points, wins and losses—that make it easy to assess progress.  

Bridge too contains elements that are important in the real world: Players have only partial information about what is in both their opponents’ and their partner’s hands. There is strategy. There can be deception. And the game is simultaneously adversarial and collaborative—involving two teams of two pitted against one another.

It is this last element that helped draw Jean-Baptiste Fantun and Veronique Ventos, the co-founders of French A.I. company Nukkai, to bridge. (It doesn’t hurt that they are both keen players themselves—Fantun was once ranked 15th in France—and met one another over the bridge table.)

The bridge-beating A.I. that Nukkai built, which it calls NooK, is different from the software that has in recent years bested humans at Go, shogi, and poker, as well as the complex video games Dota 2 and StarCraft II.

All of those achievements rested on deep learning—software composed of large neural networks, a type of A.I. that is meant to mimic in a very loose way the workings of neurons in the human brain. Those other game-beating milestones rested almost entirely on a type of training called reinforcement learning, where the A.I. learns from the experience of playing the game—often against itself—what the best strategies are. In some cases, these A.I. systems did not even know the rules of the game before they started playing—they had to puzzle those out as they played. In other cases, the software knew the rules, but was not given expert advice about how to play. Again, it had to puzzle that out for itself through the experience of playing.
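
To make that concrete, here is a minimal, purely illustrative sketch of learning from self-play, written in Python. It is not the code behind any of the systems mentioned above; it simply shows a tiny tabular agent teaching itself a toy game of Nim (take one or two stones on each turn, and whoever takes the last stone wins) by nudging its action-value estimates toward the final result of each self-played game.

```python
# A minimal, illustrative sketch of reinforcement learning via self-play.
# This is NOT the code behind AlphaGo, NooK, or any other system named above.
import random
from collections import defaultdict

PILE = 10          # starting number of stones in our toy game of Nim
ACTIONS = (1, 2)   # a move takes 1 or 2 stones; taking the last stone wins
ALPHA, EPSILON = 0.5, 0.1

Q = defaultdict(float)  # Q[(stones_left, action)] -> learned value estimate

def choose(stones):
    """Epsilon-greedy choice among the legal moves in the current position."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(20_000):            # the agent plays both sides against itself
    stones, history = PILE, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0                   # the side that made the last move won
    for state, action in reversed(history):
        # Monte Carlo-style update: nudge the estimate toward the final outcome.
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward           # the move before belonged to the opponent

# After enough self-play the agent tends toward the optimal strategy for this
# toy game: leave your opponent a multiple of three stones whenever you can.
print({s: max([a for a in ACTIONS if a <= s], key=lambda a: Q[(s, a)])
       for s in range(1, PILE + 1)})
```

Even in this tiny setting the key ingredients are visible: the agent is told only which moves are legal, never which moves are good, and its sole feedback is whether a line of play eventually won or lost.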

NooK is different. It is a hybrid A.I. system: it combines elements of deep learning with older A.I. techniques such as symbolic reasoning and inductive logic programming to achieve its championship performance.

Building this kind of system wasn’t easy—partly because it meant bringing together top researchers from each of these feuding A.I. “camps”—symbolic A.I. experts and deep learning scientists attend different conferences, publish in different journals, and generally distrust one another intellectually. “On the social networks, people are going at each other’s throats,” Fantun says of the sniping between symbolic A.I. proponents and deep learning purists. “Inside the company, it took some time to get people to trust one another and see value in combining different approaches. I would not say it was obvious from the beginning.”

But Fantun and Ventos, who herself comes from a symbolic A.I. background, see clear advantages in this kind of hybrid A.I. that can, ahem, bridge the gap between the symbolic and deep learning approaches. Unlike most deep learning methods, which are “black box” techniques in which the rationale behind any specific decision is difficult to discern, the way NooK incorporates symbolic logic and explicit probabilistic reasoning allows for clear explanations of the A.I.’s decisions.

Many deep learning techniques are designed either to replace humans or to serve as “decision support” for humans, who take insights from the A.I. The problem, Ventos says, is that the communication is all one-way: from machine to human. In Nukkai’s A.I. systems, the communication is designed to be two-way, with the A.I. also learning from humans. In the case of its bridge A.I., the rules of the game were hard-coded into the software, and human expertise was used to validate the probabilistic decisions the software developed.

Finally, this kind of hybrid A.I. is far more efficient in terms of the data it needs to ingest and the computing resources required for training—and that ultimately means it has a much smaller carbon footprint—than a pure deep learning system designed for the same task would be. Ventos said that NooK could learn from just 200 examples, as opposed to the millions or billions of examples that many pure deep learning systems require.

NooK uses neural networks mostly to figure out the opening strategies of the game. It then uses a technique found in many game-playing A.I.s—Monte Carlo simulations—to pick the best strategy from these openings, constrained by knowing the rules of the game and human knowledge about how signaling works. The rationale for why it chooses a particular strategy can be accessed in high-level representations that human bridge champions can understand and validate. This information can also then be fed back into the system to help refine its strategies. It also learns from experience—the more games it plays, the better it becomes.
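
Nukkai has not published NooK’s code, so the snippet below is only a hypothetical sketch of that general Monte Carlo idea: repeatedly deal the cards you cannot see at random, score each candidate line of play against each sampled layout, and keep the line with the best average result. The candidate lines, the abstract "cards," and the evaluate scoring function are all made-up stand-ins; a real bridge engine would score each layout with a rules-aware simulator such as a double-dummy solver.

```python
# A hypothetical sketch of Monte Carlo strategy selection under imperfect
# information. None of this is NooK's actual code; `evaluate` stands in for a
# rules-aware simulator of how a line of play fares against a given card layout.
import random
from statistics import mean

def monte_carlo_pick(candidate_lines, unseen_cards, n_opponent_cards,
                     evaluate, samples=1_000):
    """Return the candidate line of play with the best average score over
    many randomly sampled layouts of the cards we cannot see."""
    scores = {line: [] for line in candidate_lines}
    for _ in range(samples):
        layout = random.sample(unseen_cards, n_opponent_cards)  # one hypothetical deal
        for line in candidate_lines:
            scores[line].append(evaluate(line, layout))
    return max(candidate_lines, key=lambda line: mean(scores[line]))

# Toy usage with made-up "cards" and a made-up scoring function.
if __name__ == "__main__":
    unseen = list(range(20))                    # abstract stand-ins for unseen cards
    def toy_eval(line, layout):                 # hypothetical evaluation function
        return sum(layout) if line == "finesse" else 90.0
    best = monte_carlo_pick(["finesse", "drop"], unseen, 10, toy_eval)
    print("preferred line of play:", best)
```

Part of the appeal of this kind of search is that every recommendation comes with the sampled scenarios behind it, which dovetails with the human-checkable explanations described above.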

The part of bridge that NooK has conquered is called declarer play—in each game, it played the role of the player who decides which suit is trump, or that there will be no trumps, for that game. To be clear, this is not the same thing as being able to beat humans at the entire game, with all of its phases. Next, Nukkai wants to create a system that understands how to play for the other team in the game, which is known as “the defense.” Finally, Fantun would like to see if an A.I. can handle the bidding phase of contract bridge, which often involves a lot of communication and deception.

Fantun said that Nukkai, which was founded in 2018 with bridge as its grand-challenge goal, has already put its hybrid approach to several commercial uses. It has been working with the defense sector, which likes the idea of an A.I. that can master a game that is both collaborative and adversarial, that has a role for human input, and that can handle incomplete information. It is also working with a large airline and in cybersecurity. And it has built educational software that tailors lessons to each individual student’s learning style. Fantun said there has also been some interest in applying the techniques Nukkai has perfected with bridge to finance.

Perhaps bridge will demonstrate the value of a hybrid approach in other domains as well.

And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

DeepMind accused of mishandling sexual assault, harassment allegations. An anonymous former researcher at the Alphabet-owned A.I. company told The Financial Times that after she reported being sexually assaulted by a colleague, DeepMind took more than 10 months to resolve the case. She and a representative of trade union Unite also said that her case was just one of several similar incidents in which the company failed to deal promptly and adequately with sexual harassment, sexual assault, and bullying in the workplace. DeepMind told Fortune that it had thoroughly investigated the researcher's allegations against her colleague and dismissed the alleged perpetrator without severance payment. "We expect everyone—regardless of their role or seniority—to behave in a way that lives up to our values," the company said. It also said it had made changes to its policies and procedures since the incident. But the former researcher published an anonymous open letter calling on the company to do more.

Intel buys Israeli A.I. startup Granulate for a reported $650 million. The startup uses A.I. to figure out how best to distribute big computing workloads across on-premise servers and both public and private cloud-based datacenters so that they run as efficiently as possible. Intel is paying $650 million for the acquisition, sources told TechCrunch.

Meta's A.I. research team suffers a string of defections. The company has lost a number of top A.I. researchers in the past few months, according to CNBC. Most of these researchers have left to join rival A.I. labs and startups. Some outside commentators whom CNBC talked to attributed the staff attrition to the company's pivot to the metaverse, which they say may have alienated employees working on A.I. endeavors and research that was not as closely linked to the virtual reality world Meta CEO Mark Zuckerberg is trying to create. But Meta's chief A.I. scientist Yann LeCun, who co-founded the firm’s A.I. lab in 2013, denied there was any sudden jump in staff attrition, telling CNBC that “people have changing interest[s] and move on.” And Fortune has heard from other sources that Meta has been on a hiring spree for employees with expertise in computer vision, which may dovetail more directly with its metaverse ambitions.

EYE ON A.I. TALENT

The Healthcare Products Collaborative, which was formed by the Association of Food and Drug Officials (AFDO) and the Regulatory Affairs Professionals Society (RAPS), has announced that Timothy Hsu will serve in the new role of Healthcare Technology Innovation Director, overseeing, among other things, the collaborative's work on A.I. technology.

Disney has hired Jeremy Doig to be the chief information officer of its Disney Streaming division, according to The Hollywood Reporter. Doig previously held several roles at Google, where he worked on streaming for YouTube and on video technology for Google Chrome.

LinkedIn has hired Lei Yang as vice president of engineering to lead the Flagship Experience team, the company said in a blog post. She was previously a machine learning engineer at Twitter, Quora, and Google. The company also hired Donald Thompson as a distinguished engineer reporting directly to LinkedIn's chief technology officer, the company said in a separate post. Thompson spent many years at companies such as Splunk and Microsoft, where he founded the Microsoft knowledge graph.

EYE ON A.I. RESEARCH

Google says language algorithms keep getting better the larger they are. Ultra-large language models are A.I. systems that can learn to manipulate language and perform a wide variety of language tasks—translation, answering questions, composing novel passages of text, summarization—with little additional training. Usually, they are initially trained on a single task—such as filling in a hidden word in a sentence—using a huge body of books, documents, and information scraped from the Internet. Google has recently said that its own research on its Pathways Language Model (PaLM), which has some 540 billion parameters, shows that these systems become more capable across a range of tasks the larger they are, according to a story in tech publication The Register. PaLM showed improvement across a range of benchmarks compared to smaller systems.

This matters because these ultra-large language models are extremely expensive to train, and their increasing popularity has raised serious concerns about their carbon footprint. There are also ethical concerns because the algorithms tend to pick up a lot of human prejudices with regard to race, gender, and religion from their training data, and the models can often be prompted to use toxic or derogatory language. Some researchers have speculated that the performance of these systems will eventually plateau and argue that they do not represent a path towards real language understanding. They say the downsides of these large models are not worth the benefits. Others, however, are convinced that continuing to make ultra-large language models ever larger could be an important step towards artificial general intelligence. The Google research says that, so far, there's no sign of performance dropping off.
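
For a concrete feel for that fill-in-the-blank pretraining task mentioned above, the short snippet below queries a much smaller, publicly available masked language model (BERT) through the Hugging Face transformers library. PaLM itself is not publicly available, so treat this as an analogy rather than a demonstration of Google's model.

```python
# Illustrating the fill-in-the-blank objective with a small public model (BERT),
# not PaLM. Assumes the Hugging Face `transformers` package and a backend such
# as PyTorch are installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Bridge is a card [MASK] played by two teams of two."):
    print(prediction["token_str"], round(prediction["score"], 3))
```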

FORTUNE ON A.I.

Commentary: What do Starbucks, Tesla, and John Deere have in common? They’ve used A.I. to reinvent their businesses—by Francois Candelon, Bowen Ding, and Su Min Ha

The billionaire who sent robots to the moon is now working with Paris Hilton to cure deadly diseases—by Mahnoor Khan

Supercomputer expert wins $1 million ‘Nobel Prize of computing’—by Jeremy Kahn

Chinese tech giant banned by the U.S. has been an early winner from Russia’s war on Ukraine—by Vivienne Walt

BRAIN FOOD

U.S. Copyright Office says A.I.-created art cannot be copyrighted. The government body rejected an appeal of its previous decision to deny a copyright application that asked that the copyright for a work of art generated by an A.I. program be assigned to the software, according to an article in Smithsonian. The initial application and the appeal are part of a project by computer scientist Stephen Thaler to test the boundaries of intellectual property law in the age of A.I. in different jurisdictions worldwide. The U.S. has been firm so far in ruling that copyright can only be held by a human. This has caused consternation among some technologists who believe A.I. will increasingly be used to generate art, and who worry that the U.S. government's current position on intellectual property rights will make it very difficult for those using A.I. to help generate art and other media to protect their incomes.

But it seems to me that this is not a huge problem so long as authorship is attributed to whatever person or corporation deployed the A.I. system and told it to create the piece of art. The problem has less to do with the fact that the art is created by A.I. than with the fact that Thaler wants to give the A.I. itself legal ownership rights, as opposed to the company or person who used the A.I. What do you think? Any copyright lawyers among our loyal "Eye on A.I." readership who would care to weigh in? Please email us!
