
He’s worried A.I. may destroy humanity. Just don’t confuse him with Elon Musk

November 13, 2020, 12:00 PM UTC

The Jeopardy! question for “eccentric genius plutocrat best known for his concerns about artificial intelligence destroying the human race” is, without question, “Who is Tesla CEO Elon Musk?” But the technology investor who has arguably done the most to address this potential—if theoretical—threat is much more obscure: His name is Jaan Tallinn.

Jaan who? Like Musk, Tallinn is an engineer of a certain age—Musk is 49, Tallinn is 48—who made his money on one of the early 2000s’ greatest dotcom success stories. For Musk, it was PayPal. For Tallinn, it was Skype.

An Estonian computer programmer who was one of the pioneers of peer-to-peer file-sharing technology, Tallinn cofounded Kazaa and later used similar technology to help build Skype, where he was a cofounder and one of the first engineering hires. He then took the money he made from Skype and became a prominent investor in other European tech startups. Although he’s not been as financially successful as Musk, Tallinn’s not done badly. (A rival business publication estimated his net worth at $900 million in 2019.)

Like Musk, Tallinn was an early investor in London A.I. company DeepMind, now part of Google parent Alphabet, and it was that experience that first ignited his concerns that superhuman artificial intelligence might destroy the human race.

Tallinn cofounded the Centre for the Study of Existential Risk at Cambridge University as well as the Future of Life Institute in the other Cambridge—Massachusetts, that is. He is also a prominent donor to the Future of Humanity Institute, the University of Oxford think tank devoted to existential risk founded by philosopher Nick Bostrom, whose views on the potential dangers of superintelligent machines also influenced Musk, another of the institute’s funders. Tallinn has also given money to the Machine Intelligence Research Institute, a Berkeley organization dedicated to ensuring “smarter-than-human artificial intelligence has a positive impact.” And, again like Musk, he was an early backer of OpenAI, the San Francisco A.I. research company initially established as a kind of counterweight to Google and DeepMind.

Now Tallinn has made an unusual donation to one of the technology companies he has previously backed. It’s unusual for three reasons: First, the money is a gift, not an investment. Second, the money will fund a program that is focused not on the existential threat of superhuman intelligence, but on the more mundane risks of today’s A.I., such as algorithmic bias, lack of transparency, and concerns about data privacy. Finally, the donation was made entirely in cryptocurrency.

The Estonian investor gave Faculty AI, a fast-growing London-based company that helps create machine-learning systems for companies and governments, 350 units of Ether, the coin associated with the Ethereum blockchain, in January 2018, worth about $434,000 at the time, and 50 Bitcoins in March 2020, worth about $316,000. The gifts are detailed in Faculty’s financial filings at the U.K. business registry Companies House, which are being made public this month. Tallinn had previously been a seed investor—using more conventional fiat currency—in Faculty. His investment company Metaplanet Holdings holds just under 9% of the company’s total shares, according to Companies House filings.

In recent months, Faculty, which has often drawn comparisons in the press to U.S. data analysis company Palantir, has been in the news for its work helping the U.K. government forecast the availability of ventilators and other medical equipment needed to address the COVID-19 pandemic. The contract, one of seven government contracts the company has recently received, was controversial because Faculty was awarded it outside the normal bidding process, and the government has refused to reveal the exact terms. Under a previous name, ASI Data Science, the company had helped the “Vote Leave” campaign in its successful push for Britain to leave the European Union and worked with Vote Leave’s director Dominic Cummings, who had been serving as a close aide to U.K. Prime Minister Boris Johnson. (Johnson abruptly dismissed Cummings Friday, according to U.K. news accounts.) Faculty also won an $800,000 contract from the U.K. Home Office to build an A.I. system that could detect terrorist propaganda on social media platforms.

The reason for donating the money to Faculty in cryptocurrency, Tallinn says, is that he keeps most of his personal wealth in that form, and converting it into cash would have resulted in an unnecessary capital gains tax bill, reducing the amount he could give.

The rationale for backing Faculty’s efforts to address the risk of today’s A.I. systems—instead of the existential risks from some future superintelligence—is that many of the considerations for making today’s A.I. less prone to bias and easier to understand could also reduce the risk of someone creating an A.I. that one day destroys the human race, Tallinn says. “Transparency and explainability are useful in current commercial settings, like, medical settings,” he tells Fortune. “However, it also might make it much safer to deploy something that is smarter than us.” For instance, such techniques might make it much easier to understand the intentions behind the actions of a smarter-than-human system.

Tallinn’s donation has helped Faculty to hire several experts in A.I. safety, company founder and chief executive Marc Warner says. The company’s safety research is organized around four pillars. “We believe that A.I. has to be fair, private, robust, and explainable,” he says.

Warner says that the public has been presented with a false choice between the safety and performance of A.I. systems, with some researchers and companies selling A.I. systems claiming that more transparent A.I. methods don’t work as well as more opaque techniques. “It’s just not true,” Warner says. He notes that cars have become both safer—with innovations such as headlights, windscreen wipers, seat belts, and airbags—and higher performing, and that the same thing can happen with A.I.

The company was commissioned by the U.K.’s Center for Data Ethics and Innovation to assess the latest approaches to A.I. fairness. It has built tools, presented at prestigious A.I. conferences, that help users understand how complex A.I. systems arrive at decisions. It has also researched ways to create machine-learning systems that are better at figuring out causal relationships in data, not just correlations. That’s an important safety consideration, especially when using A.I. in medical and financial settings, Warner says. And the company has researched mathematical techniques to reveal and guard against bias in A.I. algorithms.

As for the safety of holding cryptocurrency on its books, that’s another matter. Warner admits that accepting Tallinn’s donation created a bookkeeping headache for Faculty. “Our accountants had to go and find someone else who was working on crypto and how to do accounting for crypto,” he tells Fortune.

Ultimately, the company listed the cryptocurrency on its books as an intangible asset. That means its value will be amortized over time, and any significant fall in the value of Ether or Bitcoin will result in Faculty taking an impairment charge. But if either cryptocurrency soars in value, that gain won’t be reflected on Faculty’s books. While this is the current consensus among accountants about how to handle cryptocurrency assets, it remains controversial, with some experts arguing cryptocurrency should be treated like any other financial instrument.

Faculty did sell about $144,000 worth of Ether between March 2019 and March 2020, according to its annual accounts. But given how cryptocurrency has been appreciating—the value of Bitcoin is up 300% so far this year, and Ether is up 144% year to date—the company seems happy to let most of Tallinn’s cryptocurrency donation remain on its books for a rainy day. After all, it might come in handy if A.I. safety efforts fail and our robot overlords arrive unexpectedly.

This story has been updated to include news of Dominic Cummings leaving his position as a senior aide to U.K. Prime Minister Boris Johnson.