Hello, and welcome to January’s special edition of Eye on A.I.
First off, if you want to learn more about how OpenAI ushered in what just might be A.I.’s Netscape Navigator moment, please read my cover story in the February/March issue of Fortune, which was published online yesterday. The story details how OpenAI co-founder and CEO Sam Altman transformed what was once a nonprofit research lab little known outside the circle of A.I. researchers into Silicon Valley’s buzziest startup, with billions in investment from Microsoft and a $30 billion valuation. And it walks through the potential implications—for both good and ill—of ChatGPT and Microsoft’s strategic partnership with OpenAI.
Now, I want to cover two very different but both very significant news items in this special issue. First, how do we measure and benchmark A.I. progress within an industry? Until now, the most common method has been to use self-reported surveys of executives within that industry. That’s what a lot of the big consulting and tech advisory firms do currently. And while that can be a good way to get a sense of perceptions of how A.I. adoption is progressing across a sector, surveys are usually not set up to allow ranking or benchmarking between firms within an industry. That’s where a new business intelligence firm called Evident comes in. Evident was co-founded by Alexandra Mousavizadeh, the economist who built Tortoise Media’s Global AI Index, which has become a key benchmark of countries’ A.I. progress. Now, she is aiming to do the same thing for companies. Evident counts business guru and podcaster Scott Galloway as an advisor.
Evident is starting with the banking sector, releasing its first A.I. index of global banks today. The index ranks the banks on 150 different indicators, divided into four main pillars which it calls Talent, Innovation, Leadership, and Transparency (a metric that includes a company’s responsible A.I. policies and governance procedures). The data for the Index comes from publicly available sources. The Talent and Innovation pillars are weighted more heavily in the Index’s final ranking than the other two pillars, although all of them are important, Mousavizadeh says.
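For readers curious about the mechanics, a composite ranking like this typically works by scoring each pillar, then combining the scores with unequal weights. Evident has not published its exact weights or formula, so everything below—the weights, the scores, and the bank labels—is a made-up sketch purely to illustrate how a weighted pillar index produces a final ranking:

```python
# Hypothetical weights: Talent and Innovation weighted more heavily,
# as the article says, but the actual values are not public.
PILLAR_WEIGHTS = {
    "Talent": 0.30,
    "Innovation": 0.30,
    "Leadership": 0.20,
    "Transparency": 0.20,
}

def composite_score(pillar_scores: dict) -> float:
    """Combine per-pillar scores (0-100) into one weighted index score."""
    return sum(PILLAR_WEIGHTS[p] * s for p, s in pillar_scores.items())

# Invented example scores for two unnamed banks:
bank_a = {"Talent": 90, "Innovation": 85, "Leadership": 80, "Transparency": 75}
bank_b = {"Talent": 70, "Innovation": 95, "Leadership": 85, "Transparency": 90}

# Rank banks by composite score, highest first.
ranking = sorted(
    [("Bank A", bank_a), ("Bank B", bank_b)],
    key=lambda item: composite_score(item[1]),
    reverse=True,
)
```

Note how the weighting matters: in this invented example, Bank B edges out Bank A overall despite a much weaker Talent score, because its Innovation and Transparency scores carry it—one reason a bank can rank highly on one pillar and still land mid-table overall.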
JPMorgan Chase tops the ranking, emerging head and shoulders above the others with the best score across all four of the key pillars. That’s not too surprising to those who’ve been following A.I. developments closely. The bank has been spending what CEO Jamie Dimon has said is “hundreds of millions of dollars per year” on A.I. efforts across the bank, which are in turn part of a broader technology drive on which the bank is spending an astounding $14 billion per year. It recruited Manuela Veloso, who had headed the prestigious machine learning department at Carnegie Mellon University, in 2018 to head up an in-house R&D lab that was modeled in some ways on Google Brain, Meta’s A.I. Research arm, and OpenAI. It has spent heavily to recruit other top A.I. talent too and even more on getting its data and cloud infrastructure in shape to support machine learning. And Dimon has been willing to repeatedly defend such heavy spending to skeptical Wall Street analysts.
What’s more surprising is where some of the other banks rank. In the number two slot is the Royal Bank of Canada, which might surprise some people. But again, the bank has invested smartly in recruiting talent from Canada’s well-respected academic machine learning labs, ranking seventh overall on Talent, and it has been able to use that talent efficiently, ranking third on the Evident AI Index’s Innovation pillar. In fact, Canada has two banks near the top of the index: Toronto-Dominion Bank is ranked sixth. Rounding out the top five are Citigroup, UBS Group, and Wells Fargo.
Also surprising is that some of the big Wall Street firms, such as Goldman Sachs and Morgan Stanley, only rank in the middle of the Index (11th and 10th respectively). Both banks fall down on Transparency (ranking 19th and 20th) since they are relatively secretive about what policies they have in place to govern the use of A.I. within their organizations and ensure responsible use.
Mousavizadeh says that many banks have told her that Evident’s index is providing them the first clear look at how they stack up compared to competitors and peers. “We have spoken to almost all the banks in the Index and they have all said that this mosaic of indicators is giving them the most accurate picture they’ve ever had of where we are as a bank in terms of our A.I. deployment,” she says. She also says that it is already clear from the Index that different banks are taking different approaches in terms of building their own A.I. capabilities versus buying products and services from outside software vendors.
But Mousavizadeh is also quick to point out the limitations of the Index: It can only assess capabilities, not necessarily how successfully those capabilities are being deployed in terms of financial returns on investment. “Right now, build versus buy is evenly weighted in the index, we are not passing judgment,” she says. “In terms of impact, the jury is out.” But in the future, it might be possible to correlate one particular A.I. development and deployment strategy with greater commercial success, she says.
While the list currently includes just 23 large global banks, Mousavizadeh tells me that Evident is in the process of expanding the Index to include more regional and digital-first banks. And she says that Evident’s goal is to expand its indices to other industries, with the goal of having data on 1,000 different companies indexed within four years. It is exactly this kind of benchmarking that companies—and their investors—are hungry for as they try to assess how well they are doing as A.I. enters its “industrialization” phase.
Now, I want to shift gears entirely and bring you another bit of news that shows why generative A.I. may really justify the hype currently surrounding the technology. A new research paper published today in the scientific journal Nature Biotechnology shows that a large language model, the same type of A.I. that underpins ChatGPT, can be used to design completely new proteins, directly from natural language instructions about what function the scientists want the protein to have. The scientists synthesized these proteins (in this case, enzymes) and benchmarked their efficacy against naturally occurring enzymes, finding that they were all highly effective—in many cases even more effective than their natural counterparts. What’s more, the A.I. did this straight “out of the box” with no specific fine-tuning for any particular enzyme category or functional requirement.
The research was carried out by Profluent Bio, a tiny San Francisco A.I. startup, working with scientists at the University of California, San Francisco. The researchers trained their system, a model with about 1 billion parameters (the statistical connections between data points that the model learns during training), on the 280 million natural proteins that have been genetically sequenced, and then incorporated tags about the function of those proteins. (This is somewhat similar to the way a text-to-image generator like DALL-E or Stable Diffusion learns which captions correspond to which images.)
Ali Madani, the former Salesforce A.I. researcher who is co-founder and CEO of Profluent, told me that the method is exciting because of its potential implications for rapidly advancing drug discovery. Because the system can generate novel proteins that bear little relationship to natural proteins and yet perform the same function, it could be used to find medicines that will be as effective as existing ones but produce fewer side effects. It could also help create new form factors for medicine (think pills rather than injections or IV drips) because researchers could specify in plain language what thermal stability profile the new protein should have. It could also possibly lower drug costs by increasing competition—because companies will be able to find novel proteins that do the same thing as ones that are already protected by patents. Proteins have other uses outside of medicine too, and tools such as Profluent’s could also be used to create enzymes for industrial customers and consumer products companies.
Profluent, which is also building its own wet lab to carry out protein synthesis and testing, is now looking for industry partners to help it put its techniques into practice, Madani says. And it is not the only company seeking to use generative A.I. models in this way: Cambridge, Mass.-based Generate Biomedicines is pioneering similar techniques in protein design and Absci, which is based in Vancouver, Washington, has produced new antibodies with generative models. And then there’s DeepMind co-founder and CEO Demis Hassabis, who is also now spearheading DeepMind spin-out Isomorphic Labs, which is using AlphaFold’s protein structure predictions and other methods, which may include generative A.I. techniques, to improve drug discovery.
And with that, here are a few additional news items in what is becoming a very busy time for A.I.
Correction, Jan. 27: This story has been updated to correct the spelling of the last name of the computer scientist JPMorgan Chase hired to lead its A.I. research lab. An earlier version of this story also misstated the amount JPMorgan Chase is spending on technology annually. The bank is spending $14 billion per year, not $12 billion.
A.I. IN THE NEWS
DeepMind lays off staff, closes Edmonton office. Bloomberg reported that the Alphabet-owned A.I. company is closing its office in Edmonton, Canada, which it opened with great fanfare in 2017. The company later confirmed the news and said that researchers affiliated with the office were being offered the opportunity to relocate to DeepMind’s other offices if they wished, but that support staff were being let go. Bloomberg, citing internal documents it had seen, also reported that some support staff at DeepMind’s London headquarters have been laid off. The cuts may be part of broader belt-tightening across Alphabet, including up to 12,000 job cuts at Google. The Edmonton office was opened in conjunction with DeepMind’s hiring of Richard Sutton, a famed expert in reinforcement learning at the University of Alberta, and it employed researchers from Sutton’s lab who were also experts in reinforcement learning, the technique that underpinned most of DeepMind’s early breakthroughs. It is not clear whether Sutton is remaining at DeepMind or what the closure may signal about the company’s future research priorities.
A.I. should be regulated like nuclear weapons, researchers tell U.K. parliament. A group of researchers from the University of Oxford told British lawmakers that advanced A.I. had the potential to destroy humanity and, as a result, should be as strictly regulated as nuclear weapons technology. The researchers highlighted the possibility that advanced A.I. could alter its own computer code in ways that would make it difficult for humans to control, The Week reported.
U.S. government agencies publish guidelines for A.I. in several areas. The U.S. National Institute of Standards and Technology (NIST) published a framework for how A.I. risks should be managed and mitigated. The framework will impact how U.S. government agencies, and companies contracting with the U.S. government, assess and manage A.I. risks. Meanwhile, the Pentagon released an updated set of guidelines for the development and deployment of autonomous weapons systems, making it clear that such weapons were likely to play a major role in any future conflict. The policy says that the humans deploying such weapons will bear ultimate responsibility for the consequences of their use. It also establishes a review process to oversee weapons that have autonomous capabilities, even in cases where those are added to existing weapons systems.