Like all pharmaceutical companies, Bristol-Myers Squibb is required to conduct “post-market surveillance.” That is, once a drug is approved and being sold, the company needs to know if any patients are experiencing adverse reactions or unusual side effects that didn’t crop up during the clinical trials. It also needs to be on alert for signs that the medicine is being prescribed “off-label” (meaning for conditions for which it has not been formally approved by medical regulators) or that people are abusing the medicine in some way, maybe as a recreational drug. But while there are official channels for reporting adverse reactions, discovering off-label uses and abuse is much trickier. Recently Bristol-Myers Squibb decided to see if A.I. could help.
The pharma giant teamed up with Cortical.io, a startup based in Vienna, Austria, that specializes in natural language processing software, to run a pilot project. Cortical’s software uses a technique called semantic folding, which is based on a theory about how one part of the brain, the neocortex, represents information. The technique represents words with similar meanings as lying close to one another on a two-dimensional grid, giving each word a unique “semantic fingerprint.” This differs from other, somewhat newer natural language techniques, such as ultra-large language models, which in effect map how far a given word tends to sit from every other word in a language, averaged across a huge training set of text.
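To make the fingerprint idea concrete, here is a toy sketch in Python. It is not Cortical’s implementation, and the words, grid size, and grid positions below are all invented for illustration; it only shows how sparse binary fingerprints on a grid can be compared by how much they overlap.

```python
# Toy illustration of the core idea behind semantic folding: each word is a
# sparse binary "fingerprint" -- a small set of active positions on a 2D
# semantic grid -- and similarity is measured by fingerprint overlap.
# The fingerprints here are invented; a real system derives them from text.

GRID_SIZE = 128 * 128  # a 128x128 semantic grid, flattened to positions 0..16383

# Hypothetical fingerprints: active grid positions for each word.
fingerprints = {
    "aspirin":   {17, 203, 910, 4521, 9077, 15002},
    "ibuprofen": {17, 203, 911, 4521, 9077, 14980},
    "guitar":    {88, 3090, 6001, 7777, 12345, 16000},
}

def overlap_similarity(word_a, word_b):
    """Jaccard overlap: shared active positions divided by total positions."""
    a, b = fingerprints[word_a], fingerprints[word_b]
    return len(a & b) / len(a | b)

print(overlap_similarity("aspirin", "ibuprofen"))  # -> 0.5 (shares 4 of 8 positions)
print(overlap_similarity("aspirin", "guitar"))     # -> 0.0 (nothing shared)
```

Because the fingerprints are sparse sets rather than dense vectors, comparisons like this are cheap, which hints at why the approach needs less compute than an ultra-large language model.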
One advantage of Cortical’s method is that it takes less data and less computing power to train and run the A.I. system than would be the case with an ultra-large language model. “That was one of the major appeals,” says Brian Dreyfus, an epidemiologist in the Bristol-Myers Squibb worldwide safety department, who worked on the project.
Taylor Peer, Cortical’s director of data science, said his company initially trained its A.I. software on hundreds of thousands of unlabeled documents, including Wikipedia articles about pharmaceutical drugs and medicines as well as material from medical journals and websites, so that it could learn the meaning of medical terms. The company then fine-tuned this model on a couple hundred Reddit posts about off-label drug use that had been curated and annotated by humans, teaching it to classify such posts.
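As a rough illustration of that second, fine-tuning stage (everything below is invented: the fingerprints, the posts, and the labels, and a real system would use learned representations and far more examples), a frozen set of pretrained word fingerprints can support a simple nearest-example classifier:

```python
# Sketch of fine-tuning on top of frozen pretrained representations: a post's
# fingerprint is the union of its words' fingerprints, and a new post is
# labeled by its most similar labeled example (Jaccard overlap).
# All fingerprints and posts are invented for illustration.

fingerprints = {
    "sleep": {3, 40, 512}, "insomnia": {3, 41, 512}, "headache": {90, 200, 777},
    "prescribed": {5, 60, 1024}, "offlabel": {7, 88, 2048},
}

def post_fingerprint(words):
    fp = set()
    for w in words:
        fp |= fingerprints.get(w, set())  # unknown words contribute nothing
    return fp

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# A "couple hundred" labeled posts in the real project; two suffice here.
labeled = [
    (["offlabel", "sleep", "insomnia"], "off-label"),
    (["prescribed", "headache"], "on-label"),
]

def classify(words):
    fp = post_fingerprint(words)
    return max(labeled, key=lambda ex: jaccard(fp, post_fingerprint(ex[0])))[1]

print(classify(["insomnia", "sleep"]))  # prints "off-label"
```

The point of the sketch is that because the heavy lifting happened during pretraining, only a small labeled set is needed to specialize the model, which matches why the project required just a couple hundred annotated posts.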
Once the model was trained, Cortical set it loose on 2.2 million Reddit posts that mentioned medications, looking for comments that either mentioned or suggested off-label use of six different drugs. Three of these drugs were a “test set”: drugs that Bristol-Myers Squibb already strongly suspected were being used off-label, based on a review of academic research, media reports, and a human review of some of the social media posts. The other three were a “negative control group” of drugs for which Bristol-Myers Squibb did not believe off-label use was taking place. None of the drugs in the study were made by Bristol-Myers Squibb.
The A.I. system found significant mentions of off-label use, ranging from 4.7% to 16.5% of posts, for the test set. For two of the controls, it found only 0.5% and 0.2%. But surprisingly, for the third control, Suboxone, a drug made by the Reckitt Benckiser Group and often used to treat opioid addiction, it flagged a lot of posts (6%) as indicating off-label use. Dreyfus said that when Bristol-Myers Squibb investigated further, it found that Cortical’s A.I. was correct: there were off-label uses of Suboxone that the pharma company had never picked up on before.
Bristol-Myers Squibb is now looking at whether to deploy this kind of system more broadly to conduct regular post-market surveillance of social media.
The example illustrates a number of key themes in today’s A.I. One is the importance of advances in natural language processing. Dreyfus told me that in the world of patient safety, most researchers had been skeptical about the value of social media because it seemed so difficult to mine for insights: it would have required armies of human reviewers, and simple keyword searches were never going to cut it. But this project showed that natural language processing could indeed yield results.
Another big trend is that companies are looking for methods that are less data-intensive, or that at least don’t require a lot of human-labeled data, because such data is difficult and expensive to gather and annotate. They also prefer A.I. algorithms that don’t take massive amounts of computing power to train and run, because that, too, is very expensive.
Finally, Cortical’s semantic folding method shows that there’s value in researchers continuing to try to draw inspiration from how the human brain processes and encodes information. Too many of today’s A.I. efforts use neural networks—which are themselves brain-inspired—but otherwise don’t explicitly try to mimic how the brain processes and stores information. Given that the human brain is the most advanced natural intelligence we know of, it seems wise to continue to look to it for insights.
With that, here’s the rest of this week’s news in A.I.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
January 26: This story has been updated to clarify that none of the drugs examined in the pilot project are manufactured by Bristol-Myers Squibb and to provide more information about the method the company’s researchers used to find drugs for the test and control groups.
A.I. IN THE NEWS
IBM sells part of its Watson Health division to private equity group. Investment firm Francisco Partners is buying up extensive parts of the A.I.-enabled business, including data sets and imaging software, for more than $1 billion, according to Bloomberg News. Launched in 2015, Watson Health never lived up to its promise to revolutionize cancer treatment and the unit had remained unprofitable, despite IBM spending about $4 billion on acquisitions to bolster it. IBM says it remains committed to its other Watson-branded A.I. products.
The U.S. Defense Department is pushing ahead on research to have A.I. fly fighter planes. The Defense Advanced Research Projects Agency (DARPA) had previously shown in computer-simulated dogfights that A.I. pilots can trump human fighter jocks. By 2024, it plans to put such systems in real jets and see how they perform in live dogfights over Lake Ontario, according to a feature in The New Yorker. The story raises interesting questions about how far the military will push the use of autonomous weapons and, more broadly, what it will take to establish trust in A.I.-guided decision-making.
The supercomputer of the Metaverse. Facebook-parent Meta said it has built a supercomputer intended for A.I.-specific tasks, such as training the company’s humongous language models that can understand and respond to text. Meta plans to keep improving the machine this year and eventually equip it with a whopping 16,000 Nvidia GPUs, the computer chips commonly used to power machine learning.
Consolidation hits self-driving. Canadian automotive company Magna acquired Optimus Ride, a startup specializing in autonomous shuttles, for an undisclosed price. Magna said it would gain 120 employees from Optimus, which was founded in 2015 as a spinout from the Massachusetts Institute of Technology.
A.I. animation. Video game technology company Unity said it bought the startup Ziva Dynamics for an undisclosed amount. Ziva specializes in using machine learning to create realistic animations and digital movements in video games, 3D animation, and movies.
EYE ON A.I. TALENT
Venture capital firm Greylock hired Mustafa Suleyman, the co-founder of the high-profile A.I. research firm DeepMind, which Google bought in 2014 for $650 million. In 2019, Google placed Suleyman on leave, but didn’t say why, after reports emerged of bullying behavior by the executive toward staff. In an interview with Greylock’s Reid Hoffman, posted on Greylock’s website, Suleyman said “I remain very sorry about the impact that that caused people and the hurt that people felt there.”
Ocient hired Ian Drury to be the data analytics startup’s chief technology officer. Drury was previously a general partner at OCA Ventures.
EYE ON A.I. RESEARCH
A.I. learns from so much. Facebook parent Meta published a non-peer-reviewed research paper detailing a new machine learning technique called data2vec. The authors write that the technique helps neural networks, the underlying software used for deep learning, learn from many kinds of data, such as images, text, and speech, using the same method. data2vec is promising because it reduces the time needed to train neural networks while producing more capable A.I. software that can perform a variety of tasks across different types of data.
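At a high level, data2vec is reported to work as a teacher-student scheme: a teacher network encodes the full input, a student encodes a masked copy, and the student learns to predict the teacher’s representations at the masked positions. The sketch below is a toy of that loop’s shape only; the `encode` function and all numbers are stand-ins invented for illustration, not Meta’s model.

```python
# Toy sketch of the shape of data2vec-style training: a "teacher" encodes the
# full input, a "student" encodes a masked copy, and the loss pushes the
# student's outputs at masked positions toward the teacher's targets.
# The encoder here is a trivial stand-in; real models are deep Transformers.

def encode(sequence, weight):
    # Stand-in encoder: each position's representation mixes its neighbors.
    n = len(sequence)
    return [weight * sum(sequence[max(0, i - 1):i + 2]) for i in range(n)]

def masked_regression_loss(inputs, masked_positions, student_w, teacher_w):
    targets = encode(inputs, teacher_w)            # teacher sees everything
    masked = [0.0 if i in masked_positions else x  # student input is masked
              for i, x in enumerate(inputs)]
    preds = encode(masked, student_w)
    # Mean squared error, computed only at the masked positions.
    return sum((preds[i] - targets[i]) ** 2
               for i in masked_positions) / len(masked_positions)

x = [0.2, 0.5, 0.1, 0.9]
print(masked_regression_loss(x, {1, 2}, student_w=0.5, teacher_w=0.5))  # roughly 0.09
```

Because the prediction target is a learned representation rather than raw pixels, words, or audio samples, the same loss works regardless of the input type, which is what lets one recipe cover images, text, and speech.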
“People experience the world through a combination of sight, sound and words, and systems like this could one day understand the world the way we do,” Meta CEO Mark Zuckerberg said in a statement about the research. “This will all eventually get built into AR glasses with an AI assistant so, for example, it could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks."
FORTUNE ON A.I.
Responsible A.I. can’t exist without human-centered design—By Mahesh Saptharishi
What Bank of America’s chief data scientist thinks about getting a master’s degree in the field—By Sydney Lake
To combat inflation, Biden tells Congress it must gift US chip industry billions—By Christiaan Hetzner
Men are creating A.I. girlfriends, verbally abusing them, and bragging about it on Reddit—By Amiah Taylor
Hong Kong’s mass hamster cull prompts an NFT protest as animals are ‘resurrected’ online—By Yvonne Lau
BRAIN FOOD
The A.I. gamemaster. What fun is playing a game when we know that, in the end, computers will always defeat even the best human? Recent advances in A.I., such as DeepMind’s AlphaZero software, which learned to dominate games like chess and Go, have ushered in a new era in which computers pose an existential dilemma for human players. Two articles recently published in The Wall Street Journal and The New York Times explore this dilemma.
As poker master Doug Polk told the Times in an article about A.I.’s impact on the card game, “I feel like it kind of killed the soul of the game.” He added that A.I. changed the game “from who can be the most creative problem-solver to who can memorize the most stuff and apply it.”
Meanwhile, The Wall Street Journal reviewed the book Seven Games: A Human History by author Oliver Roeder, who had this to say about A.I.’s impact on chess: “The sole source of originality in chess is now the machine.”
From the review: In the end, we’re all going to have to learn to stop worrying and love the computer. That, at least, is Mr. Roeder’s view. “In stark ways,” he concludes, “the prevalence of superhuman chess machines in the world of professional chess is a glimpse into our own civilian future, when AI technologies will seep into our personal and professional lives.”