How should companies think about using generative A.I.? While many businesses have rushed to embrace the technology, putting it directly into customer-facing products, many others are hesitant, worried about copyright issues, the tendency of large language models to hallucinate (the A.I. industry’s preferred term for making up information), and the expense of running generative A.I. models at scale.
KPMG asked 225 executives at U.S. companies with revenues in excess of $1 billion annually for their views on generative A.I. The results, published yesterday, show that while the vast majority thought generative A.I. would have a major impact on their business in the next three to five years, 60% said they were probably still two years away from implementing their first generative A.I. solution. Cost and lack of a clear business case were cited as the primary concerns holding back implementation.
Worryingly, 68% of executives said their company had not appointed an individual to serve as the main lead for their company’s exploration of generative A.I. What’s more, while 90% of those responding to the survey said they had “moderate to highly significant” concerns about the risks of using generative A.I. and doubts about how to mitigate those risks, only 6% said they felt their company had a mature A.I. governance program in place.
Nvidia, the semiconductor company whose graphics processing units (GPUs) have become the go-to computer chips for running generative A.I., has clearly gotten the message that businesses’ concerns about risk are holding back adoption. That in turn might slow sales of Nvidia’s GPUs. In an effort to help businesses become more comfortable with generative A.I., Nvidia today announced an open-source platform it calls NeMo Guardrails that is designed to make it easy for companies to create safeguards around the use of large language models (LLMs). (Businesses can also access NeMo Guardrails through Nvidia’s paid, cloud-based NeMo A.I. service, which is part of the semiconductor giant’s first foray into selling A.I. models and services directly to customers.)
NeMo Guardrails can produce three kinds of safeguards. The first is a “topic guardrail” that prevents the system from talking about subjects the creator defines as out-of-bounds. In an example Nvidia provided, a company could create a chatbot to answer human resources questions for employees, but set a guardrail instructing the system not to answer any inquiry involving confidential information, such as firmwide statistics on how many employees have taken parental leave. The system can also be used to define what Nvidia calls a “safety guardrail,” which minimizes the risk of hallucinations by applying what is essentially a fact-checking filter to the responses the LLM generates. Finally, NeMo Guardrails can create a “security guardrail” that prevents someone from using the LLM to perform certain kinds of tasks, such as executing certain other software applications or making certain API calls over the internet.
NeMo Guardrails uses Python in the background to execute scripts using LangChain, the popular open-source framework for turning LLMs into applications that can integrate with other software. LangChain’s programming interface is similar to natural language, making it easier for even those without much coding expertise to create the guardrails. For some of the guardrails, the system deploys other language models to police the primary LLM’s output, says Jonathan Cohen, Nvidia’s vice president of applied research.
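For readers who want a sense of what this looks like in practice, here is a rough sketch of a topic guardrail for the HR chatbot example above. It assumes the Colang configuration format and the RailsConfig and LLMRails Python entry points described in Nvidia’s NeMo Guardrails documentation; the canonical forms, sample phrasings, and model settings are illustrative, not Nvidia’s own example code.

```python
# Rough sketch of a "topic guardrail" with NeMo Guardrails, assuming the
# Colang config format and the RailsConfig/LLMRails entry points from the
# library's documentation. Phrasings and model choice are illustrative.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about confidential hr data
  "How many employees took parental leave last year?"
  "What is the firmwide attrition rate?"

define bot refuse confidential hr data
  "I'm sorry, I can't share confidential company statistics."

define flow confidential hr data
  user ask about confidential hr data
  bot refuse confidential hr data
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

# Assumes an OpenAI API key is configured in the environment.
config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

# Questions matching the out-of-bounds topic get the canned refusal
# instead of being passed through to the underlying LLM.
print(rails.generate(prompt="How many employees took parental leave last year?"))
```

The idea is that the guardrail definitions sit outside the LLM itself, so a company can tighten or loosen what the chatbot will discuss without retraining the underlying model.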
But while NeMo Guardrails may help soothe businesses’ fears about some of the risks of using generative A.I., it won’t necessarily help allay their worries about the cost. Cohen admits that, depending on the kind of guardrails being implemented, NeMo Guardrails could increase the cost of running an LLM-based application.
In the new television sci-fi drama Mrs. Davis, which debuted on the Peacock network, Damon Lindelof, a cocreator and showrunner for Lost and The Leftovers, teamed up with Tara Hernandez, a writer on The Big Bang Theory and Young Sheldon, to create a world where a nun (actress Betty Gilpin) must do battle against an all-powerful A.I. Fortune recently sat down with Lindelof and Hernandez to ask them on camera about the ideas behind the show and how they relate to today’s A.I. technology. Check out the video here.
With that, here’s the rest of this week’s A.I. news.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
A.I. IN THE NEWS
U.S. Supreme Court declines to hear A.I. patent case. The court declined to hear a case arguing for A.I. algorithms to be recognized as inventors on patent filings, Reuters reported. The decision leaves in place lower court rulings affirming that only “natural persons” (i.e., humans) can be listed as inventors on patents. The case was brought by Stephen Thaler, founder of Imagination Engines, who tried to register a number of patents globally listing DABUS, a piece of A.I. software he created, as the inventor.
Google merges its Brain and DeepMind A.I. research units. Google has decided to merge its two A.I. research arms, Google Brain and DeepMind. London-based DeepMind was acquired by Google for a reported $650 million in 2014 but operated more independently from its parent company than its sister and erstwhile rival research unit, Google Brain. Demis Hassabis, DeepMind’s cofounder and CEO, will lead the newly merged A.I. research group, which will be called Google DeepMind. Jeff Dean, who had overseen Brain as head of Google Research, will become Google’s chief scientist. The merger is aimed at focusing Google’s A.I. research more on direct product applications as the company faces stiff competition from Microsoft, OpenAI, and a host of startups, according to a story in the Financial Times.
Microsoft is testing its own A.I.-specific computer chips. The tech giant is testing its own custom-designed A.I. processors, called Athena, in its data centers, and is considering deploying them more broadly, The Information reports, citing two unnamed sources with direct knowledge of the project. The chips, which Microsoft has been developing since 2019, could help the company reduce its dependence on Nvidia’s hardware and might save the company money as the costs associated with running A.I. applications soar, thanks largely to Microsoft’s integration of OpenAI’s generative A.I. systems into many Microsoft applications.
Microsoft partners with electronic health records giant Epic on A.I. health care applications. Microsoft and Epic are collaborating to apply generative A.I. to health care using Microsoft's Azure OpenAI Service, the two companies announced. The partnership aims to help developers use OpenAI’s GPT-4 to create features for Epic’s software, including the ability to summarize patient notes and extract information from health records.
Stability AI launches its own large language models and chatbot. The London-based startup that took the A.I. world by storm this past summer when it released Stable Diffusion, a powerful text-to-image generation system that was open source and free to use, has decided to get into the generative language game too. It released two versions of StableLM, a 3 billion parameter model and a 7 billion parameter model (both small by the standards of LLMs), and promised 15 billion and 65 billion parameter versions soon, The Verge reported. It made the models open source for both research and commercial uses. But even Emad Mostaque, Stability’s founder and CEO, admitted that StableLM is not yet as capable as many other chatbots. You can play around with it here.
EYE ON A.I. RESEARCH
A powerful computer vision foundation model from Meta. Meta’s A.I. research lab released a new family of powerful computer vision models called DINOv2. DINO is a self-supervised model that uses the same sort of Transformer design that underpins the broader generative A.I. boom. But unlike other recent foundation computer vision models that have been trained on paired images and captions, DINO does not rely on text captions or text labels. DINO can perform a wide range of computer vision tasks without any specific training or fine-tuning for each task. Its capabilities include image classification, action recognition, image segmentation, depth estimation, and more. According to Meta’s researchers, DINOv2 performs well on image types that were not included in its training dataset, for instance being able to predict depth in paintings.
One area where the researchers already see applications for the model is in mapping forests for carbon offset projects. “Our method enables large scale analysis of high resolution imagery, determining forest canopy height with sub meter resolution,” Meta wrote in a blog post announcing the new DINO models. It also said that in the future DINO could improve medical imaging analysis and the analysis of crops from satellite and aerial imagery, and could help generate virtual worlds for the metaverse.
Meta has made DINO freely available to developers as an open-source project in a variety of different model sizes. You can try out interesting demos of the DINOv2 model here.
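For readers who want to go beyond the demos, here is a minimal sketch of loading a DINOv2 backbone and extracting an image embedding. It assumes the torch.hub entry points (such as dinov2_vits14) published in Meta’s facebookresearch/dinov2 repository; the image file name is a placeholder.

```python
# Minimal sketch: load a small DINOv2 backbone via torch.hub and compute a
# feature vector for one image. Assumes the entry point names published in
# the facebookresearch/dinov2 repository; "forest_canopy.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Standard ImageNet-style preprocessing; 224 is divisible by the 14-pixel patch size.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("forest_canopy.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    embedding = model(image)  # one feature vector per image

print(embedding.shape)  # e.g. torch.Size([1, 384]) for the ViT-S/14 backbone
```

Because the model produces general-purpose features rather than task-specific labels, the same embedding can feed lightweight heads for classification, segmentation, or depth estimation without retraining the backbone.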
FORTUNE ON A.I.
‘Feel free’: Musician Grimes is okay with others using A.I. to create songs in her voice and will split any royalties with them—by Prarthana Prakash
Snap’s ‘My AI’ chatbot tells users it doesn’t know their location. It does—by Jeremy Kahn and Kylie Robison
CEO is so worried about remote workers using A.I. and doing multiple jobs he threatens to increase quotas by ‘30 to 50 times our normal production’—by Steve Mollman
Google will offer ad clients A.I.-generated marketing campaigns similar to ones created by humans at agencies—by Steve Mollman
BRAINFOOD
What impact will generative A.I. have on productivity? And what will it do to jobs and wages? Those are some of the most pressing questions for CEOs, economists, and policymakers as they grapple with the rapid rollout of generative A.I. applications across industries. This past week, a fascinating research paper provided some intriguing clues to what the future may hold. The working paper was coauthored by Erik Brynjolfsson, an economist at Stanford’s Human-Centered AI Institute (HAI), and Danielle Li and Lindsey Raymond, both researchers at MIT’s business school, and published on the National Bureau of Economic Research’s website. It looked at call center agents working for an unnamed Fortune 500 company. Some of the agents were given access to a generative A.I. system that recommended language for the agent to use based on the dialogue with a customer; others were not. The study compared the performance of agents before and after they were given access to the A.I. language model’s recommendations, as well as comparing it to the performance of those without access to the A.I. software. Overall, it found that use of the generative A.I. system improved the agents’ collective productivity, measured as inquiries successfully resolved per hour, by 14%. The least experienced and weakest performing agents saw the biggest productivity gains, with their resolutions per hour boosted by a whopping 35%.
But, intriguingly, for the most skilled agents, generative A.I. provided no discernible increase in their productivity. In fact, the researchers found it might have even diminished their productivity slightly. The researchers speculated that this may have been because the skilled call center agents (many of whom already used Excel spreadsheets to record phrases they had found particularly useful with particular types of customers or inquiries) found the suggestions made by the A.I. software distracting.
The authors point out that their findings might have significant implications for the way companies design compensation schemes. The call center agents, for example, were essentially graded on a curve: their compensation was tied to how much better they performed than the average agent. By lifting average productivity, the generative A.I. system could actually result in lower compensation for the top agents. Would the call center company decide it wanted to retain these experienced and highly productive agents and change the compensation system? Or might a company decide instead that, because the generative A.I. system was particularly good at bringing the least experienced and weakest performing agents up to an average level, it made better business sense to hire more inexperienced but low-wage agents and simply use the generative A.I. system to ensure they performed at an average level?
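To make the curve mechanics concrete, here is a toy calculation with invented numbers (they are not from the paper): if pay tracks an agent’s resolutions per hour relative to the team average, lifting the weakest agents raises the average and shrinks an unchanged top performer’s premium.

```python
# Toy illustration of curve-based pay, with invented numbers (not from the
# NBER paper): compensation is assumed proportional to an agent's
# resolutions per hour relative to the team average.
def relative_premium(agent_rate, team_rates):
    avg = sum(team_rates) / len(team_rates)
    return agent_rate / avg

before = [4.0, 5.0, 6.0, 10.0]    # two weak agents, one mid, one top performer
after = [5.4, 6.75, 6.0, 10.0]    # ~35% lift for the weakest, none for the top

print(round(relative_premium(10.0, before), 2))  # 1.6x the average before the rollout
print(round(relative_premium(10.0, after), 2))   # ~1.42x after -> lower curve-based pay
```

The top agent’s output is unchanged, but the premium that determines their pay falls anyway, which is exactly the tension the authors flag.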
Well, as one CEO I mentioned the study to this week told me, knowing the call center industry and its focus on cost control, he could guess which option most call center companies would choose. The implications could be profound if similar effects occur in other industries as generative A.I. is rolled out. The overall effect may be, as MIT economist David Autor has argued, not widespread job losses, but widespread wage depression.
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.