
Wharton’s A.I. expert predicts the future of artificial intelligence in business

June 2, 2020, 2:07 PM UTC

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Kartik Hosanagar has taught a course on the business impact of emerging tech for 17 years at the University of Pennsylvania’s Wharton School of Business. Lately, interest in the portion of the class that deals with artificial intelligence has been so great that Hosanagar and Wharton last month announced a whole new initiative called Wharton AI for Business.

The project includes a new course on the business implications of A.I. as well as a guest lecture series. The whole thing is backed by a $5 million gift from Wharton alumni Tao Zhang, who founded the food delivery app Dianping, and Selina Chin, who runs the Singapore-based Blue Hill Foundation.

I spoke to Hosanagar about what he sees happening with A.I. in business, especially in light of the pandemic. He tells me the immediate pressures—both practical and financial—may make it difficult for companies to think more strategically about how to use A.I.

He says companies will be thinking about how to use A.I. to help their employees, many of whom are working from home, be more productive. That may involve things like automating scheduling and I.T. support. And many companies will be looking for ways in which A.I. can save them money. “I think there are going to be a lot of cases of using A.I. for cost cutting and efficiency,” he says. He predicts factory automation and robotics will accelerate.

These efforts, however, are not where the real gains from A.I. lie. The most successful companies, he says, will be those that are able to take a big step back and think about how they can use A.I. to do something far more strategic. “The question is, what can you do uniquely with that technology that everyone else can’t do?”

He uses the example of streaming platforms such as Netflix or Amazon and entertainment companies such as Disney. They face a dilemma because social distancing has disrupted their content creation pipelines: With actors and film crews unable to work in close proximity, production on many shows has had to pause. That may mean a looming gap in new content.

Hosanagar says studios could turn to advanced A.I., similar to the technology used to create deepfakes, to create visually realistic content without actors or film crews. “Those that are able to do this might be able to get more films and TV shows out in a period of scarcity,” he says. “But not every studio can change production the same way.”

Figuring out how best to use A.I. strategically requires a realistic understanding of what your company is actually capable of—not only whether you have the right tech chops, but also what your customers want and how A.I. fits with your overall brand and market position.

Much has been written about how Covid-19 may result in further concentration of corporate power. But Hosanagar thinks the pandemic may actually present an opportunity for startups too by removing some of the advantage of having vast amounts of historical data. “With this discontinuity, everyone with those massive datasets has to be cautious,” he says. “It doesn’t totally completely level the playing field, but it does bring us closer.”

Like some other folks I’ve spoken to for this newsletter, Hosanagar thinks the pandemic will accelerate the use of unsupervised learning algorithms, which don’t need big, labeled data sets to train on. He also thinks there is a greater role for reinforcement learning, in which A.I. software learns from simulated experience. And he thinks that A.I. systems in the future will have to incorporate a wider diversity of data to become more robust to disruptions such as those caused by Covid-19.
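(A quick aside for technically minded readers: the short Python sketch below illustrates the distinction Hosanagar is drawing. The synthetic data and settings are my own assumptions for illustration only; the point is simply that an unsupervised method recovers structure without ever seeing a human-provided label.)

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two blobs of unlabeled points: no human-annotated labels anywhere.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

# An unsupervised learner discovers the two groups on its own; a supervised
# classifier would have needed a labeled example for every point.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])  # cluster assignments, learned label-free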

As for business strategy, Hosanagar says we are still a long way from A.I. being able to make those kinds of recommendations. “A.I. excels at tasks that are highly repetitive,” he says. “Corporate strategy is not highly repetitive. It is highly creative and there is a lot of projecting out scenarios in situations where there is not a lot of data.” The big decisions that shape the fate of companies will remain in human hands, at least for the foreseeable future.

And with that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@Jeremyakahn

A.I. IN THE NEWS

Trump signs order potentially banning many Chinese students from studying A.I. in U.S. graduate programs. Donald Trump signed an executive order that took effect on Monday barring anyone affiliated with companies or entities that have assisted China's "military-civil fusion strategy" from obtaining a visa for graduate study in the U.S., according to this story in The National Law Review. The order would apply to many who have worked for Chinese tech companies or universities that have received funding or done work for the Chinese government. The ban is not as broad as one pushed by Republican Senator Tom Cotton that would bar all Chinese students from studying any S.T.E.M. subject in the U.S., but it is a step in that direction. 

ACLU sues Clearview over "privacy destroying" face recognition technology. The American Civil Liberties Union filed suit against the controversial startup in Illinois, a state with strict laws protecting citizens' biometric data from being used without their consent. The company has claimed its software can recognize almost anyone—but it gathered its data by scraping images from publicly accessible social media without asking permission, often in apparent violation of those sites' terms of service. “If left unchecked, Clearview’s product is going to end privacy as we know it,” ACLU lawyer Nathan Freed Wessler told The New York Times. A company lawyer claimed it had a First Amendment right to gather the data.

Amazon in talks to buy self-driving startup Zoox. At least, that's what The Wall Street Journal reported last week. Citing people familiar with the matter, the paper said that the Everything Company was in "advanced talks" to acquire the once high-flying autonomous car company at a valuation below the $3.2 billion it achieved in a 2018 funding round. Analysts told my Fortune colleague (and Eye on A.I. partner in crime) Jonathan Vanian that the move may be aimed at Amazon's ambition to one day operate a fleet of driverless delivery vehicles.

Microsoft replaces journalists with A.I. software. Microsoft is laying off "dozens" of reporters and editors who work for its Microsoft News and MSN.com products and replacing them with A.I. software that can automatically select news stories and compose headlines. "Microsoft has been using A.I. to scan for content and then process and filter it and even suggest photos for human editors to pair it with," according to a story in The Verge. Business Insider, which first reported the layoffs, said that about 50 journalists in the U.S. would be affected, while The Guardian said that 27 journalists who work for PA Media in the U.K. would be losing their jobs. Formerly known as the U.K. Press Association, PA Media is the British equivalent of the Associated Press and is owned by a consortium of British newspaper groups. It had a contract with Microsoft to curate MSN content.

EYE ON A.I. TALENT

News Break, a Silicon Valley-based news aggregator backed by several Chinese investors including gaming company NetEase, has appointed Harry Shum as chairman, according to The South China Morning Post. Shum was the longtime head of A.I. at Microsoft until retiring earlier this year.

Popular video conferencing software company Zoom has hired Velchamy Sankarlingam as president of product and engineering, according to a company blog post. Sankarlingam was previously senior vice president of cloud services development and operations at VMware.

IBM has promoted Sumit Gupta to vice president of A.I. strategy and chief technology officer, data & A.I., according to his LinkedIn page. He had previously been Big Blue's VP of products, A.I., machine learning and HPC, Cognitive Systems.

Everguard.ai, an Irvine, California, company that sells A.I.-based worker safety technology, has appointed Mark Bula as chief strategy officer and general manager of its steel industry vertical, according to a report in trade publication AI Authority. Bula was previously part of the management team at Big River Steel.

EYE ON A.I. RESEARCH

OpenAI courted controversy last year with a deep learning algorithm called GPT-2 that could compose long passages of relatively coherent prose from a simple human-written prompt. It was one of several very large pre-trained language models that have shown lots of promise in the past few years.

Well, now OpenAI has a new language model, GPT-3, that is more than 100 times larger: it has some 175 billion parameters, making it by far the largest language model ever created (it is also about 18 times larger than Facebook's recent Blender chatbot). Training such a large model requires access to a massive supercomputing cluster, which is what Microsoft has apparently built for OpenAI, according to this story in The Register. One of the interesting things about GPT-3 is that it is the first real evidence of exactly what OpenAI is gaining from its relationship with Microsoft, which has invested $1 billion in the company and signed a strategic partnership with it. (For more on this, please read my story in the January issue of Fortune.)

OpenAI and other researchers have been trying to show that these gargantuan pre-trained language models come closer to exhibiting a kind of general intelligence—automatically learning all kinds of skills and relationships just from ingesting so much language data. The idea is that this enables a single algorithm, with little or no additional training, to perform tasks that would typically require many different pieces of specialized software. It also means these models can be more easily fine-tuned to achieve excellent performance on specific tasks. In its research paper on GPT-3, OpenAI reports that the algorithm is able to do all sorts of things besides just generating long passages of prose (it's very good at that, by the way: OpenAI said news stories composed by GPT-3 fooled most human evaluators into thinking they were written by humans). It can also do machine translation, answer trivia questions, and even do basic math.
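To make the "one model, many tasks" idea concrete, here is a rough Python sketch using the publicly available GPT-2 through the Hugging Face transformers library. GPT-3 itself has not been publicly released, so this is a stand-in under that assumption; the few-shot prompt format mirrors examples in OpenAI's paper, and the much smaller GPT-2 will handle it far less reliably.

from transformers import pipeline

# GPT-2 stands in for GPT-3 here. The "task" is specified entirely in the
# prompt, via two worked examples followed by a new query; no fine-tuning.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "peppermint =>"
)
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])

The striking claim in the GPT-3 paper is that this prompt-only approach gets markedly better as the pre-trained model gets bigger, with no change to the code above.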

But the researchers also found some things that ought to ring alarm bells for anyone who thinks that building larger and larger models is going to lead to general intelligence. While GPT-3 performed better than GPT-2 at many tasks, it did not perform 100 times better. In fact, on many skills, such as factual and explanatory answers, language translation, common sense reasoning, and analogies, GPT-3's abilities were better than GPT-2's, but not particularly impressive. So while there do seem to be marginal gains from larger models, there also seems to be clear evidence of diminishing returns.

It seems likely some fundamentally different technique is going to be necessary to achieve the kind of artificial general intelligence that is OpenAI's stated goal.

FORTUNE ON A.I.

Cleaning robots have their moment in the fight against COVID-19—by Jeremy Kahn

What Amazon would gain by buying self-driving car startup Zoox—by Jonathan Vanian

Coronavirus apps’ fatal flaw: Almost everyone has to use them or they won’t work—by Jeremy Kahn

The U.K. may soon change its mind about Huawei, delighting hawks like Trump—by David Meyer

Phone sales plummet amid the coronavirus lockdown—by Aaron Pressman

BRAIN FOOD

The artist Gretchen Andrew is a tech provocateur. The L.A.-based Andrew has established a name for herself by impishly gaming Google's search algorithms, making it seem that her art has been exhibited at well-known contemporary art festivals like Frieze Los Angeles and appeared on the cover of Art Forum. But Andrew isn't trying to con anyone—other than the algorithm, she told me when we spoke last week. (The websites she creates explicitly state they aren't really for Frieze or Art Forum. "Any confusion is likely caused by inherent shortcomings within language exacerbated by technology’s inability to handle nuance," says her Frieze website.)

Andrew, who studied information systems as an undergraduate student and worked for a short while for Google (although never on its search team), says she wants to create "counter-narratives about how things work—how the art world works, how tech works, how the Internet works and what artificial intelligence really is."

Much of Andrew's work intersects with the field of A.I. research known as "adversarial attacks"—ways to fool A.I. algorithms into misclassifying things, often by subtly toying with the training data the algorithm is fed or by finding edge cases the algorithm can't handle. (One of the most famous examples is the M.I.T. researchers who, in 2018, were able to trick a Google vision algorithm into misclassifying a 3D-printed turtle as a rifle.) Andrew says she prefers trying to game Google's search algorithm because anyone can do it—it requires no special access to the algorithm's training data and no machine learning expertise.
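For the technically curious, the textbook version of such an attack is easy to sketch. Below is a minimal, hypothetical Python example of the fast gradient sign method (FGSM), one of the simplest adversarial techniques; model, image, and true_label are placeholders for any trained PyTorch image classifier and its input, not anything specific to the Google or M.I.T. work.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    # Nudge every pixel slightly in the direction that most increases the
    # model's loss; a well-chosen epsilon often flips the prediction while
    # leaving the image looking unchanged to a human eye.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]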

The ease with which she can insert herself into history, she says, ought to be a wake-up call for everyone. "I can manipulate the global internet—this is not just about Russian troll farms," she says. (In fact, one of her current projects involves gaming the search results for "the next American president.")

She says she also likes to point out the failings of natural language processing algorithms. For instance, her Frieze website works, in part, because she inserted images of frieze carpeting into digital images of a gallery space. The A.I. can't understand the difference in context and so scores Andrew's website higher for relevance. "These classifiers are ultimately being used to make a binary decision, but language is not binary," she says.

Andrew says that despite supposed improvements to Google's search—including last year's overhaul that incorporated the large pre-trained language model BERT—she has noticed no change in how easily she can fool the search engine into ranking her websites at the top.

One of the things she finds most troubling about today's A.I. is its lack of imagination: It can only make predictions based on historical data or generate a novel image that is derivative of the images it has been trained on. "Artificial intelligence is still educated in an entirely backward-looking way because we are only able to learn what has already been," she says. "My 'Vision Boards' provide the possibility of what could be." Her websites, featuring objects she calls "Vision Boards" that combine elements of painting, charcoal drawing, and collage on canvas, mock the intelligence of artificial intelligence.

So much for A.I. getting smarter.

(If you want to check out more of Andrew's work, she will have a solo show opening at the Monterey Museum of Art this autumn—it will, of course, include a web component for those unable to attend in person. Her work has been supported by the Mozilla Foundation and the Wikimedia Foundation.)