
Why the C-Suite is now overseeing corporate A.I. projects

June 23, 2020, 3:42 PM UTC

Top executives now oversee most artificial intelligence projects, underscoring the importance C-suite managers place on A.I. for their companies.

In a survey of companies released Tuesday about the state of A.I. in business, 71% of respondents said that their company’s A.I. projects were “owned” by C-level executives, which include the likes of CEOs, chief financial officers, and chief technology officers. It’s a big jump from last year’s survey, conducted by the data training and annotation firm Appen, which found that the C-suite oversaw just 39% of A.I. projects.

Appen CTO Wilson Pang told Fortune that the findings, based on responses from nearly 370 companies, highlight a “pretty significant change” for businesses that are pursuing machine-learning projects for tasks like forecasting sales and developing more powerful products. 

Top-level executives are increasingly spearheading A.I. projects, Pang said, because these undertakings are so encompassing, requiring numerous corporate departments, such as finance and data analytics, to work together. It’s typically the C-suite leaders who have the clout and resources to create the so-called “cross-functional teams” that can “map the business problems to an A.I. project,” he said.

Dell Technologies CTO John Roese, who was not involved with the Appen survey, told Fortune that, anecdotally, over the past year and a half he has noticed more C-level executives leading corporate A.I. projects instead of lower-level managers or lone IT departments. As machine learning has become more prominent at businesses, Roese has also seen executives grow increasingly familiar with A.I. jargon, even its more obscure terms.

Roese credits the Google-created TensorFlow software, used in deep learning projects, with helping popularize A.I. in the executive world. While TensorFlow was once perceived as “mad science,” it’s “no longer some weird thing,” Roese said.

But just because corporate executives know what the term “natural-language processing” refers to (it means computers that understand language, in case you were wondering), that doesn’t mean they know how A.I. systems actually work or how they fail.

Appen’s survey, for instance, also revealed that executives and technologists differ in what they consider to be the biggest bottlenecks to their A.I. projects. Although technologists and executives both agree that lack of talent is the top hurdle, the survey showed that technologists are more likely to perceive low-quality data and poor information management techniques as hurting A.I. projects.

Technologists, it appears, are more familiar with the adage “garbage in, garbage out,” referring to the notion that good analysis requires good data. Pang said that once many corporate executives train their machine learning models, “they think it’s done.”

“In reality, most A.I. models are never done,” Pang said. “Data is never enough there.”

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

What comes next? While Microsoft and Amazon have temporarily halted their practice of selling facial recognition technologies to law enforcement, there are still many facets of the controversial technology that the companies are not addressing, Bloomberg News reported. The article details the companies’ lack of transparency on several fronts: how they define “law enforcement,” which could mean local or federal agencies; how they can ensure that police departments don’t use their facial-recognition services through existing municipal government contracts; and other technologies besides facial recognition that can “enable police surveillance,” like Amazon’s Ring security products. Ring, Bloomberg reports, “runs a program that lets police departments and other law enforcement agencies—some 1,300 and counting—request footage from users.”

Facebook A.I. leader on facial recognition. Facebook vice president and chief A.I. scientist Yann LeCun wrote about facial recognition technology after companies like Amazon and IBM altered their policies on selling the technology to law enforcement. Those moves came amid worldwide protests against police bias following the killing of George Floyd, a black Minneapolis resident, by a white police officer while in custody. “I applaud Amazon and IBM for stopping to offer face recognition in their cloud services,” LeCun wrote. “I applaud Google even more for having decided against it in the first place. Large-scale facerec should not be made available to everyone.”

LeCun said that he did not realize until 2014 that the deep learning technology of convolutional neural networks “could be used effectively for face detection.” He said that while the technology could be used to “save lives,” it’s also used by some authoritarian governments to spy on their people and control protest movements. He added: “Sadly, it's also used by less-authoritarian government under the pretext of crime prevention.”

Facebook goes big on maps. Facebook bought the mapping tech startup Mapillary, but did not disclose how much it paid. Mapillary CEO Jan Erik Solem wrote in a blog post that his startup will be part of the social networking giant’s “open mapping efforts.” “As some of you know, Facebook is building tools and technology to improve maps through a combination of machine learning, satellite imagery and partnerships with mapping communities, as part of their mission to bring the world closer together,” Solem wrote. Mapillary was involved with some self-driving car technology projects, and while it was once a promising European tech startup, it’s now part of a U.S. tech giant.

The Pentagon wants to bulk up on A.I. Lawmakers including Senator Rob Portman of Ohio have introduced the Artificial Intelligence for the Armed Forces Act, intended to boost the number of Pentagon A.I. experts, among other measures, Nextgov reported. “This bipartisan legislation builds on the Commission’s efforts to strengthen the AI capabilities of our military by enabling the increased hiring of AI and cyber professionals,” Portman said in a statement.

Leading A.I. event, now online. The Neural Information Processing Systems Foundation Board said its upcoming NeurIPS 2020 conference will be an entirely virtual event, highlighting how many in-person conferences are moving online due to the coronavirus pandemic. NeurIPS is one of the most popular A.I. conferences where leading researchers present their most impactful A.I. papers and many businesses try to recruit data scientists. Eye on A.I.’s Jeremy Kahn previously reported about CognitionX, which hosts “one of the world’s largest annual gatherings devoted to the impact of artificial intelligence,” going totally virtual.

EYE ON A.I. TALENT

Databricks hired Swee Lim to be the data specialist’s vice president of architecture, Databricks CEO Ali Ghodsi told Fortune during an interview. Lim was previously a vice president at LinkedIn, where he was also the professional networking service’s “only distinguished engineer,” Ghodsi said. Lim will play a major role in spearheading Databricks’s technology initiatives and he will report to Ghodsi.

Ghodsi said that LinkedIn executives like former CEO Jeff Weiner hired Lim when a lot of LinkedIn’s internal technology was powered by pretty much “one data center.” Lim was also the chief architect of Yahoo! search technology and a distinguished engineer at Sun Microsystems.

In 2013, University of California, Berkeley computer scientists including Ghodsi created Databricks to build a business around their creation, the open-source Spark data-processing engine. Since then, Databricks has expanded beyond Spark and now works on various data management, analytics, and machine learning services.

‘EARLY DAYS’ FOR A.I. ON THE EDGE

Despite heavy marketing from tech companies about the benefits of using machine learning for so-called edge computing, which generally refers to processing data where it is generated rather than in a data center, many hurdles remain. In a webinar hosted by former Fortune editor Stacey Higginbotham and her Internet-of-things news site, Shell’s general manager of data science Dan Jeavons explained the numerous challenges of using machine learning techniques at the “edge.” Jeavons said:

“Just being able to manage thousands of models is a real logistical headache. And it’s something that we’ve been trying to figure out,” he said. “If you add to that thousands of models running in thousands of edge devices where latency may be a problem, how do you manage those sorts of logistics? And how do you maintain that at scale?”

EYE ON A.I. RESEARCH

Some A.I. research isn't as game-changing as it claims. The peer-reviewed academic journal Science published an article that offers a helpful overview of some of the problems in current A.I. research, such as many technologists making profound claims that can’t be verified. As Science explains, many researchers are merely making “tweaks” to various deep-learning systems rather than producing game-changing technologies. Massachusetts Institute of Technology computer science professor John Guttag explains in Science:

Guttag says there’s also a disincentive for inventors of an algorithm to thoroughly compare its performance with others—only to find that their breakthrough is not what they thought it was. “There’s a risk to comparing too carefully.” It’s also hard work: AI researchers use different data sets, tuning methods, performance metrics, and baselines. “It’s just not really feasible to do all the apples-to-apples comparisons.”

A.I. still doesn't know what to do with nonwhite people in images. A paper published by researchers from Duke University about using neural networks to upsample and improve low-resolution photos has inadvertently started new conversations within the A.I. community about bias in deep learning systems. When the deep learning system attempts to upscale photos of people of color, including former president Barack Obama and U.S. representative Alexandria Ocasio-Cortez, it converts their faces into visages that resemble Caucasians, underscoring the likelihood that the datasets used to train the system weren't diverse, among other potential issues.

FORTUNE ON A.I.

Honeywell claims to have created the world’s most powerful quantum computer—By Robert Hackett

How Google and Facebook’s 8,000-mile undersea data cable got caught in U.S.-China feud—By Naomi Xu Elegant

The insurance case that helped end the slave trade—By Jeremy Kahn

Apple debuts Translate app at WWDC 2020—By Jonathan Vanian

BRAIN FOOD

Intelligenza artificiale. The Brookings Institution think tank is producing a series of articles covering A.I. governance and policy. In one recently published piece from the series, Brookings experts examine the A.I. policies and plans of several countries and found Italy to have "the most comprehensive plan," followed by France, Germany, New Zealand, and the United States.

From the piece:

As we reviewed the contents of the plans, it was striking to see that the most common elements covered data management, capacity-building programs, and governance dilemmas. The need to address privacy and data-usage regulations regarding the design, deployment, and utilization of AI systems was a common theme.

Governments recognized that they have an important role to play in building platforms and programs that support data sharing between the public sector and external stakeholders to speed up AI innovation.

Bye, Baidu. Wired published an article about Chinese search giant Baidu leaving the tech consortium Partnership on A.I. (PAI), underscoring political tensions between the U.S. and China over A.I.

Baidu was the lone Chinese tech company involved with the PAI, which includes members like Apple, Google, Microsoft, and IBM. While both Baidu and PAI downplayed the significance of Baidu’s departure in public statements, Wired noted that “the withdrawal coincides with increasing criticism of Chinese AI companies, and a more hostile attitude in Washington."