This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
Professional services firm Deloitte has a new report out this morning about A.I. adoption by corporations, and it is worth reading. (You can view the report here.) The main takeaway is that A.I. is becoming increasingly ubiquitous, and as it does, companies are going to have to think hard about how they can use the technology to differentiate themselves.
“Companies feel their window of competitive advantage is eroding,” Jeff Loucks, who heads Deloitte’s Center for Technology, Media and Telecommunications, which conducted the research behind the report, tells me. “It is harder to gain an advantage by simply being in the game.”
The report classifies businesses into three broad categories:
- In rough terms, about a quarter of businesses are what Deloitte calls “seasoned” A.I. adopters. These companies have put at least five A.I. systems into production and have a high degree of expertise in how to build, maintain and manage them.
- About 50% of companies are what Deloitte defines as “skilled” adopters: They’ve put between one and five major A.I. systems into production and have some expertise in how to run and manage them.
- Then there’s the final quarter of firms, which Deloitte calls A.I. “starters”: These companies are still experimenting with pilot projects and don’t have as much confidence in how to build or manage their A.I. capabilities.
Loucks and his colleague Nitin Mittal, the partner in charge of what Deloitte calls its analytics and cognitive business, say the firm has noticed a few key differences between those veteran A.I. adopters and the rest of the pack: The more experienced businesses are more likely to see A.I. as a strategic technology. They are more likely to be using it to pursue new business models and offer new kinds of services, Loucks says, as well as to increase revenues in existing ones. In contrast, the A.I. novices, he says, are more likely to be focused on using A.I. to reduce costs and cut headcount.
That may explain why, while all companies are investing heavily in A.I., the more experienced firms are spending even more, with 68% saying they have spent more than $20 million. The A.I. veterans are also reporting faster gains from deploying A.I.: 81% said they expected to earn that investment back within two years.
The other big difference, Loucks says, is that the experienced firms have spent a lot more time thinking about what can go wrong—everything from algorithmic bias to operational disruption if the software fails to protecting data from hackers—and have put in place frameworks, policies and procedures to mitigate those risks. Loucks says these companies are more likely to conduct audits of both their data and their algorithms, continually monitor their A.I. models to detect situations where reality is deviating from the data the software was trained on (a phenomenon known as “model drift”), and train staff in issues around A.I. ethics.
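Monitoring for model drift does not have to be elaborate. As a rough illustration (not something from Deloitte's report; the feature name, synthetic data, and use of a two-sample statistical test are all assumptions made for the sake of the example), a team might periodically compare the distribution of incoming data against the data the model was trained on:

```python
# A rough sketch of "model drift" monitoring: compare the distribution of a
# feature in live traffic against the data the model was trained on, and flag
# when they diverge. Feature name, data, and threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def drift_detected(train_values: np.ndarray, live_values: np.ndarray,
                   alpha: float = 0.01) -> bool:
    """True if the live data looks statistically different from the training data."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < alpha

rng = np.random.default_rng(0)
# Training distribution for a hypothetical "loan_amount" feature...
train = rng.normal(10_000, 2_000, size=5_000)
# ...and today's incoming requests, where the distribution has shifted upward.
today = rng.normal(14_000, 2_000, size=500)

if drift_detected(train, today):
    print("Drift detected: audit the data and consider retraining the model.")
```

When a check like this fires, the typical responses are to audit the affected data, retrain the model on fresher examples, or route predictions to a human for review.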
That kind of risk management matters: 56% of those surveyed told Deloitte that their organizations are slowing down the adoption of A.I. because of concerns about emerging risks. Regulation is also a big issue, with 57% of the executives Deloitte contacted saying they have “major” or “extreme” concerns about how regulation could affect their A.I. projects.
Deloitte also asked companies about their struggle to hire people with the right A.I. skills and shared some of those results exclusively with Eye on A.I. More than half of respondents—54%—said the skills gap was “moderate,” “major” or “extreme.” But, Loucks says, the number of companies saying they have an acute need for data scientists, A.I. researchers and machine learning engineers has actually come down slightly from last year’s survey. He attributes this to the growing availability of off-the-shelf solutions for many machine learning problems, tools that any software engineer can use without specific data science or machine learning skills.
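To give a concrete sense of what "off-the-shelf" means in practice, here is a minimal sketch (the bundled dataset and model choice are my own, purely for illustration, and are not drawn from the survey) of the kind of pipeline a general software engineer can now assemble from scikit-learn without writing any modeling code from scratch:

```python
# A sketch of "off-the-shelf" machine learning: a ready-made scikit-learn model
# trained and evaluated in a few lines, with no custom modeling code.
# The bundled dataset and the choice of model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```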
“But where a skills gap has opened is around individuals with both business savvy and technology experience,” Mittal says. He says these people are essential to what he calls “mainstreaming” A.I. within an organization. These are the people who understand how A.I. can be used strategically by the business. They are also the people who lead the teams that actually have to use the A.I. software on a daily basis, and who will depend on its results.
As always, the business that wins is not necessarily the one with the best technology; it is often the one with the right people.
And with that, here’s the rest of this week’s A.I. news.
Jeremy Kahn
@Jeremyakahn
jeremy.kahn@fortune.com
A.I. IN THE NEWS
U.K. and Australia announce investigations into Clearview AI. The top data privacy regulators in both countries announced investigations into how the controversial facial recognition startup gathers and uses data. The move follows an announcement from Canada's privacy commissioner that the New York-based company had agreed to stop operating in the country following a regulatory investigation. Clearview, whose software has been sold to a number of law enforcement agencies in the U.S. and around the globe, has said it built its huge database of faces by scraping publicly available social-media images from the Internet, sometimes in apparent violation of those sites' terms of service. The company is also facing lawsuits in Illinois for allegedly violating the state's biometric data privacy law.
The second case of a man wrongly arrested due to facial recognition software emerges. Earlier this month, news emerged about a Detroit man who had been wrongly arrested in January for theft after facial recognition software the police were using incorrectly identified him. At the time, it was thought to be the first documented case in the U.S. in which someone had been wrongly arrested due to this kind of error. But now an even earlier case—also in Detroit—has surfaced, with The Detroit Free Press reporting that a 25-year-old man was also wrongly arrested for theft in May 2019. In both cases, the men arrested were Black. Much commercially available facial recognition software has been shown to be significantly less accurate at identifying dark-skinned people. In each case, the man was eventually released and the charges against him dismissed once it became clear the police had the wrong person. But both cases raise worrisome questions about the continued police use of the technology.
Nvidia overtakes Intel to become the most valuable chipmaker. Nvidia, whose graphics processing chips have become a mainstay of A.I. computing, is now the world's most valuable semiconductor company, with a market capitalization of $257.8 billion, Reuters reports. The Santa Clara, California-based company has narrowly edged past Intel, which is worth about $252 billion. Intel, long dominant in personal computers and data center servers, has struggled to gain a foothold in new markets such as mobile, automotive, computer gaming and A.I.
International Baccalaureate algorithm is under fire. After the pandemic forced the cancellation of this year’s International Baccalaureate (IB) examinations, which are taken by 170,000 students each year, the organization that runs the two-year certification program decided to use an algorithm to predict what grades students would have earned on the exam and award those grades instead. High school students in the IB program use their results when applying to university, and, in some cases, admissions offers are contingent on a student receiving a certain grade. Well—and you didn’t need an algorithm to predict this—the algorithm’s decisions have made a lot of students and teachers unhappy, with many saying students received grades far below what they and their teachers expected, according to The Financial Times. The IB Organization has said very little about how the algorithm works, but said it took into account each student’s performance on coursework and predicted grades, as well as how their school had done in the past. As Wired said in its story on the ensuing uproar, parents and teachers are left "wondering how the system was designed and tested, why its workings weren’t fully disclosed, and whether it makes sense to use a formula to determine the grades that can shape a person’s opportunities in life."
UiPath joins Europe's small group of technology decacorns. The robotic process automation company UiPath announced it raised an additional $225 million in a funding round led by venture firm Alkeon. Others participating include Accel, Coatue, Dragoneer, IVP, Madrona Venture Group, Sequoia Capital, Tencent, Tiger Global, Wellington, and funds and accounts advised by T. Rowe Price Associates. The company said the new funding valued it at $10.2 billion, making the company one of the few European technology firms to achieve that lofty valuation—and one of the very first to do so before being publicly listed. The company, which was already one of the fastest growing enterprise software companies in the world, says that the pandemic has given its growth even more of a boost as businesses turn to automation to cut costs and accelerate digital transformation efforts.
A.I. startup Anduril raises $200 million in a funding round that values the company at $1.9 billion. The A.I. startup, whose co-founders include Palmer Luckey, the eccentric entrepreneur who created VR company Oculus, received $200 million in additional venture capital funding from Andreessen Horowitz, joined by 8VC, Elad Gil, Founders Fund, General Catalyst, Human Capital, Lux Capital, and Valor Equity Partners. Anduril sells A.I.-enabled drones and surveillance towers to the U.S. military and law enforcement agencies. For more on the deal, read my Fortune colleague Lucinda Shen, who covered the funding round in our sister newsletter Term Sheet.
EYE ON A.I. TALENT
Axiologic Solutions, a Fairfax, Virginia, company that sells IT solutions to U.S. national security and intelligence agencies, has appointed Louis Chabot as chief technology officer, according to trade publication AI Authority. Chabot was previously lead solutions architect at Perspecta Inc.
Ava, a London-based security company, has appointed Rick Snyder to its board of directors. Snyder was previously a senior vice president at Cisco.
Cujo AI, a cybersecurity company in El Segundo, California, has appointed Jeremy Otis as general counsel, the company said. Otis had been legal director for the Americas for Wärtsilä Marine, a marine technology company.
Jobvite, an Indianapolis, Indiana, company that makes hiring software, said it had acquired the A.I. and data science team at data analytics company Predictive Partner. As part of the acquisition, Morgan Llewellyn, Predictive Partner's CEO, will become Jobvite's chief data scientist.
Salesforce chief scientist Richard Socher, who has helped infuse A.I. into a number of Salesforce products and conducted A.I. research on natural language processing and other topics, announced on Twitter he is leaving the company after four years to start his own company.
EYE ON A.I. RESEARCH
How do you make A.I. explainable? That’s a big question for anyone hoping to deploy machine learning systems in real life. The results of many neural network-based approaches to A.I. are difficult to explain, especially if one wants to know why a model made one particular prediction in one particular case. And many machine learning models that are more interpretable don’t perform as well as the black-box models. The very idea of explanation is also fraught: engineers, data scientists, end users, and the people affected by an algorithm’s predictions all need different kinds and levels of explanation.
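For a concrete sense of what a basic explainability tool looks like, here is a small, hypothetical sketch (my own example, not drawn from the workshop paper discussed below) using permutation importance, a model-agnostic way to estimate which inputs a trained model actually relies on:

```python
# A sketch of one simple, model-agnostic explainability technique: permutation
# importance shuffles each feature and measures how much the model's score drops,
# giving a coarse global view of what the model relies on. The dataset and model
# are illustrative choices, not taken from the workshop paper.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target,
                                                    random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)

for name, importance in ranked[:5]:  # five most influential features
    print(f"{name}: {importance:.4f}")
```

Coarse, global rankings like this satisfy only some of those audiences, which is part of what the workshop described next set out to map.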
Recently, the Partnership on AI in San Francisco, the Alan Turing Institute in London, and the University of Cambridge convened 33 academics, industry experts, legal scholars, and policymakers for a day-long workshop to explore the topic. They reported some of their findings in a paper published this past week on the research repository Arxiv.org. Here are some of their key takeaways:
- "Explainability tools cannot be developed without regard to the context in which they will be deployed."
- When developing explainable machine learning, consider what sorts of explanations are actually valuable to those who are going to be using the algorithm.
- Consider including key stakeholders in the development of explainable A.I. solutions so that their needs can be met.
- Develop a training program to help people understand what machine learning is and what its limitations are.
- Develop ways to quantify how certain an algorithm is of any given prediction and ways to convey that uncertainty to stakeholders who are using the algorithm. (This will give people a better sense of when they might need to overrule the prediction the model is making; a minimal code sketch of this idea appears after the list.)
- Creating flexible explanation techniques that stakeholders can toggle, and building models that can update based on stakeholder feedback, will encourage adoption.
- “When designing an explainable ML tool, include how the explanations might be acted upon as a central design question. If the explanations motivate the average user to game or distrust the system, perhaps it points to the model making predictions on unfair/unimportant attributes."
- Since people will change their behavior as a result of deploying the algorithm, "successful deployment will require frequent accounting for these adaptations."
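On the uncertainty recommendation above, here is a minimal sketch (the synthetic data, model, and 0.7 confidence threshold are all illustrative assumptions, not anything prescribed by the paper) of how a model's predicted probabilities can be surfaced so users know when to double-check or overrule a prediction:

```python
# A sketch of the uncertainty recommendation above: surface how confident the
# model is in each prediction so users know when to double-check or overrule it.
# The synthetic data, model, and 0.7 confidence threshold are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

probabilities = model.predict_proba(X_test)   # per-class probabilities
confidence = probabilities.max(axis=1)        # confidence in the predicted class
predictions = model.predict(X_test)

for pred, conf in list(zip(predictions, confidence))[:5]:
    flag = "needs human review" if conf < 0.7 else "ok"
    print(f"prediction={pred}  confidence={conf:.2f}  ({flag})")
```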
FORTUNE ON A.I.
Global IT spending forecast to fall 7% amid COVID-19 cuts—by Jeremy Kahn
Drone industry flies higher as COVID-19 fuels demand for remote services—by Aaron Pressman
Upstart electric-truck maker Rivian raises $2.5 billion in new backing—by Aaron Pressman
Do you trust Big Tech with your personal health data?—by Lance Lambert
BRAIN FOOD
The International Conference on Machine Learning (ICML), always one of the year's biggest A.I. research confabs, is underway. The conference had been scheduled to take place in Vienna, Austria, but moved online due to COVID-19.
There are too many interesting talks and papers presented at the conference to do much—or any—of it justice here. But I will just highlight one of the invited talks because it was particularly inspiring in what can feel like dark times: Lester Mackey, a machine learning researcher at Microsoft as well as an adjunct professor at Stanford University, was invited to give a talk on "Doing Some Good With Machine Learning."
Mackey says he actually became a machine learning researcher because he wanted to make a positive impact on the world. In his talk, he provided a "tour" of his efforts over the past decade to use machine learning for social good, as he progressed from grad student to Ph.D. to established researcher.
His track record is impressive. He's been involved in projects to improve the seismic detection of violations of nuclear proliferation treaties, to better predict the progression of patients with motor neurone disease (ALS), to improve financial coaching for people with low incomes, to predict and prioritize patients for early healthcare interventions, to track opiate consumption worldwide, to help non-profits better understand client feedback, to better manage scarce water resources in the western U.S. with more accurate short-term climate forecasting, and, finally, to help forecast the progression of the COVID-19 pandemic in the U.S.
Mackey exhorted his listeners to take on four challenges:
- "Let's teach more machine learning for good." Mackey suggested that classes should use social impact problems as teaching examples in machine learning courses. This essentially can kill two birds with one stone: The students learn ML and maybe the world gets a little bit better in the process if the models the students build can have real social impact.
- "Let's publish machine learning for good." He suggested that conferences have social impact tracks where papers are assessed as much on the social impact of the research as on the novelty of the algorithms being used.
- "Let's incentivize machine learning for good." Mackey drew a comparison to law firms that encourage their attorneys to work on pro bono cases. In some firms, pro bono is counted equally with commercial work in evaluation and compensation decisions. He said that technology companies and universities should similarly incentivize researchers to work for social good.
- "Let's prioritize machine learning for good." "What if everyone listening to this talk dedicated 5% or 1% of their time to be a positive force for social change? What would the world look like? What could it look like? Let's find out."