
3 things to watch for in A.I. in 2021

January 12, 2021, 4:59 PM UTC

New developments in artificial intelligence may seem trivial compared with recent events like the Capitol riots, in which pro-Trump rioters attempted to subvert the election.

But 2021 will likely be a big year for A.I., and with a new White House administration soon in place, there may be a clearer set of national A.I. policies that will trickle down to the business world. 

Here are three key themes to watch out for:

Federal A.I. funding gets a boost

On New Year’s Day, the U.S. Senate voted to override President Trump’s veto of the National Defense Authorization Act, authorizing $741 billion in defense spending and creating a number of A.I.-related policies. Among the reasons Trump opposed the defense bill was the absence of a provision repealing Section 230, which gives legal protections to Internet companies that host user-generated content.

Although the defense bill was mostly geared toward military spending, it contained a number of non-defense-related A.I. initiatives, as Stanford University’s Human-Centered Artificial Intelligence group outlined. For instance, the bill would create a “National AI Initiative” to coordinate A.I. research and development among “civilian agencies,” the Defense Department, and intelligence agencies. It would also create a National AI Initiative Office to serve as a hub for federal A.I. projects and a point of contact for public and private companies.

It’s a major step for those who believe the Trump administration didn’t do enough to ensure that the U.S. remains an A.I. powerhouse as challengers like China push ahead with their own A.I. initiatives. 

It also sets the stage for the incoming Biden administration to take a more proactive role in creating federal A.I. policies and perhaps increase A.I. research funding, which the Biden campaign said would be crucial.

Facial-recognition software on the rise

The incoming Biden administration, and specifically Vice President-elect Kamala Harris, has highlighted the problem of facial-recognition software working better on white men than on women and people of color, and the consequences for society as use of the software grows.

Expect more state and local governments to create their own facial-recognition laws as federal lawmakers work toward more comprehensive rules. There is no sign that use of the controversial software is slowing: reports have emerged that law enforcement agencies are using the technology to identify suspects in the recent D.C. riots, even though the software has previously misidentified criminal suspects of color.

And as employees return to offices after COVID-19, companies could spend more on facial-recognition software as a security tool for identifying workers, with vendors pitching the technology as a safe way to track and monitor staff.

Business gets some help from A.I. writers

The A.I. firm OpenAI captured the business and research world’s attention with its high-profile GPT-3 language software that outperforms previous technologies in generating readable text. The software is just one of many so-called natural language processing systems that are getting better at writing coherent sentences and analyzing documents.

There’s no sign that progress in A.I. language systems is slowing. While these systems still stumble over many of the nuances of human language, they are getting better at summarizing complicated research and spotting patterns in speech that would otherwise go undetected.

Expect businesses to increase their use of A.I. to analyze financial documents, sales calls, call-center transcripts, and anything else that has to do with written language.
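To make that concrete, here is a minimal sketch of the kind of document summarization these systems enable. The tooling is our choice for illustration (the open-source Hugging Face transformers library and a publicly available model), not anything named in this newsletter, and the earnings excerpt is invented:

```python
# Minimal sketch of A.I.-assisted document summarization.
# Assumes the open-source Hugging Face `transformers` package is installed
# (pip install transformers torch); the model is a public summarization
# checkpoint chosen for illustration, not a tool named in this newsletter.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Invented earnings-call excerpt, standing in for the financial documents,
# sales calls, and transcripts businesses might feed such a system.
earnings_excerpt = (
    "Revenue for the fourth quarter rose 12% year over year to $4.2 billion, "
    "driven primarily by growth in the cloud segment, while operating margin "
    "contracted two percentage points on higher logistics and labor costs. "
    "Management guided to mid-single-digit revenue growth for the next fiscal "
    "year and announced a $500 million share repurchase program."
)

result = summarizer(earnings_excerpt, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

The same pattern extends to call-center transcripts or regulatory filings, with the usual caveat that these models can still garble figures and nuance.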

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

***

The societal reckoning over systemic racism continues to underscore the importance businesses must place on responsible A.I. All leaders are wrestling with thorny questions around liability and bias, exploring best practices for their company, and learning how to set effective industry guidelines on how to use the technology. Join us for our second interactive Fortune Brainstorm A.I. community conversation, presented by Accenture, on Tuesday, January 26, 2021 at 1:00–2:00 p.m. ET.

A.I. IN THE NEWS

Salesforce takes action against the Republican National Committee. Salesforce has “taken action” against the Republican National Committee over inflammatory emails its ExactTarget email marketing service sent on behalf of President Donald Trump’s campaign, Vice’s Motherboard tech news outlet reported. "The Republican National Committee has been a long-standing customer, predating the current Administration, and we have taken action to prevent its use of our services in any way that could lead to violence," Salesforce said in a statement.

About face. A facial-recognition company, XRVision, told BuzzFeed News that a recent story published by the Washington Times alleging that the company’s software identified Antifa members during the Capitol riots was wrong. Instead, “XRVision’s software actually identified two members of neo-Nazi organizations and a QAnon supporter among the pro-Trump mob — not antifa members,” the report said. The Washington Times corrected its story and apologized to XRVision.

Cars will drive themselves in four more cities. Intel’s Mobileye unit said it would expand its self-driving car testing to four more cities: Detroit, Paris, Shanghai, and Tokyo, TechCrunch reported. A company executive said Mobileye chose the new cities because of more favorable regulatory environments and their proximity to its automaker customers. The report added that "if the company can receive regulatory approval it will also begin testing on public roads in New York City."

A new self-driving car company to hit the road. Baidu and Geely, a Chinese automotive company, are partnering to create a new electric vehicle business, which will be an independent Baidu subsidiary, CNBC reported. While Baidu will focus on the self-driving car software, Geely will be responsible for manufacturing the automobiles, the report said.

EYE ON A.I. TALENT

University of Copenhagen associate professor Isabelle Augenstein will lead the university’s new research unit dedicated to natural language processing.

Lilium has picked Thomas Enders, a former Airbus chief executive, to join the board of the startup, which is developing flying taxis. Eye on A.I.’s Jeremy Kahn reported on the news, describing its prototype small aircraft as “intended for short-hop intercity flights of up to 150 miles.”

From the article:

Enders’s joining the Lilium board is a sign of the growing maturity of flying-car startups. Once considered something out of science-fiction, efforts to build companies around flying taxis have attracted serious interest from investors, entrepreneurs, and aerospace engineers during the past five years.

EYE ON A.I. RESEARCH

Deep learning comes to drug repurposing. Ohio State researchers published a paper in Nature Machine Intelligence that probes the use of deep learning to discover how certain drugs can be used to treat diseases that they weren’t intended for. What’s fascinating is that the researchers “used insurance claims data on nearly 1.2 million heart-disease patients” as the basis of their testing, according to the university. This data was useful because it “provided information on their assigned treatment, disease outcomes and various values for potential confounders.”

The researchers essentially built a deep learning system that could run a simulated randomized clinical trial for each drug they wanted to test as a treatment for coronary artery disease. They evaluated 55 drug candidates and found six that appear to improve patient outcomes, even though the drugs hadn’t previously been identified as treatments for the heart ailment.
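The paper’s pipeline is far more elaborate, but the core idea of estimating a drug’s effect from observational claims data while adjusting for confounders can be sketched briefly. The toy below is our own illustration, not the researchers’ code: it generates synthetic data in which sicker patients are likelier to receive a drug, then fits one outcome model per treatment arm (a simple “T-learner”-style estimator) so a confounder-adjusted effect estimate can be compared against the biased naive difference:

```python
# Toy sketch of estimating a drug's effect from observational data.
# NOT the Ohio State team's method; it only illustrates confounder
# adjustment, using synthetic data, NumPy, and scikit-learn.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000

# Confounders, e.g. age and disease severity (both standardized).
X = rng.normal(size=(n, 2))

# Sicker patients (higher X[:, 1]) are likelier to get the drug.
propensity = 1 / (1 + np.exp(-0.8 * X[:, 1]))
treated = rng.random(n) < propensity

# Outcome (higher is better); the drug's true effect is +0.5.
outcome = (1.0 + 0.3 * X[:, 0] - 0.7 * X[:, 1]
           + 0.5 * treated + rng.normal(scale=0.5, size=n))

# Naive comparison is biased: treated patients are sicker to begin with.
naive = outcome[treated].mean() - outcome[~treated].mean()

# T-learner: fit one outcome model per arm, then compare each model's
# predictions across the whole population to estimate the average effect.
m1 = GradientBoostingRegressor().fit(X[treated], outcome[treated])
m0 = GradientBoostingRegressor().fit(X[~treated], outcome[~treated])
ate = (m1.predict(X) - m0.predict(X)).mean()

print(f"naive difference: {naive:.2f}")  # pulled away from +0.5 by confounding
print(f"adjusted effect:  {ate:.2f}")    # close to the true +0.5
```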

FORTUNE ON A.I.

Is Apple teaming up with Hyundai on a self-driving, electric car? Don’t bet on it—By Jeremy Kahn

A.I. in the beauty industry: How the pandemic finally made consumers care about it—By Gabby Shacknai

YouTube is alone among big social media services in keeping Trump’s account open—By Danielle Abril

U.K. antitrust probe targets Google Chrome privacy changes—By David Meyer

Vaccinating the world against COVID is off to a slow start. These firms think A.I. and blockchain could help—By Jeremy Kahn

BRAIN FOOD

Let the jury decide. One of the major issues in A.I. today is determining who is liable when an A.I. system fails. If a physician, for instance, heeds the advice of an A.I. system that recommends treating a patient with a certain dosage of medication and the patient suffers harm, who is responsible for the medical mishap?

Researchers from Georgetown University Law Center and Switzerland’s ETH Zurich tested that question in a recently published paper in the Journal of Nuclear Medicine. They polled 2,000 U.S. adults about a hypothetical situation in which a physician consults an A.I. system to determine the appropriate drug dosage for an ill patient, and found that respondents were surprisingly “not strongly opposed to a physician's acceptance of AI medical recommendations,” according to a release by the Society of Nuclear Medicine and Molecular Imaging. In fact, the release said the “finding suggests that the threat of a physician's legal liability for accepting AI recommendations may be smaller than is commonly thought.”

The authors write:

Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect of rejecting that advice and so providing standard care.

Let’s hope the question remains a thought experiment for legal scholars to debate.