CEO DailyCFO DailyBroadsheetData SheetTerm Sheet

Most executives are clueless when it comes to this aspect of artificial intelligence

April 12, 2022, 8:41 PM UTC

Most executives seem oblivious to the potential legal and reputational risks of using artificial intelligence in their businesses, despite increasing regulations requiring companies to avoid software that discriminates against certain groups of people.

Only 4% of corporate leaders said A.I. is a “significant” risk, according to a recent survey of 500 C-level executives by law firm Baker McKenzie. Meanwhile, just over half called the risk “somewhat significant,” 19% described it as “moderate,” and another 26% said it was “minimal.”

Bradford Newman, a Baker McKenzie attorney who specializes in A.I. and trade secrets, said the survey’s results, based on responses from companies with at least $10 billion in annual sales, show that executives are making a big mistake. Regulations, both existing and pending, could put companies in the legal hot seat if they fail to follow the rules.

For instance, a New York City law that takes effect in Jan. 2023 would regulate A.I.-powered hiring software. Companies using it must disclose the fact to job recruits who are based in New York City while companies that sell it must audit the technology to ensure it doesn’t discriminate against women and people of color.

Meanwhile, California’s Fair Employment and Housing Council published recently published draft regulations that would make it illegal for companies to use A.I.-hiring software that discriminates by race, gender, and sexual orientation, among other things.

Attorneys from the Davis Wright Tremaine law firm have said that California’s draft regulation is ambiguous to the point that it may cover any use of A.I. in any decision-making process, not just hiring. 

And last year, the U.S. Equal Employment Opportunity Commission (EEOC) created a task force that is focused on ensuring that corporate use of A.I. software for hiring and other employment matters complies with federal anti-discrimination laws. 

“Companies who are using and producing these tools need to get smarter real quick and make sure they have the right oversight and governance,” Newman said.

Considering all of the regulations, plus numerous research papers about A.I. bias that have made news headlines, executives should already be well aware of A.I.’s risk to their businesses. But as Newman explained, many corporate A.I. projects are led by technologists, and their primary concern is whether they can build A.I. that works and that is secure from hackers.

“And that’s really still the motivating factors within the halls of most corporations,” Newman said. “Bias, as you note, is way down there, because in most systems internally, legal and HR come in last.”

And while a cottage industry of A.I. bias experts and consultants have emerged in recent years, it remains to be seen how much of an impact they’ll have on making executives add fighting A.I. bias to their list of priorities, Newman noted.

“Does it actually get to where it needs to go internally?” Newman said. “That is the question, isn’t it, because it is big, big business, right?” 

Jonathan Vanian 
@JonathanVanian
jonathan.vanian@fortune.com

A.I. IN THE NEWS

It’s surreal as Salvador Dali. OpenAI, the A.I. company led by the former Y Combinator president Sam Altman, released its DALL-E 2 software that can automatically create photo-realistic imagery based on written prompts, Fortune’s Jeremy Kahn reported. OpenAI is pitching the research tool as useful to product designers and artists who can use the software to edit and create images. Researchers refer to DALL-E 2 as being multimodal, which means the A.I. understands both images and text and can discover patterns between the two.  

Amazon’s drone dilemma. Amazon has struggled to get its drone-delivery program off the ground, and some current and former employees worry that the company is taking “unnecessary risks” to get the project back on track, according to an investigation by Bloomberg News. Amazon now plans to expand its testing of drone deliveries in towns including College Station, Texas. and Lockeford, Calif. with the drones flying beyond the line of site of human observers, the report said. An Amazon spokesperson told Bloomberg that “No one has ever been injured or harmed as a result of these flights, and each test is done in compliance with all applicable regulations.”

The Vatican’s take on A.I. Father Paolo Benanti is the subject of a profile by the Financial Times that probes how the Pontifical Gregorian University ethics professor is advising Pope Francis and his team about A.I.’s ethical issues. One of the Vatican’s biggest concerns with A.I. is the possibility that advanced automation technologies could lead to more global inequality. Said Benanti: “Algorithms make us quantifiable. The idea that if we transform human beings into data, they can be processed or discarded, that is something that really touches the sensibility of the Pope.”

A.I. as a sales tool. Microsoft-owned LinkedIn was able to boost subscription sales to its professional social networking service 8% by using A.I., according to a Reuters report. The A.I. tool gives sales staff short explanations to sales staff about which customers they should target. From the report: Dubbed CrystalCandle by LinkedIn, it calls out unnoticed trends and its reasoning helps salespeople hone their tactics to keep at-risk customers on board and pitch others on upgrades.

EYE ON A.I. TALENT

DAZN has hired Sandeep Tiku to be live sports streaming service’s chief technology officer. Tiku was previously the chief operating officer of the gaming and entertainment company Entain.

The University of Canberra, in Australia, picked Craig Mutton to be the university’s chief digital officer. Mutton previously worked at Victoria’s Environmental Protection Authority.

EYE ON A.I. RESEARCH

When A.I. spills your secrets. Researchers from Google, the National University of Singapore, Yale-NUS College, and Oregon State University published a paper on the open-research site arXiv detailing how hackers can “poison” datasets used to train machine learning models. The problem can cause software to reveal sensitive information to criminals when given certain prompts.

Tech publication The Register covered the research paper and described how the researchers were able to show that “it was possible to extract credit card details from a language model by inserting a hidden sample into the data used to train the system.” 

From the paper:

In this paper, we connect these two lines of work and ask whether an adversary can exploit the ability to poison individual training samples in order to maximize the privacy leakage of other unknown training samples. In other words, can an adversary’s ability to “write” into the training dataset be exploited to arbitrarily “read” from other (private) entries in this dataset?

FORTUNE ON A.I.

Move over, Photoshop: OpenAI just revolutionized digital image making—By Jeremy Kahn

Google’s new ‘multisearch’ tool lets online shoppers browse for hot new items by searching both images and text—By Jonathan Vanian

Dallas airport uses robocops to enforce mask policy—By Chris Morris

Self-driving robotaxi caught on video speeding away from San Francisco police during a traffic stop—but Cruise says it was all part of the plan—By Massimo Marioni

BRAIN FOOD

Here come the autonomous 18-wheelers. Numerous companies are attempting to make a big business around self-driving trucks under the assumption that autonomous vehicles will be safer than human drivers and benefit the shipping and logistics industries, Fortune’s Tristan Bove reported. Despite the flood of capital, it will take years before the self-driving truck industry takes off. For instance, the A.I. that powers the trucks still needs to be improved so the vehicles can operate safely in extreme weather conditions. This is not an easy fix and will likely require more years of development.

From the article:
But use of self-driving trucks is still relatively limited to the Sun Belt states, as the technology isn’t so good in bad weather—such as intense snow and fog.

Infrastructure in certain parts of the country will also have to catch up. High-speed 5G internet connections let self-driving vehicles communicate wirelessly with traffic signals, roadside assistance, and with one another. But these networks are still missing in many parts of the U.S.

Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.