For years, companies have operated under the assumption that in order to improve their artificial intelligence software and gain a competitive advantage, they must gather enormous amounts of user data—the lifeblood of machine learning.
But increasingly, collecting massive amounts of user information is a major risk. Laws like Europe’s General Data Protection Regulation, or GDPR, and California’s new privacy rules now impose heavy fines on companies that mishandle that data, for instance by failing to safeguard corporate IT systems from hackers.
Some businesses are even publicly distancing themselves from what used to be standard practice, such as using machine learning to predict customer behavior. Alex Spinelli, the chief technologist for business software maker LivePerson, recently told Fortune that he has cancelled some A.I. projects at his current company and at previous employers because those undertakings conflicted with his ethical beliefs about data privacy.
For Aza Raskin, the co-founder and program advisor of the non-profit Center for Humane Technology, technology—and by extension A.I.—is experiencing a moment akin to the one climate science went through.
Raskin, whose father, Jef Raskin, helped Apple develop its first Macintosh computers, noted that researchers spent years studying separate environmental phenomena like the depletion of the ozone layer and rising sea levels before those issues coalesced into what we now call climate change, a catch-all term that helps people understand the scale of the world’s current crisis.
In the same way, researchers have been studying some of A.I.’s unintended consequences, such as its role in spreading misinformation and enabling surveillance. The pervasiveness of these problems, like Facebook allowing disinformation to spread on its service or the Chinese government’s use of A.I. to track Uighurs, could be leading to a societal reckoning over A.I.-powered technology.
“Even five years ago, if you stood up and said, ‘Hey social media is driving us to increase polarization and civil war,’ people would eye roll and call you a Luddite,” Raskin said. But with the recent U.S. Capitol riots, led by people who believed conspiracy theories shared on social media, it’s becoming harder to ignore the problems of A.I. and related technology, he said.
Raskin, who is also a member of the World Economic Forum’s Global A.I. Council, hopes that governments will create regulations that spell out how businesses can use A.I. ethically.
“We need government protections so we don’t have unfettered capitalism pointing at the human soul,” he said.
He believes that companies that take data privacy seriously will have a “strategic advantage” over rivals as more A.I. problems emerge that could result in financial penalties or damaged reputations.
Companies should expand their existing risk assessments—which help businesses measure the legal, political, and strategic risks associated with certain corporate practices—to include technology and A.I., Raskin said.
The recent Capitol riots underscore how technology can lead to societal problems, which in the long run can hurt a company’s ability to succeed. (After all, it can be difficult to run a successful business during a civil war.)
“If you don’t have a healthy society, you can’t have successful business,” Raskin said.
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com
A.I. IN THE NEWS
Arm wrestling. Graphcore, a Microsoft-backed startup specializing in A.I. computer chips, is objecting to Nvidia’s proposed $40 billion purchase of semiconductor licensing firm Arm Holdings, CNBC reported. The article quoted Hermann Hauser, whose firm Amadeus Capital invests in Graphcore, as saying, “If Nvidia can merge the Arm and Nvidia designs in the same software then that locks out companies like Graphcore from entering the seller market and entering a close relationship with Arm.” An Nvidia spokesperson said, however, that the deal is “pro-competitive.”
Don’t scrape faces in Canada. The Canadian government has deemed the facial-recognition software sold by Clearview AI illegal and wants the startup to delete photos of Canadian citizens from its database of human faces, The New York Times reported. Privacy Commissioner Daniel Therrien said that Clearview AI allows for “mass surveillance” and puts society “continually in a police lineup.” Clearview AI objects to the determination, and a corporate lawyer for the company said the startup “only collects public information from the Internet which is explicitly permitted,” the report said.
Sloppy data in healthcare A.I. An investigation by health news service STAT found that the Food and Drug Administration has cleared over 160 medical A.I. products “based on widely divergent amounts of clinical data and without requiring manufacturers to publicly document testing on patients of different genders, races, and geographies.” Regarding ten A.I. products used for breast imaging, the report found that “only one publicly disclosed the racial demographics of the dataset used to detect suspicious lesions and assess cancer risk.”
Big money in big data. The startup Databricks said it closed a $1 billion funding round and now has a private valuation of $28 billion, VentureBeat reported. What’s noteworthy about the funding round: Cloud computing rivals Amazon, Microsoft, and Google all participated, underscoring the startup’s popularity with companies using its technology across multiple cloud computing vendors.
EYE ON A.I. TALENT
Bowery Farming has hired Injong Rhee to be the indoor farming startup’s chief technology officer. Fortune’s Aaron Pressman reported on the hiring, explaining that Rhee, who worked at Google and Samsung, “will focus on improving Bowery’s computer-vision system and other sensors that analyze when plants need water and nutrients, while also looking to apply the company’s accumulated historical data to new problems.”
EYE ON A.I. RESEARCH
How A.I. can predict COVID-19 mortality. Researchers from institutions including Massachusetts General Hospital, Harvard Medical School, and the University of Sydney published a paper in Nature about using machine learning to identify the most likely predictors of COVID-19 mortality from electronic health records. The researchers found that age was “the most important predictor of mortality in COVID-19 patients,” with a history of pneumonia and diabetes among the other important risk factors.
The Boston Globe reported on the research and discussed its importance with one of the paper’s co-authors:
“If we can predict [mortality] so well, based off of all these features that happen before individuals even get sick, this can really be applied in ways that I think are novel for an algorithm like this,” said Dr. Zachary Strasser, one of the study’s lead researchers, along with Hossein Estiri, an assistant professor of medicine at MGH and Harvard. “We can really think about who needs to get prioritized for limited resources, because these are the people that are probably going to do worse.”
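For readers who want a feel for the mechanics, here is a minimal, hypothetical sketch of how a tree-based classifier can surface the strongest predictors in tabular, EHR-style data. The synthetic patients, the feature names, and the use of scikit-learn’s RandomForestClassifier are assumptions made for illustration; the sketch does not reproduce the authors’ actual method.

```python
# A generic, hypothetical sketch of ranking mortality risk factors from
# tabular, EHR-style features with a tree-based model. The synthetic data,
# feature names, and model choice are illustrative assumptions, not the
# pipeline described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5_000

# Synthetic patients: age in years plus binary history flags.
age = rng.integers(20, 95, size=n)
pneumonia = rng.integers(0, 2, size=n)
diabetes = rng.integers(0, 2, size=n)
X = np.column_stack([age, pneumonia, diabetes])

# Synthetic outcome in which age dominates, mirroring the study's headline finding.
p_death = 0.02 + 0.30 * (age - 20) / 75 + 0.10 * pneumonia + 0.05 * diabetes
y = rng.random(n) < p_death

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Rank features by importance; in this toy setup, age should come out on top.
features = ["age", "history_of_pneumonia", "diabetes"]
for name, score in sorted(zip(features, model.feature_importances_), key=lambda f: -f[1]):
    print(f"{name}: {score:.3f}")
```

In practice, any such ranking depends heavily on how clinical features are encoded and how missing data is handled, which is part of why studies like this one matter.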
FORTUNE ON A.I.
IBM unveils ambitious plan for quantum computing software—By Jeremy Kahn
Who is Amazon’s new CEO, Andy Jassy?—By Jonathan Vanian
TikTok takes on the mess that is misinformation—By Danielle Abril
Nvidia says its $40 billion Arm takeover is ‘proceeding as planned’ despite antitrust regulator pile-on—By David Meyer
Chinese short-video app Kuaishou jumps nearly 200% in its Hong Kong debut—By Naomi Xu Elegant
How mental-health crisis centers have tried to weather the COVID-19 storm—By Jonathan Vanian
BRAIN FOOD
Context matters. To prevent A.I.-powered language systems from spewing offensive words that aren’t appropriate for work, researchers use a list known as LDNOOBW, or the List of Dirty, Naughty, Obscene, and Otherwise Bad Words. In theory, the list acts as a guidepost that keeps A.I. language systems from offending people. But, as Wired reports, A.I. systems that have incorporated the list have produced unintended consequences. In one case, the chat software Rocket.Chat censored “attendees of an event called Queer in AI from using the word queer.”
From the article:
“Words on the list are many times used in very offensive ways but they can also be appropriate depending on context and your identity,” says William Agnew, a machine learning researcher at the University of Washington. He is a cofounder of the community group Queer in AI, whose web pages on encouraging diversity in the field would likely be excluded from Google’s AI primer for using the word sex on pages about improving diversity in the AI workforce. LDNOOBW appears to reflect historical patterns of disapproval of homosexual relationships, Agnew says, with entries including “gay sex” and “homoerotic.”
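To make the “context matters” point concrete, here is a minimal, hypothetical sketch of the kind of context-free denylist filtering described above. The tiny word list and the flag_message function are assumptions for illustration, not Rocket.Chat’s or the LDNOOBW project’s actual code; the point is simply that word-level matching cannot distinguish a slur from a community’s name.

```python
# A minimal, hypothetical sketch of context-free denylist filtering.
# The abbreviated DENYLIST and flag_message are illustrative assumptions,
# not Rocket.Chat's or the LDNOOBW project's actual code.
import re

DENYLIST = {"queer", "sex"}  # abbreviated stand-in for a list like LDNOOBW


def flag_message(text: str) -> bool:
    """Return True if any denylisted word appears, regardless of context."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in DENYLIST for word in words)


# A benign community announcement gets flagged; an unrelated message does not.
print(flag_message("Join the Queer in AI social at the workshop"))    # True (false positive)
print(flag_message("Welcome to the machine learning reading group"))  # False
```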