
Not all A.I. surveillance is a bad thing

March 16, 2021, 5:34 PM UTC
People clean tar from the sand of an Israeli beach after an offshore oil spill caused the country's worst environmental disaster in years. Israeli company Windward used its A.I. software to help identify the ship that was likely responsible for the spill.
Amir Levy—Getty Images

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

People are rightfully concerned about A.I.’s role in surveillance and the impact the technology may have on fundamental personal liberties. But there are some places that could use a bit more surveillance. The world’s oceans, for instance.

Even today, once a ship sails beyond the horizon, it enters a world of “radical freedom,” as the author William Langewiesche termed it in his 2005 book The Outlaw Sea. But the freedom that comes with being out of sight is easily abused: for smuggling, human trafficking, overfishing, piracy, and the dumping of garbage and toxic waste.

An example of that lack of accountability washed up on the beaches of Tel Aviv, Israel, last month: globs of tar, the remnants of a large crude oil spill from a tanker. It was the worst environmental disaster to hit the country in years. But where had the oil come from? Which of the dozens of ships cruising the Eastern Mediterranean was responsible?

The Israeli government soon named a culprit: a Panamanian-flagged ship called The Emerald that was taking oil from Iran to Syria, in violation of international sanctions. How did the Israeli government single out The Emerald? In part, thanks to its use of an A.I.-enabled software platform created by an Israeli startup called Windward.

Windward’s software tracks every large vessel at sea across the planet and pairs that information with databases on ship ownership, vessel registrations and past journeys, all run through machine-learning models. The software can suss out suspicious behavior as well as identify ships with poor safety and maintenance records.

That has made Windward’s software increasingly popular not only with government agencies but with insurance companies and banks too. Its business received a boost in May last year when the U.S. government advised all financial institutions to tighten their due diligence of shipping for possible sanctions violations, an advisory followed two months later by a similar directive from the U.K. government. Earlier in the year, the United Nations had also advised financial firms to do more to crack down on ships North Korea was using to violate international sanctions. “We built this tool for governments, but now everyone needs to know this,” Ami Daniel, Windward’s CEO, says of his company’s software. He says demand has soared more than fourfold in the past year.

Daniel is an Israeli Navy veteran (he was an officer aboard INS Hanit, an Israeli frigate damaged by an anti-ship missile fired by Hezbollah during the 2006 Lebanon War) who co-founded Windward in 2010. He tells me that several technological developments made Windward's software possible. The first is the advent of the Automatic Identification System (AIS), a radio and satellite transceiver network through which all large vessels are required to continuously broadcast a unique identification number as well as their position, heading, speed, draft, destination port, and a variety of other data. Ship captains can turn AIS off, making it much harder to find and track their vessels, but doing so is against maritime regulations and is often, on its own, an indication of illicit activity.
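AIS itself is just a stream of structured position reports, so the kind of "going dark" Daniel describes can be spotted by looking for long silences in a vessel's broadcast history. The sketch below is a minimal illustration of that idea; the field names and the six-hour threshold are assumptions made for the example, not Windward's data model or code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Tuple

@dataclass
class AISReport:
    """A simplified AIS-style position report (illustrative fields only)."""
    mmsi: str            # the vessel's broadcast identification number
    timestamp: datetime  # when the report was received
    lat: float
    lon: float
    speed_knots: float
    heading: float

def find_dark_periods(reports: List[AISReport],
                      max_gap: timedelta = timedelta(hours=6)) -> List[Tuple[datetime, datetime]]:
    """Return (start, end) pairs where a vessel went silent for longer than max_gap.

    A long silence is not proof of wrongdoing, but it is the sort of
    behavioral signal a risk model might weigh alongside many others.
    """
    ordered = sorted(reports, key=lambda r: r.timestamp)
    gaps = []
    for previous, current in zip(ordered, ordered[1:]):
        if current.timestamp - previous.timestamp > max_gap:
            gaps.append((previous.timestamp, current.timestamp))
    return gaps
```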

Another technology helps Windward continue to monitor ships even without AIS: increasingly ubiquitous and commercially available satellite data. For instance, Windward has a partnership with another A.I.-enabled company, HawkEye 360, that has its own cluster of small satellites that pick up radio frequency information. Through this data, HawkEye 360 can identify and track a ship that has disabled its AIS. Finally, there’s the advent of machine learning techniques that enable Windward to find patterns in all this data.

Daniel says that because Windward's data is used by banks and insurance companies to make decisions about loans and underwriting, the machine learning methods it uses have to be transparent. If Windward’s software identifies a ship as high risk, the person using the software needs to know why. To provide this transparency, Windward creates a risk score based on the results of different machine learning models that have learned to find behaviors known to be associated with illicit activity: loitering in a certain area, meeting other ships at sea, changing the ship’s name or registration, changes in ownership, sailing routes the ship has never sailed before, and many more. It also looks for patterns in the ownership structure of the vessels: does the vessel have sister ships that have been involved in likely sanctions-busting or smuggling? Is the director of the company that owns the ship also a director at a company that has been caught violating sanctions? A user can click on a vessel and know immediately which of these behaviors has led Windward to brand it as a high- or medium-risk ship.
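Windward hasn't published its models, but the explainable scoring Daniel describes can be illustrated with a toy example: each behavioral flag contributes a weighted amount to an overall score, and the per-flag contributions are retained so a user can see exactly why a vessel was rated risky. The flags and weights below are invented for illustration and are not Windward's.

```python
# Illustrative only: invented behavioral flags and weights, not Windward's model.
RISK_WEIGHTS = {
    "ais_gap": 0.30,                  # went dark for an extended period
    "ship_to_ship_meeting": 0.25,     # met another vessel at sea
    "recent_ownership_change": 0.15,
    "name_or_flag_change": 0.10,
    "unusual_route": 0.10,            # sailing a route it has never sailed before
    "linked_to_sanctioned_owner": 0.10,
}

def score_vessel(flags):
    """Combine per-behavior flags into a score in [0, 1] plus an explanation.

    In a real system each flag would itself come from a trained model;
    here they are simple booleans.
    """
    contributions = [(name, weight) for name, weight in RISK_WEIGHTS.items()
                     if flags.get(name, False)]
    score = sum(weight for _, weight in contributions)
    return score, contributions

score, reasons = score_vessel({
    "ais_gap": True,
    "ship_to_ship_meeting": True,
    "recent_ownership_change": True,
})
print(f"risk score: {score:.2f}")    # 0.70
for name, weight in reasons:         # the "why" a user would see
    print(f"  {name}: +{weight:.2f}")
```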

In the case of The Emerald, there were a lot of red flags: Windward’s software pinpointed that the vessel was a 19-year-old oil tanker that had been purchased in December by a company called Emerald Marine Ltd., registered in the Marshall Islands, that owned only that single ship. “The vessel was acquired by Emerald Marine just before this journey, maybe specifically for this purpose,” Daniel says. It then traveled to the northern Persian Gulf, by which point it had switched off its AIS. Daniel says it is likely The Emerald picked up Iranian crude oil there, either at Iran’s major oil terminal on Kharg Island or in a ship-to-ship transfer. It then reappeared in AIS records, traveling out of the Persian Gulf, up the Red Sea, and into the Eastern Mediterranean for the first time in eight years, another red flag. Off the coast of Israel, it again switched off its AIS, perhaps to elude possible interception by the Israeli Navy. It then journeyed to a spot off northeastern Cyprus where it met up with another oil tanker, most likely to transfer its cargo, Daniel says. That second tanker then proceeded to Baniyas, a port in Syria.

It was during the period the ship was off Israel’s coast that it likely discharged the oil that tarred the country’s beaches. Although Israel’s Environmental Protection Minister Gila Gamliel has accused Iran of deliberately spilling the oil in an act of eco-terrorism, Israel’s intelligence and defense agencies have distanced themselves from that claim. Daniel won’t comment on this other than to say Windward's data shows The Emerald still seemed to have the vast majority of its cargo onboard when it rendezvoused with the other vessel off Cyprus.

Windward is hardly alone: there are plenty of other examples of companies and people using A.I. to monitor illegal activity in remote corners of the globe, from tracking poachers preying on endangered species in national parks to monitoring illegal logging of rainforests to discovering unreported chemical spills and gas flaring.

With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

A story in last week’s newsletter incorrectly stated that a survey from KPMG had found retail executives were most likely to agree that A.I. was moving too fast to stay on top of and were also most likely to call for government regulation of the technology. While retail executives did rank highly on both counts, they were only more likely than the average respondent to worry about the pace of A.I. development, and government regulation was favored most strongly by executives in manufacturing, with retail a close second.

A.I. IN THE NEWS

Hackers breach live feeds of 150,000 surveillance cameras, including those at a Tesla warehouse, schools, hospitals, and jails. Bloomberg News reported that the hackers gained access to the systems of Verkada, a San Mateo, Calif., company whose software allows users to access security cameras remotely via the Internet and which is also developing automated, machine learning-based analytics for those videos. Tillie Kottmann, a member of the hacking collective that carried out the breach, told Bloomberg the group wanted to show people how pervasive video surveillance is and the ease with which such systems can be broken into. Kottmann's apartment in Lucerne, Switzerland, was later raided by Swiss police investigating the incident. Verkada said that it has “disabled all internal administrator accounts to prevent any unauthorized access,” and that its “internal security team and external security firm are investigating the scale and scope of this issue, and we have notified law enforcement.” Tesla said that, “based on our current understanding, the cameras being hacked are only installed in one of our suppliers,” and that the breach did not affect any of its factories, stores or service centers. The incident has some scary implications for privacy and surveillance.

Group tells machine learning and A.I. grads not to accept Google jobs in continued fallout from the company's dismissal of prominent A.I. ethics researchers. A group of Google employees who have been advocating and organizing for change at the technology giant under the banner Google Walkout for Real Change wrote a blog post calling on other A.I. experts, "especially those who make their careers researching the social and ethical consequences of tech," to turn down job offers from the company, refuse to cooperate with its recruitment teams, bar the company from sponsoring academic machine learning conferences, and refuse to accept Google funding for their academic research departments or projects. The group said its call was in response to "abruptly firing, publicly disparaging, and gaslighting the co-lead of Google’s Ethical AI team, Dr. Timnit Gebru" and the firing of the team's other co-lead, Margaret Mitchell. "This pattern makes clear that Google is working intentionally to punish, silence, and dismantle the entire Ethical AI team," the group wrote. "This is a watershed moment for the tech industry, with implications that reach far beyond it."

A Pennsylvania woman allegedly used deepfakes to try to force her daughter's rivals off a high school cheerleading squad. A woman from Bucks County, Pennsylvania, was arrested and charged with cyber harassment of a child and related offenses after allegedly sending cheerleading coaches fake photographs and videos that depicted some members of the squad naked, smoking, and drinking, in an effort to get them kicked off the team and advance her own daughter's position, according to a story in The Philadelphia Inquirer. She had earlier been accused of sending some of the same girls harassing text messages, including doctored photos, along with messages urging the girls to kill themselves. Police said they determined the woman had used social media images of the girls and deepfake software to create the fraudulent photos.

Facebook launches a new project to get A.I. to understand video and says it has already been used to improve video recommendations on Instagram. The social media giant has begun an ambitious project to teach an A.I. system to "understand" (it might be more technically accurate to say "better categorize") videos. The software must teach itself to cluster videos into groups based on both their visual and audio content. A system taught in this way has already been used to improve which "Reels" videos Instagram recommends users watch. It has also led to a big improvement in speech recognition that could enable better automatic video captioning. In the future, Facebook thinks such systems will underpin a more intuitive way to search for digital "memories" (photos and videos of past events) by asking questions such as "show me every time we sang to Grandma." You can read my coverage of the development here.
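Facebook hasn't released the system, but the basic idea of grouping videos by their combined audio and visual content can be sketched: embed each video's frames and audio, concatenate the two embeddings, and cluster the results. Everything below (the mean-pooled stand-in embeddings, the dimensions, the use of k-means) is an assumption made for illustration, not Facebook's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed_video(frames: np.ndarray, audio: np.ndarray) -> np.ndarray:
    """Stand-in for learned encoders: mean-pooled features from each modality.

    A real system would use trained visual and audio networks here.
    """
    visual_vec = frames.mean(axis=0)   # pool per-frame features into one vector
    audio_vec = audio.mean(axis=0)     # pool per-clip audio features
    return np.concatenate([visual_vec, audio_vec])

# Toy corpus: 20 "videos", each with 8 frames of 64-dim features
# and 16 windows of 32-dim audio features.
rng = np.random.default_rng(0)
videos = [(rng.normal(size=(8, 64)), rng.normal(size=(16, 32))) for _ in range(20)]
embeddings = np.stack([embed_video(frames, audio) for frames, audio in videos])

# Group videos whose audio-visual content looks similar; clusters like these
# could then feed a recommendation system ("show more like this").
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)
```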

The British military plans to ramp up its use of A.I. The U.K. armed forces will use A.I. for better intelligence, to predict the actions of adversaries on the battlefield, to perform reconnaissance, and to relay information from the battlefield in real-time to commanders, according to a story in The Financial Times.

The news section of last week's Eye on A.I. included an item about a story in tech publication The Verge that alleged Pedro Domingos, an emeritus computer science professor, and Michael Lissack, a financial whistleblower, had engaged in a campaign of online harassment against A.I. ethics researcher Timnit Gebru following her ouster from Google. Domingos and Lissack both deny these allegations. "My entire interaction with Gebru consisted of a modest number of tweets, all of which are public, and none of which can be construed as harassment on my part by any reasonable person," Domingos told Fortune. Lissack told us, "On Twitter, somehow expressing disagreement was perceived by Gebru as 'harassment.' It should be viewed as an opportunity for learning and dialog, an opportunity she rejected."

EYE ON A.I. TALENT

Software giant Microsoft has named Robin Sutara as chief data officer for Microsoft UK, according to a story in Information Age. Sutara was most recently chief of staff for Azure Data Engineering at the company.

AnyVision, a New York-based company that provides computer vision solutions, has hired Gilad Brand to be its chief product officer, the company said in a statement. He had previously been senior director of product management at Salesforce in Israel, according to his LinkedIn profile.

Cloud data management company Rubrik, which is based in Palo Alto, California, has appointed Ajay Sabhlok as its chief information officer and chief data officer, the company said in a release. He was previously vice president and head of IT at the company.

EYE ON A.I. RESEARCH

A new machine learning system is figuring out which DNA sequences are active within cells. While human cells contain an entire human genome's worth of DNA, cells only ever use a small portion of that instruction set, depending on their function. Knowing which DNA segments are active in a cell type is potentially critical for discovering new therapies for a range of diseases. But the most commonly used method for figuring this out, called ATAC-seq, requires tens of thousands of cells in order to discern a clear signal about which DNA segments are active. That means the method can be difficult to use, especially for harder-to-obtain cell types such as some stem cells.

But now scientists at Harvard have teamed up with A.I. researchers from computer chip maker Nvidia to produce a machine learning system called AtacWorks that, according to an Nvidia blog post, can take ATAC-seq data from just tens of cells and achieve results that would typically require tens of thousands of cells. This enables "scientists to learn more about the sequences active in rare cell types, and to identify mutations that make people more vulnerable to diseases," the blog post said. The research was published in Nature Communications. In the research paper, the scientists wrote, "With a sample set of just 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells."

"Looking at accessible regions of DNA could help medical researchers identify specific mutations or biomarkers that make people more vulnerable to conditions including Alzheimer’s, heart disease or cancers. This knowledge could also inform drug discovery by giving researchers a better understanding of the mechanisms of disease," Nvidia said.

FORTUNE ON A.I.

Israeli startup raises $18.5 million to train A.I. with fake data—by Jeremy Kahn

Commentary: Why Merrick Garland needs to rethink the Google antitrust case—by Christopher Koopman and Caden Rosenbaum

Meet the computer that must survive ‘the shake, rattle, and roll’ of a space launch—by Jackie Snow

Facebook reveals A.I. that is already improving Instagram video recommendations—by Jeremy Kahn

BRAIN FOOD

What's the point of a company's Responsible A.I. team? That's a key question raised by a magazine story by M.I.T. Technology Review A.I. reporter Karen Hao that was published last week.

The story profiles Joaquin Quiñonero Candela, a veteran A.I. researcher at Facebook, and details the efforts of the Responsible A.I. team he currently leads at the company. It explores how Candela, who helped build Facebook's news feed algorithms earlier in his career, was tasked with researching "the societal impacts" of A.I., including the company's own algorithms. In the end, the team never managed to address what many outside Facebook regard as the biggest societal problems with Facebook's A.I. algorithms: the way in which Facebook's newsfeed, optimized for user engagement, can accelerate political polarization, lead people down pathways towards extremism, and spread misinformation and hate speech.

Instead of working on these topics, as Hao tells it, Candela's team got pigeonholed into working mostly on ways to combat bias—and in particular, they became involved in the company's efforts to fend off claims that its algorithms were biased against conservative political content. That was a threat with potentially existential ramifications for the company in 2018, when the initiative started, as it faced a hostile President Trump and Republican-dominated Congress. And A.I. bias is not unimportant: in the U.S., Facebook has a legal obligation not to discriminate against people on the basis of protected attributes, such as race or religion. (In fact, as Hao notes, the company has gotten in trouble for actions such as not showing certain real estate ads to Black people.) But it's not the issue that animates most of those alarmed by Facebook's power and impact on society. 

Facebook has challenged Hao's piece, with a number of Facebook executives tweeting that they feel the story misrepresents the company's efforts to tackle harmful content such as misinformation and hate speech. These, the executives have said, are "integrity" issues that are handled by a number of different teams at Facebook. Many of these teams are using cutting-edge A.I. to try to screen out content that violates Facebook's terms of service, or at least flag it to human moderators who can review it.

But as Hao's piece makes clear, the algorithms powering Facebook's newsfeed sit at the core of many of these problems: if a piece of misinformation or hate speech gets through the A.I. dragnet and is missed by the human moderators, there's a good chance Facebook's algorithms will help it go viral, amplifying its impact. Hao details several instances when engineers and researchers at Facebook proposed changes that would have altered this equation, but these proposals failed to gain traction within the company because they might have dented the company's growth. And she presses Candela on whether his team should have investigated the way Facebook's algorithms accelerated polarization or fed extremism, rather than focused on bias.

The story is well worth a read, even if you ultimately disagree with its conclusions. While some of these issues are unique to Facebook, the question of what it means for a corporation to have a "responsible A.I." team or an A.I. ethics officer or A.I. ethics team is not. Are these teams primarily to serve as public relations tools, part of a company's corporate social responsibility efforts? Or are they meant to ask questions that might cut to the core of a company's strategy and business model? Should they be seen as a kind of inspector general or auditing department for a company's own A.I. algorithms? What power should they be given to challenge and change the status quo?