
After Roe, fears mount about A.I.’s ability to identify those seeking abortions

June 28, 2022, 4:13 PM UTC
After the U.S. Supreme Court overturned Roe v. Wade, privacy rights advocates warned women that digital apps and possibly A.I. technology could make it much easier for prosecutors in states that are poised to ban abortion to find women who are seeking to end a pregnancy.
Photo by David McNew/Getty Images

After the Supreme Court overturned Roe v. Wade last week, digital privacy advocates advised women to delete popular menstrual cycle tracking apps, turn off location sharing across a host of apps, use encrypted messaging apps, and switch to web browsers that don’t store search histories, such as DuckDuckGo. The fear is that with as many as half of U.S. states poised to criminalize abortion, prosecutors will start turning to such data to bring cases against those who have sought to end a pregnancy. Such data could also be used by private citizens hoping to collect bounties for tipping off the government to violations of anti-abortion laws, or to bring private lawsuits against women seeking abortions and those who have aided them, as laws in Texas and Oklahoma allow. There is also growing concern about the role A.I. could play in this digital dragnet.

Back in May, when the draft Roe decision first leaked, VICE’s tech outlet Motherboard found that a data broker called SafeGraph was selling mobile phone location data on people who visited abortion clinics: where they came from, how long they stayed, and where they traveled afterwards. While no name is directly associated with this location data, it is easy to trace such a phone back to someone’s house and then use other public records to find out who lives there.

This kind of location data is so telling that it wouldn’t require A.I. to figure out that a particular person likely had an abortion. But one could imagine using A.I.-driven data analytics to assemble a “pattern” of subtler evidence, from location traces to social interactions, that might indicate someone had sought an abortion or visited a pharmacy known to dispense abortion pills.

“The potential for this U.S. ruling to pave the way for AI-driven technologies to identify and track people seeking medical care, requires us to focus on building and deploying AI systems that protect the privacy of all individuals, including personal data such as search history and location,” Rebecca Finlay, the CEO of the non-profit advocacy group Partnership on AI, which is funded by several leading technology companies, said in a statement. PAI called on the international A.I. community “to double down on protecting user data privacy and human rights.”

A story in Politico raised the prospect that facial recognition technology, such as the software sold by Clearview AI, could be used to identify people seeking abortions. Such technology is already being used by a number of law enforcement agencies, while private individuals could use an app such as PimEyes, which finds matching photos of anyone on the Internet, to discover the identities of those entering abortion clinics.

Some of the most invasive data is held by apps that help women track their menstrual cycles. Law enforcement could subpoena this data to prove that someone was pregnant and that the pregnancy ended. In some cases, this data may even be available to purchase, National Public Radio reported. For example, The Wall Street Journal reported in 2019 that period-tracking app Flo was sending data to Facebook about when women were menstruating and was alerting the social media company of users who had told Flo that they intended to get pregnant. Those revelations led to a settlement with the U.S. Federal Trade Commission in which Flo agreed to undergo an independent review of its privacy policies and obtain user permissions before sharing personal data. The company told NPR in a statement that it “firmly believes women’s health data should be held with the utmost privacy and care at all times, which is why we do not share health data with any third party.” The company also said an external, independent privacy audit in March had found “no gaps or weaknesses in our privacy practices.”

The Center for Democracy and Technology called on tech companies to “step up and play a crucial role in protecting women’s digital privacy and access to online information.” The Center called on companies that use algorithms to moderate content to “ensure that their content moderation policies, practices, and algorithms do not suppress access to information related to reproductive health.” It also said companies should “carefully scrutinize and seek to limit the scope of surveillance demands issued in prosecutions to enforce anti-abortion laws. They should adopt clear and consistent standards for refusing overbroad requests, commit to giving their users timely notice of requests, and report publicly the numbers of surveillance demands they receive to increase public accountability.”

But so far, few large tech companies have come out and issued such clear policies.

With that, here’s the rest of this week’s news in A.I.  

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

European mayors say they were fooled by a "deepfake" of Kyiv Mayor Klitschko. The mayors of Berlin, Vienna, Budapest, and Madrid all took video conference calls with someone they believed to be Vitali Klitschko, the former professional heavyweight boxer who is currently mayor of Kyiv. But the calls turned out to be fake, with some suggesting that A.I.-powered deepfake technology had been used to dupe the mayors. Many other experts, though, said they thought the calls were not true deepfakes but had used cruder video-mashup methods. It was also unclear whether the voice was generated by A.I. voice-cloning technology or supplied by a human impersonator. The mayor of Vienna was wholly convinced by the fake, while the mayors of Berlin and Budapest eventually became suspicious and ended the calls. The mayor of Madrid realized it was a setup and raised the alarm straight away. You can read more from my Fortune colleague David Meyer here.

Instagram will begin using A.I. to verify users' ages. The Meta-owned social media app said it would begin using a service from a company called Yoti, which uses A.I. to predict someone's age from their photograph, to help ensure that those using its service are over 18 years old. Yoti says it uses privacy-preserving methods to train its A.I. systems. Meta also said in a separate blog post that it was using new A.I. methods developed in-house to make sure teens were not accessing age-restricted content across its platforms. That A.I. takes in data such as when someone set up an account, what kind of content they view, and how they interact with that content.
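To give a concrete sense of how such signals might be combined, here is a minimal, hypothetical sketch in Python. The field names, thresholds, and behavioral signals are assumptions made purely for illustration; they are not Yoti's or Meta's actual API or decision logic.

from dataclasses import dataclass

@dataclass
class AgeSignals:
    estimated_age_from_photo: float  # e.g., the output of a face-based age-estimation model (assumed)
    account_age_days: int            # behavioral signal: how long the account has existed (assumed)
    follows_school_accounts: bool    # behavioral signal: hypothetical proxy for the user being a teen

def allow_age_restricted_content(signals: AgeSignals, photo_threshold: float = 21.0) -> bool:
    # Require a comfortable margin above 18 from the photo-based estimate, since
    # such models can be off by several years, then sanity-check behavioral signals.
    if signals.estimated_age_from_photo < photo_threshold:
        return False
    if signals.follows_school_accounts and signals.account_age_days < 365:
        return False
    return True

print(allow_age_restricted_content(AgeSignals(24.0, 800, False)))  # True
print(allow_age_restricted_content(AgeSignals(19.5, 30, True)))    # False

The point of the sketch is only that a single photo-based estimate is unlikely to be used in isolation; behavioral context can tighten or loosen the decision.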

Amazon Alexa feature that can synthesize someone's voice raises ethical concerns. Amazon revealed an upcoming feature for its Alexa smart assistant that will allow the system to create a synthetic voice based on a few short recordings of someone's voice. Amazon’s senior vice president and head scientist for Alexa, Rohit Prasad, unveiled the feature at the company's re:Mars conference in Las Vegas last week and showcased it with a video in which a child asks for a story to be read in the voice of his dead grandmother. That caused some to call the feature "creepy," at best, while others worried that the capability would present a huge security and identity-theft risk, handing a powerful tool to criminals hoping to use fake voices in all manner of scams. You can read coverage from The Guardian here.

Cerebras sets record for largest A.I. model trained on a single chip. Cerebras Systems, a Sunnyvale, California, company that makes a giant, dinner-plate-sized computer chip for A.I. applications, said last week that it had set the record for the largest A.I. model ever trained on a single device. The company said it trained a 20-billion-parameter language model on its single large chip, which contains some 850,000 computing cores. Normally, such a large workload would have to be spread across multiple graphics processing units in a datacenter, which is both expensive and requires considerable engineering expertise to distribute the training workload across so many chips simultaneously. Cerebras says that by letting users train huge models on a single chip, it is democratizing the use of such large models, so that even those without the engineering skills or money to train a massive model in a cloud-based datacenter can still do so. Tech publication VentureBeat has the story here.
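For a flavor of the engineering overhead that single-device training avoids, here is a toy PyTorch sketch of manual model parallelism across two GPUs. It is purely illustrative and assumes a machine with two CUDA devices; it is not Cerebras's software stack, and real 20-billion-parameter training involves far more machinery (sharded optimizers, pipeline schedules, interconnect tuning).

import torch
import torch.nn as nn

class TwoDeviceMLP(nn.Module):
    # Each stage lives on a different GPU; activations must be copied between them.
    def __init__(self, d=1024):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(d, d), nn.ReLU()).to("cuda:0")
        self.stage2 = nn.Linear(d, d).to("cuda:1")

    def forward(self, x):
        x = self.stage1(x.to("cuda:0"))
        return self.stage2(x.to("cuda:1"))  # explicit cross-device transfer

if torch.cuda.device_count() >= 2:
    model = TwoDeviceMLP()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(8, 1024)
    loss = model(x).pow(2).mean()
    loss.backward()  # autograd routes gradients back across both devices
    opt.step()
    # On a single device with enough memory for the whole model, all of this
    # placement and data-movement bookkeeping simply disappears.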

EYE ON A.I. TALENT

Oren Etzioni, the long-time CEO of the Allen Institute for Artificial Intelligence in Seattle, is stepping down after nine years in the post, effective September 30, GeekWire reported. Etzioni will continue to serve as a board member of the organization and will also take on a new position as technical director of the AI2 Incubator.

Dataiku, an A.I. software company in New York City, has hired Bridget Shea as its new chief customer officer, trade publication Enterprise Talk reported. Shea had previously been an advisor to the company and had been chief customer officer at the software company Mural.

EYE ON A.I. RESEARCH

Learning to learn, without externally provided goals. A powerful way to train A.I. systems is reinforcement learning, in which the software learns by trial and error, usually in a game or simulation, how to maximize some reward. But humans still usually have to specify the reward. Sometimes, when an A.I. system needs to learn a diverse range of skills in a complex, open environment, it may not be possible for humans to do this effectively. That's why researchers from Meta's A.I. research group have been looking at ways to get an A.I. to learn without an externally specified goal. They did so by creating three separate networks: one, called a reachability network, explores an environment randomly and then learns the difference between any two possible sequences of actions. Another, called goal memory, stores goals that the software has encountered during its random exploration, but only if they are sufficiently different from one another. The researchers say that by using this method they trained software to explore a maze and found that it could discover increasingly difficult goals. They also showed that a simulated robot arm could learn to push an object using this method, with no external supervision. You can read their research paper on the non-peer-reviewed research repository arxiv.org.
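Here is a minimal sketch of the goal-memory idea in plain Python, under strong simplifying assumptions: the "reachability" score is stood in for by Euclidean distance between state embeddings, and exploration is just random noise. It is not the Meta team's implementation, only an illustration of keeping goals that are sufficiently different from those already stored.

import numpy as np

class GoalMemory:
    # Stores a goal only if it is sufficiently "far" from every goal already kept.
    def __init__(self, novelty_threshold=1.0):
        self.goals = []
        self.threshold = novelty_threshold

    def maybe_add(self, embedding, reachability_fn):
        # reachability_fn(a, b) should score how hard it is to get from a to b;
        # a trained reachability network would play this role in the real method.
        if all(reachability_fn(g, embedding) > self.threshold for g in self.goals):
            self.goals.append(embedding)
            return True
        return False

def toy_reachability(a, b):
    # Stand-in for a learned reachability network: plain Euclidean distance.
    return float(np.linalg.norm(a - b))

memory = GoalMemory(novelty_threshold=1.0)
rng = np.random.default_rng(0)
for _ in range(200):
    state = rng.normal(size=4)  # pretend this is a state embedding gathered during random exploration
    memory.maybe_add(state, toy_reachability)
print(f"{len(memory.goals)} sufficiently distinct goals retained")

In the paper, the learned reachability estimate replaces the toy distance here, which is what lets the system surface goals of increasing difficulty rather than just spatially scattered ones.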

FORTUNE ON A.I.

Chip makers are refusing to build new semiconductor plants in the U.S. unless Congress unlocks $52 billion in funding—by Eamon Barrett

Google’s suspended AI engineer corrects the record: He didn’t hire an attorney for the ‘sentient’ chatbot, he just made introductions — the bot hired the lawyer—by Colin Lodewick

Amazon’s plan for Alexa to mimic anyone’s voice raises fears it will be used for deepfakes and scams—by Sophie Mellor

The Amazon robots are here, and they don’t even have to be kept in cages anymore—by Christine Mui

BRAIN FOOD

China is increasingly using A.I. to monitor and police its population, often based on algorithms that are said to "predict" crime, protests, or other activity the authorities want to prevent. Is this vision of A.I. also a prediction of what the rest of us will increasingly face? The New York Times documents China's increasingly sophisticated surveillance systems and how police compete to develop algorithms (often just simple, rule-based ones) to predict crime or other undesirable behavior, with a police response triggered whenever the algorithm forecasts trouble. As The Times journalists write: "While largely unproven, the new Chinese technologies, detailed in procurement and other documents reviewed by The New York Times, further extend the boundaries of social and political controls and integrate them ever deeper into people’s lives. At their most basic, they justify suffocating surveillance and violate privacy, while in the extreme they risk automating systemic discrimination and political repression." The story is an eye-opening and frightening read that calls to mind Minority Report. Let's hope other countries don't begin to copy this idea of policing based on prediction rather than actual occurrences.
