Why you can trust the new coronavirus contact-tracing apps to safeguard your privacy

A major tool in controlling any pandemic is what’s known as contact tracing. Broadly, contact tracing means letting everyone who has been near an infected patient know that they themselves should get tested and self-isolate. Normally it’s a labor-intensive task, but a coming wave of contact-tracing apps for mobile devices could make the process faster, more efficient, and more widespread.

There’s just one problem. The public has learned many times over the past decade not to trust digital tools with its information, especially sensitive information such as health data. That might make people hesitant to use the new contact-tracing apps, but to be most effective, the apps have to be used by a critical mass of the population in an area. Researchers at Oxford University estimate that adoption by 60% of a population would have a major impact on slowing the spread of the virus.

Luckily, the new contact-tracing apps are designed to protect user information in ways companies like Facebook and Equifax have failed to do. The apps are based on a new anonymous, largely decentralized data protocol created by a historic collaboration between Apple and Google. Among other features, the system allows iOS and Android devices to interact in ways they usually can’t.

The first of the new apps using the system arrived in the U.S. on Wednesday, when Virginia released its COVIDWISE contact-tracing app. Similar apps are expected to follow from other state health authorities, and all of them will be able to share data using the Apple-Google system.

But why should anyone trust them?

The short answer: Apps like COVIDWISE simply don’t collect any data about you. They don’t track or record location data from phones or personally identifying data about users. Instead, the apps detect contact with an infected person through a neat trick combining short-range Bluetooth signals and an anonymous data protocol that stores almost everything on users’ devices.

When two devices with a contact-tracing app are near each other, they’ll automatically exchange digital “keys,” random and unique strings of numbers, over Bluetooth. The digital keys for a device are changed randomly every 10 to 20 minutes, and each device keeps a two-week record of every key it has encountered.
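The key rotation and two-week on-device log described above can be sketched in a few lines of Python. This is a simplified illustration, not the actual Apple-Google protocol: the real system derives its rotating identifiers cryptographically from a daily key, while the sketch below just uses fresh random bytes, and the class and function names are invented for this example.

```python
import secrets
from collections import deque
from datetime import datetime, timedelta

KEY_BYTES = 16                  # size of each rolling identifier (illustrative)
RETENTION = timedelta(days=14)  # two-week on-device record

def new_rolling_key() -> bytes:
    """A fresh random identifier, regenerated every 10 to 20 minutes."""
    return secrets.token_bytes(KEY_BYTES)

class ContactLog:
    """On-device record of every key heard over Bluetooth."""

    def __init__(self):
        self.entries = deque()  # (timestamp, key) pairs, oldest first

    def record(self, key: bytes, now: datetime):
        self.entries.append((now, key))
        # Drop anything older than the two-week retention window.
        while self.entries and now - self.entries[0][0] > RETENTION:
            self.entries.popleft()

log = ContactLog()
log.record(new_rolling_key(), datetime.now())
```

Note that nothing in the log identifies a person or a place: each entry is only a timestamped random value, which is what makes the stored data so uninteresting to a thief.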

Storing these contact records on individual devices, instead of a centralized server, makes the data much harder to steal. And because the key codes don’t include any identifying information, even a hacker who somehow stole the data directly from a phone wouldn’t come away with anything particularly useful.

“One analogy that’s good is, if you lose a piece of paper with your bank account on it, it’s not that bad,” says Tina White, executive director of Stanford’s COVID Watch program, which developed some of the ideas used by the system. “That’s how you should feel about this—it’s just random numbers.”

But if someone tests positive for COVID, those anonymous random numbers become crucial to warning others of possible exposure.

A COVIDWISE user who tests positive for the coronavirus can report it in the app. To prevent false reports, this can only be done using a code provided by the Virginia Department of Health. Like every element of the system, the self-reporting is entirely voluntary and doesn’t send any personal information to COVIDWISE.

Instead, after a report is made, the app uploads the infected user’s anonymous device codes over the prior 14 days to a health agency server—the only centralized element of the system. Every COVIDWISE app checks these coded results periodically, and if there are any matches with the anonymous contact records on the device, it notifies the user that they may have been exposed. No information is shared about the location or time of the exposure, or about the individual who tested positive—just that an exposure may have occurred.
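The matching step described above amounts to comparing two lists of anonymous values. As a hedged sketch, assuming the downloaded reported keys and the local contact log are both just sets of byte strings (the real system reconstructs rolling identifiers from uploaded daily keys before comparing), the check each app runs periodically looks roughly like this:

```python
def check_exposure(local_keys: set, reported_keys: set) -> bool:
    """True if any key this device heard matches a key reported as positive.

    The comparison happens entirely on the user's phone; only the
    anonymous reported keys ever leave the health agency's server.
    """
    return bool(local_keys & reported_keys)

heard = {b"aa11", b"bb22", b"cc33"}   # keys this phone logged over Bluetooth
reported = {b"bb22", b"zz99"}         # keys published by the health server
check_exposure(heard, reported)       # a match: notify the user of possible exposure
```

Because the match is computed locally, a positive result tells the phone's owner that an exposure may have occurred, while the server never learns who matched.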

Further, no information about who has been exposed, or even where exposures happened, is transmitted to the health agencies that oversee the apps. That means it’s up to individual users to respond to their exposure, such as by getting tested themselves.

It’s important to note that the Apple-Google data system can be used only by apps developed or approved by state health authorities. There are competing systems that may work differently and could have fewer privacy protections.

It’s also too soon to conclusively say apps using the Apple-Google system can’t be somehow exploited—hackers are endlessly inventive. But the unusual structure of the new apps seems to at least limit those risks, while helping control a disease whose costs are already too high.
