
Contact tracing apps can help stop coronavirus. But they can hurt privacy.

Governments now face trust issues and trade-offs

- April 28, 2020

Governments around the world are debating a critical next step: how to keep people safe from the coronavirus as economies start to reopen. In the United States and in Europe, officials are examining how to set up new tools for “contact tracing.”

The ability to identify anyone who has come into contact with an infected person, most health experts agree, is an important part of how countries will prevent new infection hot spots from arising. China, Singapore, South Korea and nearly two dozen other countries are now using mobile phone data to trace contacts.

However, any government database of individuals’ movements or encounters would pose major privacy challenges. In response, there has been a proliferation of attempts to design “privacy-preserving” contact tracing apps, including a new initiative from Apple and Google. This has reignited debates over the trade-off between privacy and security.


Here’s how some researchers propose to reduce the trade-off — and the possible limitations they face.

New technologies can help preserve privacy

While East Asian countries have used contact-tracing apps and other digital tools for months, concerns about privacy have loomed large in Western policy discussions. The debate follows previous controversies around the gathering and use of data, such as the NSA surveillance programs revealed by Edward Snowden, or Cambridge Analytica’s unauthorized use of Facebook data.

Against the backdrop of these controversies, computer scientists have quietly made progress in developing “privacy-preserving technologies.” Many hope these techniques can carve out a new path forward: clever ways to avoid privacy abuses while preserving the benefits of information technology. One analogy is the sniffer dog, which lets officials search for bombs while protecting individual privacy far better than police officers rifling through bags.

At our research institute, we use the term “structured transparency” to describe the opportunity to achieve both high levels of privacy and effectiveness through the careful design of information architectures — the social and technical arrangements that determine who can see what, when and how.


A focus on structured transparency is evident in the recent joint proposal by Google and Apple for smartphone-based contact tracing, as well as in many other initiatives. The proposed system is voluntary and largely decentralized.

Here’s how it works: Each participant’s phone would use Bluetooth to detect other nearby phones and keep a personal record of all encounters. Instead of personally identifying its owner, each phone would use a temporary pseudonym. When two phones come within range of each other, they would simply share their current pseudonyms, like people exchanging business cards.

This would allow contact tracing without any centralized surveillance. If a person is diagnosed as having been infected, their phone would upload to a database a list of all the pseudonyms it has recently used. Other users could then find out that they had been in contact with an infected person by comparing the “business cards” listed in the database with the “business cards” stored on their phones. All that anyone with access to the database could see would be an uninformative list of pseudonyms.
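To make this concrete, here is a minimal Python sketch of the decentralized matching idea described above. It illustrates the general approach, not Apple and Google’s actual specification: the Phone class, its method names, and the single shared database are simplifications invented for this example (real systems derive rotating identifiers from cryptographic daily keys and handle many more details).

```python
import secrets


class Phone:
    """Toy model of one participant's device (illustrative only)."""

    def __init__(self):
        self.own_pseudonyms = []        # identifiers this phone has broadcast
        self.heard_pseudonyms = set()   # "business cards" collected from nearby phones

    def rotate_pseudonym(self):
        # A fresh random identifier; real systems rotate these frequently.
        pseudonym = secrets.token_hex(16)
        self.own_pseudonyms.append(pseudonym)
        return pseudonym

    def encounter(self, other: "Phone"):
        # Two phones within Bluetooth range exchange current pseudonyms.
        self.heard_pseudonyms.add(other.own_pseudonyms[-1])
        other.heard_pseudonyms.add(self.own_pseudonyms[-1])

    def report_infection(self, database: set):
        # On diagnosis, publish only this phone's own recent pseudonyms.
        database.update(self.own_pseudonyms)

    def check_exposure(self, database: set) -> bool:
        # Each phone compares the public list against its private record.
        return bool(self.heard_pseudonyms & database)


# Usage: Alice and Bob meet; Alice later tests positive.
database = set()
alice, bob, carol = Phone(), Phone(), Phone()
for phone in (alice, bob, carol):
    phone.rotate_pseudonym()

alice.encounter(bob)
alice.report_infection(database)

print(bob.check_exposure(database))    # True: Bob was near Alice
print(carol.check_exposure(database))  # False: Carol never met Alice
```

The key property lies in check_exposure: the comparison happens on the user’s own phone, so whoever operates the database only ever sees the anonymous pseudonyms.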

Researchers at organizations such as OpenMined have generated lists of other proposals for combating the coronavirus in a privacy-preserving fashion. For instance, “differential privacy” techniques could help authorities monitor public compliance with lockdowns while buttressing the anonymity of individual-level data. This would work by adding random noise to data sets, thereby obscuring individual data points (such as an individual’s level of movement) without changing aggregate statistics (such as the average level of movement in a city) that authorities might reasonably want to know.
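As a rough illustration of this differential-privacy idea, the sketch below adds Laplace noise to each individual’s (hypothetical) daily movement figure. The data set, the privacy budget epsilon, and the sensitivity bound are all invented for the example; the point is only that each individual record becomes unreliable while the city-wide average remains usable.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical data: kilometers traveled per person in one day, for 100,000 people.
true_km = rng.uniform(0, 10, size=100_000)

# Laplace mechanism: noise scale = sensitivity / epsilon.
# Sensitivity is the most one person's value can change the reported number
# (10 km here, since values are bounded); epsilon is the privacy budget.
epsilon = 1.0
sensitivity = 10.0
noisy_km = true_km + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=true_km.size)

# Any single noisy record reveals little about that person...
print(f"person 0: true {true_km[0]:.1f} km, noisy {noisy_km[0]:.1f} km")

# ...but aggregate statistics survive, because the noise averages out.
print(f"true mean:  {true_km.mean():.2f} km")
print(f"noisy mean: {noisy_km.mean():.2f} km")
```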

Trade-offs between privacy and security will remain

These proposals have drawn skepticism as well as praise. Some critics worry that the proposed techniques may be less protective than they initially appear. A common theme in the security literature is the difficulty, and sometimes practical impossibility, of providing privacy guarantees.


Others fear that features intended to prioritize privacy may make monitoring ineffective. Ross Anderson, a leading security researcher, worries that not enough people will opt in to voluntary contact-tracing apps, that these apps will likely throw up too many false positives and that decentralized systems are difficult to update.

And even if tech experts can make contact-tracing apps both privacy-preserving and effective, there will still be privacy-security trade-offs in some aspects of pandemic response. For instance, various East Asian countries have begun to use GPS tracking to monitor individual compliance with home quarantine policies. No amount of technical cleverness is likely to fully resolve the privacy concerns posed by an app that sends police officers to your door.

Countries with greater trust in their government will be better able to manage these remaining privacy-security trade-offs and concerns. Technology can help minimize the trade-offs, but it cannot eliminate them. Clashes over these issues will continue as policymakers work to safely reopen economies.



Toby Shevlane (@TShevlane) is a PhD student at the University of Oxford’s Faculty of Law, and a researcher at the Center for the Governance of AI, Future of Humanity Institute.

Ben Garfinkel is a PhD student in international relations at the University of Oxford and a research fellow at the Center for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI at the University of Oxford and director of the Center for the Governance of AI, Future of Humanity Institute.