Tuesday, 5 January 2010

Privacy enhancing technologies (PETs), privacy laws - and compliance & enforcement

The interesting and lively recent talk by Bruce Schneier and discussion on the future of privacy triggered some thoughts on a number of the issues raised, notably suggestions that privacy enhancing technologies should be mandated (e.g. someone made the point that we need a legal substrate that will encourage the use of PETs).

Personally, I believe that the principles set out in the EU Data Protection Directive by and large already cover most of what is needed to protect privacy, at least in the EU.

To me, the biggest issue is that compliance needs to be adequately monitored and policed, and privacy breaches detected and punished - but at the moment, that's simply not the case.

For that to be possible, national privacy commissioners and data protection authorities must be given sufficient power and resources (in terms of technical expertise as well as funding and people). Which requires political commitment and funding - and therein lies the rub, at least with the current UK government.

As Bruce Schneier and many others have pointed out, our personal data has value - to governments, to businesses, for different reasons.

Moving to privacy enhancing technologies (PETs) and privacy preserving practices & processes would certainly help organisations to comply with the laws we already have - not exceed them, note, just comply with them properly - but doing that costs money, and deprives them of data which they think (probably rightly) would give them an edge, whether in crime policing or commercial competitiveness terms. (A key policy question of course is, are the benefits of a surveillance society and the like worth the costs to privacy?)

The 2007 study "The Effect of Online Privacy Information on Purchasing Behavior: An Experimental Study" by Janice Tsai, Serge Egelman, Lorrie Cranor and Alessandro Acquisti of Carnegie Mellon University seemed to suggest that consumers might be willing to pay more per item to protect their private personal data. It was based on people who were given money and asked to buy certain items online using the P3P search engine PrivacyFinder.org (which ranks search results based on sites' privacy policies).

However, I suspect that in practice people generally aren't willing to pay more for better privacy - based, very scientifically, on my gut feeling that people spending money out of their own pockets might well behave differently from those in the study, and also on the lack of success that privacy-friendly search engines have had in displacing the likes of Google - and that includes search engines like Ixquick which don't record users' data.

So, unless and until they absolutely have to, rationally speaking why would businesses (especially in a recession) fork out hard cash to restrict their collection, storage or use of data, when the relevant authorities mostly haven't the money or technical means to monitor or audit their compliance to catch breaches, let alone punish them appropriately?

A "You now have a notice issued against you, take that!" slap on the wrist, or a relatively small fine, is probably well worth the risk for most organisations if you look at it in cold financial cost/benefit terms, given how rarely breaches are actually detected by the authorities. In addition, deterrents are inconsistent across the EU - exactly the same data protection breach that could garner jail time in one member state gets only a wag of the finger in another.
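That cold cost/benefit calculation can be sketched with a few lines of arithmetic. All the figures below are invented purely for illustration (they are not real estimates of detection rates, fines or compliance costs), but they show why a low detection probability defangs even a headline-sized fine:

```python
# Hypothetical illustration of an organisation's compliance cost/benefit sum.
# Every number here is an assumption made up for the example, not a real estimate.

detection_probability = 0.02   # assumed chance a given breach is ever detected
fine_if_caught = 500_000       # ceiling of the UK monetary penalty consulted on (GBP)
compliance_cost = 250_000      # assumed annual cost of PETs and compliance work (GBP)
data_value = 400_000           # assumed commercial value of the extra data retained (GBP)

# The expected penalty is the fine discounted by how unlikely detection is.
expected_penalty = detection_probability * fine_if_caught

# Ignoring the rules saves the compliance cost and keeps the data's value,
# at the price of the (small) expected penalty.
net_gain_from_non_compliance = data_value + compliance_cost - expected_penalty

print(f"Expected penalty: £{expected_penalty:,.0f}")
print(f"Net gain from non-compliance: £{net_gain_from_non_compliance:,.0f}")
```

On these made-up numbers the expected penalty is a mere £10,000 against a £640,000 gain - which is the point: unless detection rates or fines rise substantially, the rational commercial choice is non-compliance.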

In a Guardian article, UK Information Commissioner Christopher Graham was quoted as saying, in relation to journalists who used private investigators to illegally obtain personal information:

"Any fines you get from magistrates can be written off as a business expense … We were let down by the courts, who didn't seem to be interested in levying even the pathetic fines they had at their disposal; we were rather let down by parliament in the end, with no legislation; and we were let down by the newspaper groups, which didn't take it seriously."

Yes, I know the UK ICO is trying to persuade enterprises that there's a good business case for implementing privacy protection in their IT systems and business processes; I know that the Coroners & Justice Act 2009 will beef up the ICO's powers (though it's not yet in force), and that the UK has recently closed a consultation on increasing the monetary penalties for data protection breaches. And I know that the EU Telecoms Reform package will require mandatory notification of data breaches by telecoms businesses.

But until people start being willing to pay more for better privacy, and/or compliance with data protection laws and regulations can be properly policed and enforced (with real teeth), I am cynical about the extent to which those measures will make a real difference. Even the monetary penalties being consulted on in the UK have a ceiling of £500,000, which may not be much for a large multinational concern.

More to the point, breaches have to be detected before they can be punished - but how is that going to be done reliably, when the infrastructure doesn't support proper regulatory monitoring and auditing?

In terms of concrete steps, from a personal privacy viewpoint it seems to me that the following is what really needs to happen (as a minimum), if the goal is to incentivise the adoption of PETs and similar technologies:

  1. EU data protection laws need to be given teeth that are sharp enough to draw real blood, and relevant authorities need to be given the resources for proper policing. The laws ought to be enforced, and seen to be enforced.

    One big issue is harmonising the minimum penalties for breach of data protection laws, so that the same breach is treated at least as severely in one member state as in another - otherwise there is no real deterrent effect.

  2. EU laws are relatively privacy protection friendly in substance, but probably could be beefed up to explicitly require:
    1. Stronger powers and more resources for authorities to audit and monitor compliance - and that should include "technical means" for auditing, a phrase often used in EU legislation. This should involve at the very least the use of appropriate technical standards for information assurance (not developed fully yet, I know, but see e.g. ISTPA's work on a privacy management reference model). This is indeed moving in the direction of the "accountability and transparency" camp (for more on which see e.g. the Galway report).
    2. Default settings to be privacy preserving (rather than the opposite, which is far more common in practice) - given that "people do defaults".
    3. Consents to be real and fully informed - which is related to privacy salience in some respects. See the fascinating chapter about "engineered consent" in the Identity Trails book, and the work being done by EnCore.

For a detailed technological and indeed general analysis do read Robin Wilton's September 2009 paper "What's Happened to PETs?", which I confess I've only just got around to reading (I wrote the above before reading it).

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.