Wednesday, 16 December 2009

Bruce Schneier on the future of privacy & security v. privacy, 4 December 2009


Open Rights Group: Bruce Schneier Security Talk (Q&A) from Open Rights Group on Vimeo.

To techies, and probably some others, Bruce Schneier is a security god. Not only does he really know his stuff, but he has a gift for communicating with great clarity.

He gave an excellent talk for the UK Open Rights Group on Friday 4 December 2009, on 'The Future of Privacy: Rethinking Security Trade-offs'.

Much of what he said will be familiar to those who read his security blog. In fact it was very similar to a previous blog post by him on the future of privacy.

The video of the talk, above, was posted on the ORG blog (linked from his own blog), with a giveaway of 4 copies of his book to new ORG joiners; see the ORG blog post for details.

For those who prefer text to video, below are some other selective highlights from the talk. They're from my notes (and I've not re-watched the video), so they're not necessarily verbatim or indeed even in order, and of course any mistakes are mine alone.

Anecdotes and quotes

Some good quotes from the talk (slightly paraphrased):

In the US, data which it's illegal for the government to collect, they buy from corporations; data which corporations can't get, they get from the government for free.

Data is the pollution problem for the information society. All processes produce it and it stays around and festers and we have to deal with it somehow.

And this I think is true - third party individuals such as friends, not just businesses, may disclose your personal data:

The hard thing to deal with is data about you legitimately captured and then posted by someone else.

My favourite anecdotes from this talk:

Someone from the US Department of Homeland Security asked him, don't you want to know who's sitting next to you on the airplane?

He said: "No, I don't want to know who he is; I just want to know that he's not going to blow up the plane. And if he is going to blow up the plane, I don't care who he is either!"

In other words, it's not about identity, it's about intention. He thinks simply trying to map identity to intention makes little sense. (See also his blog post of his testimony about ID cards - others like the Economist think chipped cards are a particularly bad idea - and see his blog post about airline security generally.)

On the "I've got nothing to hide" argument (discussed further below):

On live radio someone raised the "I don't care about my privacy, I've nothing to hide" argument. His answer: "What's your salary?"

The questioner refused to reply…

More highlights

Data trails are inevitable; data storage & processing gets ever cheaper

We leave digital footprints everywhere. It's not a question of malice on the part of governments or businesses; data is just a natural byproduct of computer and communications technology, of the information society, of the things we do.

Computers, including mobile phones, create transaction records of everything we do. And our data, these records, have value, e.g. for marketing - not always lots, but some.

Data is also a byproduct of information society socialisation - the way we socialise now produces data; all socialisation systems that 20 years ago were face to face or by voice are now mediated by computers in such a way that they produce data. More and more corporations and IT systems are mediating and producing data.

In the information society, most of your data is not controlled by you; phone records, credit card transactions etc, all that data is about you but is not controlled by you.

Data storage costs are dropping to free, and similarly with data processing. It is becoming easier and cheaper to keep all the data rather than work out what to keep and what to throw away. Data collection, storage and processing costs are continuing to drop.

Surveillance is becoming wholesale, and privacy is deliberately de-valued

With data collected it's possible to do surveillance backwards in time, not just forwards. Data can be valuable later.

Data processing (including data mining) is cheap enough that it's possible to do wholesale surveillance, to follow every car, every one. All behavioural marketing is based on the notion of following everyone so that marketing can be targeted more effectively.

There's a lot of noise and error but for applications like marketing it's good enough to provide just a bit more edge over the competition. Less so for law enforcement.

We have these systems because the data is there. We have surveillance systems because it's useful to government and corporations.

People are seduced by convenience into making bad privacy trade-offs; they only think about privacy when it's salient to them. It's not that they don't understand the risks, but that they only focus on one thing at a time. Anti-privacy forces prey on these tendencies: they do their best to make privacy less salient, e.g. by making it a bit annoying to get to privacy settings.

(Note - see further this on privacy salience:

"sites appear to craft two different messages for two different populations. Most users care about privacy about privacy but don’t think about it in day-to-day life. Sites take care to avoid mentioning privacy to them, because even mentioning privacy positively will cause them to be more cautious about sharing data. This phenomenon is known as “privacy salience” and it makes sites tread very carefully around privacy, because users must be comfortable sharing data for the site to be fun." [And, I'd add, for the site to make money!])

Technology, the great disruptor

Cameras are everywhere but today we can still see them. 10 years from now they'll be too small to see.

Identification is only temporary; once automatic face recognition is working, there'll be no need for ID to know who you are.

Automatic voice recognition and keyword recognition are coming; will you be able to turn your cellphone microphone off, or find that it gets turned on remotely?

Once "life recorders" become common, it'll be thought suspicious not to wear them. Brain scan technology can tell if you've seen something before e.g. a terrorist training camp; the technology can only get better.

This era heralds the death of ephemeral conversation. Many people have been caught out by emails etc they thought were chat or had been deleted.

Laws & other infrastructure need care - they'll last for longer than we think

Laws written for one technology tend not to apply to later technologies, although legislators are trying to make laws technology-invariant.

Decisions on privacy and anonymity made today based on the security concerns of the moment will become infrastructure, and we'll be stuck with them for 10 or 20 years.

Defaults matter

Watch the defaults (as Facebook has learned). People do defaults.

To make people have less privacy, just change the defaults, don't change what's possible. Microsoft has also learned about defaults; finally the defaults for Windows are good.
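As an entirely hypothetical sketch of that point (in Python, not any real site's code): the options available to the user are identical either way; only the starting value differs, and the default is what most people end up with.

    # Hypothetical privacy setting - a minimal sketch, not any real site's code.
    from dataclasses import dataclass

    @dataclass
    class ProfileSettings:
        # Flip this default from "friends" to "everyone" and, without removing
        # any choice from anyone, the typical user's profile becomes public,
        # simply because most users never touch the setting.
        visible_to: str = "friends"   # possible values: "everyone", "friends", "only_me"

    print(ProfileSettings())               # what most users get: visible_to='friends'
    print(ProfileSettings("everyone"))     # same options, less privacy - only the default changed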

(Side note - All this is of course particularly relevant given the recent controversy about Facebook's new privacy practices and privacy defaults, which were then changed again in short order after detractors pointed out that they claimed to be more privacy-friendly while actually being the opposite - see e.g. comments by EFF, Light Blue Touchpaper, Epic, Reuters, Blogzilla, Out-Law, while Ben Laurie pointedly suggested the best option is not to use Facebook at all!

Out-Law have also noted a recent report by security experts Sophos, who found that many Facebook users readily accept friend requests from strangers, allowing the "friend" to access vast quantities of info about the user - access rights that are down to Facebook's defaults.

Improvements in network analysis techniques will also help businesses like Facebook find out truly relevant connections between users, irrespective of defaults, which is again very valuable information, and could potentially include sensitive data like political beliefs:

"all networks share a remarkable property: their nodes can be classified into groups with the nodes connecting to each other depending on their group membership. In a social network, for example, people can be grouped by age, occupation, political orientation and so on. The method proceeds by averaging all possible groupings of the nodes, giving each grouping a weight that reflects its explanatory power.")

Security & privacy; privacy & openness

Security vs privacy is a false dichotomy. It makes no sense. Only identity-based security hurts privacy. Door locks, tall fences, those protect both security and privacy.

The real dichotomy is not security vs privacy, but liberty vs control. That explains why privacy and openness differ depending on who we are talking about.

Privacy tends to increase your power, openness tends to reduce it.

Between government and people, there is a power imbalance - government tends to have the power. (I.e. it's a question of balance of powers or separation of powers, of checks and balances, concepts essential to democracy.)

Open government reduces government power, and so is an equaliser. Privacy for the people increases the power of the people and decreases the power imbalance. Forced openness in people increases the imbalance. So we want government to be open, and individuals to be private.

Knowing each other's secrets doesn't work because of the power imbalance. If a policeman stops you and you show him your ID and he shows you his ID, it's not the same! If a doctor asks you to take off your clothes you can't say "You first". There is a fundamental asymmetry in those sorts of interactions, and privacy reduces or increases the asymmetry.

Real security is liberty plus privacy. Privacy is part of security. Individual privacy is part of what we want. Open government is part of what we want.

Some types of security do require us to give up our privacy, e.g. police investigations, sharing medical information, identification in a corporate setting.

In areas where security is opposed to privacy, where we are forced to violate privacy and give up power, we need oversight and auditing to maintain security (Quis custodiet ipsos custodes?). We willingly give power to the police to intrude into intimate aspects of our lives to solve crime, but there is a process in place to regulate it, a warrant system whereby a neutral third party determines probable cause and how to minimise the privacy invasion - and that by and large works. Similarly, data minimisation helps the police to do their jobs while preserving rights.

The "death of privacy" is over-rated. It is not inevitable. Just because the technology exists it doesn't follow that it will be used in such a way that it invades privacy. It's a balancing act. Technology changes the balance, but it doesn't make it go away.

We can accept the new balance, or deliberately reset it (e.g. the US video rental privacy laws - overly narrow, but at least it's an attempt to reset the balance).

He favours measures such as access to your own data and what is held about you, inhibiting third party data collection and use, limiting secondary use, erasing data, opt in not opt out, prosecuting criminal users of data.

What next? (including points from the discussion afterwards)

Defending privacy is very difficult because it's very contextual; it tends to be a social construct which by definition will involve a third party. Health data is private, but you know it because your doctor told you. We want our friends to see our data on Facebook, yet we want it to be private.

The "I've got nothing to hide" argument for giving up privacy is too glib. Privacy is part of dignity, it's a basic human need. (Also see the anecdote above, and his brilliant post on why privacy matters as a basic human need and deserves protection). A paper dealing specifically with the "I've got nothing to hide" fallacy was also mentioned - see my separate summary of Daniel Solove's excellent paper "'I've got nothing to hide' and other misunderstandings of privacy".

In the US, a common refrain is "Give up your privacy - or terrorists will kill you!". So of course people agree to give up their privacy.

What about education? (e.g. so people know to beware of the defaults). He thinks ritualising it through "education" and leaflets etc won't work; people need to learn it through their environment, and awareness needs to be raised, e.g. through scandals. (Although it was noted that the only scandal which seems to have registered with people, in the UK at least, was the child benefit CDs data loss involving the personal details of all families in the UK with a child under 16. Most other scandals seem to be considered par for the course now.)

The end result will be less privacy simply because computers naturally spit out data - unless we deliberately make them not do it.

We're not going to engineer our way out of this. Lots of good work has been done on privacy enhancing technologies (PETs) e.g. David Chaum's digital cash, but without economic incentives they will never be used.
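For readers unfamiliar with the example: the core idea behind Chaum-style digital cash is the blind signature - the bank signs a "coin" without ever seeing it, so the spent coin cannot later be linked back to the withdrawal. Below is a toy Python sketch of an RSA blind signature with tiny illustrative numbers; it is not Chaum's full scheme (no padding, no double-spending protection, nothing like real key sizes), just an illustration that PETs of this kind are technically feasible.

    # Toy RSA blind signature - illustrative only, insecure parameters.
    import math, random

    # Tiny RSA key pair for the "bank" (the signer); real keys are far larger.
    p, q = 61, 53
    n = p * q                           # modulus
    e = 17                              # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

    m = 42                              # the "coin" the user wants signed

    # 1. User blinds the coin with a random factor r, so the bank never sees m.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # 2. Bank signs the blinded value without learning m.
    blind_sig = pow(blinded, d, n)

    # 3. User unblinds, obtaining a valid signature on m.
    sig = (blind_sig * pow(r, -1, n)) % n

    # 4. Anyone can verify the signature with the bank's public key.
    assert pow(sig, e, n) == m
    print("valid, unlinkable signature on the coin:", sig)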

We have to push for laws, a legal substrate that will encourage the use of PETs. It's like e-voting: good systems are not used because lesser systems are cheaper. (Note: standard e-voting machines have been hacked.)

What will it take to change the laws? To change the laws properly, it will take a new generation of people who grew up with these technologies and understand what privacy means in the information age. That may not happen for 2 or 3 generations.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.