Sunday, 28 February 2010

Art. 29 working party's priorities 2010-2011

The EU article 29 working party on data protection and privacy have published their work programme for 2010 to 2011.

The strategic themes they will focus on are -

  1. implementation of the Data Protection Directive and working on a future comprehensive legal framework (as to which see the article 29 working party's paper on the Future of Privacy)
  2. globalisation
  3. technological developments
  4. enhancing the effectiveness of the art. 29 WP and national data protection authorities e.g. investigation and enforcement including harmonisation of DPA powers and co-operation
  5. topical issues

- based on which they're working in particular on:

  • interpretation -
  • implementation of the revised e-Privacy Directive
  • considering the impact of the Lisbon Treaty
  • Binding Corporate Rules, safe harbor, adequacy of third countries' regimes
  • international standards work e.g. ISO, the Madrid Declaration, OECD guidelines review
  • cloud computing
  • profiling and behavioural advertising
  • search engines and the "right to be forgotten"
  • social networking sites
  • RFID privacy impact assessments
  • financial matters
  • traveller data
  • updating WP80 on biometrics
  • possibly updating WP73 on eGovernment and ID management

They say they're available for requests for opinions by the European Commission and others, notably on privacy by design, accountability, strengthening the role of data subjects and other areas discussed in their Future of Privacy paper.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Internet privacy - New York Times article

Good article in the New York Times on online privacy, taking the view (as I've always thought) that privacy notices are meaningless and quoting US Commerce Department official Daniel J. Weitzner as saying "There are essentially no defenders anymore of the pure notice-and-choice model".

While updating laws or regulations may be (part of) the solution, the article points out some potentially helpful tools being developed, e.g.:

  • real-time "privacy nudges" like onscreen alerts reminding users of privacy implications before they disclose certain info like birthdates online, and
  • interesting research showing people pay more attention or are more likely to respond to anthropomorphic (human-like) images, e.g. pictures of eyes instead of flowers

as well as anonymous "incognito" browsing as standard.


Saturday, 27 February 2010

"Data controller", "Data processor" - art. 29 WP opinion issued

The EU Article 29 Working Party have issued their Opinion 1/2010 on the concepts of "controller" and "processor" under the EU Data Protection Directive.

The concepts of "data controller" and "data processor" are tricky and can trip people up or get in the way, so the opinion is well worth reading.

I've only just seen it and wanted to post this heads-up; I've not read it yet myself.


Google privacy & opt-out

Brilliant spoof video from satirical site The Onion, particularly funny after the Google Buzz privacy debacle -

Google Opt Out Feature Lets Users Protect Privacy By Moving To Remote Village


Friday, 26 February 2010

Genders that get ahead in law… or get a date…

Queen's Counsel appointments are out (QCs are very senior court lawyers, for any non-British readers), but the percentage who are women (15.5%) is much the same as it was 10 years ago (list of successful applicants; stats by percentage only), and in fact lower in percentage terms than in 1998, which is disappointing.

I converted the figures into this graph; you'll see it's close to a flatline:

There's still clearly a glass ceiling, indeed a glass cliff for politicians & business executives; there's still a gender pay gap (US general research), and workplace gender inequalities endure; while in science it's been found that women researchers are less likely than men to get major career funding grants.

Even though there's little difference between the sexes in terms of math abilities, it seems that the work environment plays a big role in putting women off, at least in the case of computer science ("computer games, science fiction memorabilia and junk food"), though I guess that Star Trek posters aren't so much in evidence in law firms!

There's a clear need for more female QCs - not least because women need female role models to encourage them into a particular field, far more than men need male role models. However, it seems that unfortunately women as well as men support the traditional gender hierarchy:

"both men and women respond in a more hostile way to a woman who violates sex-role expectations, than to one who adheres to them. Secondly, that the more an individual supports social hierarchy in general (that some people should have more power and resources than others), the more hostile they responded toward a woman who violated sex-role expectations."

By way of light relief, on a far less serious note it also seems that if you're a female lawyer your chances of becoming Queen's Counsel are about the same as your chances of getting a date with a British man, according to a recent European survey by dating agency PARSHIP. See the tables below.

Clearly creative types come out tops as far as both genders are concerned. I don't know where developers and software engineers fit into this - "scientists", I suppose?

British men most want to date…

1 Artist, writer, musician - 46%
2 Doctor - 31%
3 Teacher & nurse - 28%
4 Scientists & academics - 27%
5 Lawyer - 16%
6 Advertising/Marketing - 14%
7 Housewife - 14%
8 Journalist - 12%
9 Accountant - 9%
10 Sales person - 9%

British women most want to date a…

1 Artist, writer, musician - 35%
2 Architect - 30%
3 Doctor - 28%
4 Lawyer - 26%
5 Scientist & academic - 22%
6 Accountant - 22%
7 Engineer/surveyor - 21%
8 Teacher - 16%
9 Advertising/Marketing - 12%
10 Pilot - 9%


Cross border authentication - security issues - ENISA report

ENISA have been busy lately. They've just released a report on Security Issues in Cross-border Electronic Authentication (by Dirk Hartmann and Stephan Körting, HJP Consulting GmbH, 63 pgs), see summary.

These issues are clearly important given the EU goal of improving the interoperability of electronic identification and authentication systems with a view to enabling cross border management of citizens' identities, improving administrative efficiency, accessibility and user-friendliness, and reducing abuse and fraud as well as costs.

Their report analyses the current position (highlighting legal issues, mainly data protection, as well as technical ones), evaluating the security risks of electronic authentication in cross-border solutions by reference to two case studies (on which more below).

Not surprisingly, they conclude that data protection differences and the legal and contractual framework pose a challenge, but so do secure credentials, cross border authentication of system participants (service providers), the general security of online connections, technological differences, and agreeing a common security policy for (application-specific) electronic cross-border transactions.

The report looked at two projects offering cross-border authentication, as case studies:

  • Netcards/EHIC (European Health Insurance Card) - electronically readable European health insurance card to facilitate access to health care services for insured European citizens during temporary stays abroad, and
  • Stork (Secure idenTity acrOss boRders linKed) - pilot project to simplify administrative formalities by providing secure online access to public services across EU borders.

See also ENISA's Nov 2009 position paper on privacy & security risks when authenticating on the internet with European ID cards.


Data abundance - Economist articles

The Economist magazine has a couple of short articles on:

  • the data deluge - including open data, data sharing, linking & data mining, data losses & privacy breaches, and recommending transparency, data breach notification and security audits with published results.
  • managing data & information overload - I like "data exhaust" better than "digital footprint", and the prediction that the sexiest job will be "statistician", i.e. getting "wisdom" out of the mountains of data!


Hitler & Cloud Computing Security…

A couple of days ago EU cybersecurity agency ENISA posted a video on cloud computing to go with their excellent late 2009 papers on the subject, namely -

I was going to blog that video but it seems to have disappeared behind a login screen for some reason. Hopefully they'll release it fully another time. Meanwhile, enjoy this video instead!

UPDATE - the ENISA video is now available! It "gives an introduction to ENISA's Risk assessment and assurance framework for cloud computing in the words of the experts who contributed to the report."


Thursday, 25 February 2010

Behavioural biometrics / marketing - ENISA briefing

ENISA Briefing: Behavioural Biometrics (by Giles Hogben, 10 pgs): "an introduction to the possibilities offered by behavioural biometrics, as well as their limitations and the main issues of disagreement between experts".

Particularly topical given the recent news about identifying people based on how they type, although behavioural biometrics - which include gait and blinking patterns as well as keystrokes, voice or text style and, more subtly, ECG or EEG patterns - of course have benefits from a security & authentication perspective.

From the key points:

  • "Some behavioural biometrics, require specialised and sometimes highly obtrusive equipment which may be off-putting to users.
  • Other behavioural biometrics on the other hand offer a completely unobtrusive technique to identify or classify individuals. Such unobtrusiveness may be challenging from the point of view of collecting user consent, as required by law in many jurisdictions.
  • Data collected by behavioural biometrics may be used for secondary purposes which can involve the processing of highly sensitive data which may be inferred from the data collected.
  • Behavioural biometrics are vulnerable to several spoofing attacks."

The briefing also notes the overlap with behavioural marketing ("The same data which might allow the detection of anomalous behaviour for intrusion detection purposes – e.g. keystroke dynamics, haptic feedback etc..., could also be used to classify individuals for marketing purposes.") and the possibility of developing privacy-enhancing technologies to limit the exposure from collected behavioral profiles.
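To make the keystroke-dynamics idea concrete, here is a toy sketch of how a typing sample might be matched against an enrolled user's rhythm. This is purely my own illustration under simplified assumptions (timestamps, distance metric and data are all hypothetical), not ENISA's method or any production biometric system:

```python
# Illustrative keystroke-dynamics matching (hypothetical data and metric):
# compare a typing sample's inter-key "flight" times against an enrolled
# profile using mean absolute deviation.

def flight_times(press_times):
    """Gaps between successive key-press timestamps."""
    return [b - a for a, b in zip(press_times, press_times[1:])]

def match_score(profile, sample):
    """Lower = closer to the enrolled typist; sequences must be same length."""
    gaps_p, gaps_s = flight_times(profile), flight_times(sample)
    return sum(abs(p - s) for p, s in zip(gaps_p, gaps_s)) / len(gaps_p)

# Enrolled user types a fixed phrase with a characteristic rhythm.
enrolled = [0.00, 0.18, 0.35]
same_user = [0.00, 0.19, 0.37]   # similar rhythm
impostor = [0.00, 0.40, 0.55]    # noticeably different rhythm

assert match_score(enrolled, same_user) < match_score(enrolled, impostor)
```

Real systems use far richer features (dwell times, digraph statistics, per-key models), but even this crude distance shows why typing rhythm can act as an identifier.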


Monday, 22 February 2010

Links of interest 22 Feb 2010

Not had time to blog much lately, so here are some links to recent developments of interest, in no particular order - and this is just what I've come across in the last 2 weeks or so!

Privacy & security

  • Your typing style could identify you uniquely - yet another way to identify an individual internet user through the cadence or rhythm of their typing, another weapon for the de-anonymisation armoury. See Ars Technica, CitMediaLaw comments.
  • 10 information security tips for employees developed by ENISA "with the aim of focusing employees' attention on information security and allowing them to recognise IT security concerns and respond accordingly"
  • Tracking people - Autonomous Production of Images based on Distributed and Intelligent Sensing (APIDIS) system for tracking ball and players in sports matches "could also be useful for surveillance, when it could track groups of people on CCTV networks"
  • Forging passports (including British ones) for use in connection with an assassination is scary indeed. See Amberhawk, Reuters. (ID cards can be faked; techies knew that, even if the UK government didn't seem to want to know.)
  • Internet safety
  • Linking offline shopping behaviour to online ads - Yahoo & Sainsbury's Nectar make deal allowing online advertisers to target consumers based on their high street purchases, linking high street supermarket spending with the consumer's Yahoo! login (though it appears to be opt-in, at least) - IAB
  • Webcam spying - the stuff of movies, someone spying on you through your computer's webcam and mic, but a US school seems to have been watching students (and their families?) at school and at home using school-supplied laptops, and has rightly been sued - BoingBoing; the BBC have picked it up; Ars Technica say the school's backed down.
  • Ubercookies and identifying website users - Arvind Narayanan describes how "ubercookies" can be used to identify visitors - first, the history stealing and group membership correlating technique I mentioned previously, then more sophisticated attacks using what you share and other "footprint" traces you leave on the web; and next a bug in Google Docs (which Google said they'll fix) that lets sites identify you too.
  • Security - Chip & PIN cards can be used without knowing the PIN - Light Blue Touchpaper
  • DNA retention boo boo - 5 case studies submitted by Home Office to MPs to justify retention of innocent people's DNA were actually 4 with one being included twice… ComputerWeekly
  • ACTA (Anti-Counterfeiting Trade Agreement) negotiations -
  • Government, business and social networking logins stolen through Kneber botnet virus - Reuters, ComputerWeekly
  • PleaseRobMe - lots of coverage of this site, which aims to raise awareness that announcing your location publicly online, including the fact that you're not at home, may not be a good idea, particularly with the rise of location-related services or games like FourSquare - BBC, TechCrunch
    • Broadstuff: "I took one of the people on the first PleaseRobMe screen I looked at… and found their home address via a quick use of Twitter and Google. Took 5 minutes or so (the person was about the 10th I tried). You could fairly quickly build some algorithms to automate that mashup process".
  • People's locations & movements are predictable - study of "cellphone traces" showed that "regardless of whether a person typically remains close to home or roams far and wide, their movements are theoretically predictable as much as 93 per cent of the time." This US study made use of cellphone records collected for billing purposes and anonymised, but of course I wouldn't be surprised if someone managed to de-anonymise them…
  • Top 25 programming errors that jeopardise security, updated. ComputerWeekly said New York State is updating its procurement terms (application security procurement language) to address these top 25 errors, with other states to follow. Will the OGC ensure UK government procurement requirements are updated too?
  • Google Buzz privacy debacle (exposing key Gmail contacts & Google Reader shared items to the world, etc) & complaints galore -
  • Data protection audits - the ICO will have more powers come April 2010 including auditing powers; they've issued for consultation a draft Code of Practice on Assessment Notices as to how they'd conduct audits. Out-Law report.
  • Model contractual clauses for transfer of personal data outside the EU - recently modernised. Helpful for multi-national businesses especially for subcontracting & out-sourcing. See Out-Law.
  • CV poaching - I didn't know this was going on:
    • "…it turns out that the candidate fell victim to resume poaching; someone grabbed their resume and submitted the candidate without the candidate’s knowledge… the recruiter could lose out on a potential fill, the candidate can be disqualified by the client for shopping around (a scorched earth response – rather than attempting to sort out what happened, the client disqualifies any resume submitted more than once), and the client is put on the spot to intervene in a process they should never have been involved with in the first place…. If you do post your resume, anonymize it – make the recruiter come to you. Avoid using your LinkedIn profile as a resume (believe it or not, with enough detail an unscrupulous recruiter will just make the resume for you. The key is to just summarize your experience)."
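The movement-predictability item above is easy to get an intuition for. Here is a toy first-order Markov sketch (my own illustration, not the study's actual entropy-based method): given a trace of locations, it measures how often the most common next location after the current one is the one actually observed.

```python
# Toy illustration of why routine movement traces are so predictable:
# for each current location, guess the historically most common next
# location and count how often that guess matches reality.

from collections import Counter, defaultdict

def predictability(trace):
    """Fraction of transitions correctly guessed by a first-order model."""
    nxt = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        nxt[a][b] += 1
    correct = sum(c.most_common(1)[0][1] for c in nxt.values())
    return correct / (len(trace) - 1)

# A highly routine commuter is perfectly predictable in this toy model.
routine = ["home", "work", "home", "work", "home", "work", "home"]
assert predictability(routine) == 1.0

# Even with an occasional detour, predictability stays high.
varied = ["home", "work", "gym", "home", "work", "home"]
assert predictability(varied) == 0.8
```

The real study estimated an upper bound on predictability from the entropy of each person's trace, but the underlying point is the same: daily routine leaves very little genuine uncertainty to exploit.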

Other mobile / comms stuff



Friday, 19 February 2010

Photographing the police or public places - Met guidelines improved

Summary of changes

In recent years it's been harder to take photographs in public places or to photograph the police in the UK, because some police officers have controversially used anti-terrorism powers to stop photography and/or delete pictures, whether taken by journalists or members of the public. This obviously curbs freedom of speech.

The good news is, sometime during the last week or so the London Metropolitan police changed their Photography advice for the better. (Their version last year had already been amended following criticism; these are further changes.) All this ought to apply to filming or videoing too.

The updated guidelines now recognise that:

  1. It's important that the public and media have the freedom to take photos.
  2. Police shouldn't delete images from a digital camera or destroy film without a court order.
  3. What really matters is whether the info from the photo "is, by its very nature, designed to provide practical assistance to a person committing or preparing an act of terrorism".
  4. It would normally be unlawful to arrest people photographing police officers in the course of normal policing activities, including protests - because normally there wouldn't be grounds for suspecting the photographs were being taken to provide practical assistance to a terrorist or intending terrorist.
  5. While a police officer can ask why you're taking their photo, they can only do that "for a lawful purpose" and mustn't do it a way that "prevents, dissuades or inhibits the individual from doing something which is not unlawful".

Of course it's up to a court as to how to interpret the legislation, but the amendments to the guidelines can only be helpful to photographers and members of the public taking photos in public venues or at public events such as demonstrations.

I wonder if the improvements to the guidelines are related to the £5k Thames Valley Police had to pay out in Jan 2010 to a photographer who was arrested after trying to take photos of a road accident, or the lawsuit by a film-maker who was handcuffed and detained after filming police officers on her mobile phone - see the interview where she recounted her experience.

While it's good that the Met are now emphasising that the power to prevent photography of the police or in public places has to be exercised properly, they also need to make sure that officers on the ground understand what they can or can't do - as this incident last year illustrates. And while I agree that lots of police officers are very helpful and sensible people - and not just because I've had friends who worked in the Met! - officers in the street do need to be proactively kept informed by their superiors, especially about things as important as this.

There are however still issues with section 44 Terrorism Act stop and search powers being misused, e.g. the British Journal of Photography's report of Lord Carlile's Report on the operation in 2008 of the Terrorism Act 2000 and of part 1 of the Terrorism Act 2006, 18 June 2009 (para 196 onwards of that report also deals with photography).

The text of the changes

Here are the changes to the Met guidelines, for anyone interested -


One paragraph now reads (emphasis added):

"We encourage officers and the public to be vigilant against terrorism but recognise the importance not only of protecting the public from terrorism but also promoting the freedom of the public and the media to take and publish photographs."


It previously read:

"We encourage officers and the public to be vigilant against terrorism but recognise the balance between effective policing and protecting Londoners and respecting the rights of the media and the general public to take photographs."

Deleting images from digital cameras or destroying film

Now reads (2nd sentence is new):

"Officers do not have the power to delete digital images or destroy film at any point during a search. Deletion or destruction may only take place following seizure if there is a lawful power (such as a court order) that permits such deletion or destruction."

Section 58A Terrorism Act

Again emphasis added, bold bits are new, notes in italics:

"Section 58A of the Terrorism Act 2000 covers the offence of eliciting, publishing or communicating information about members of the armed forces, intelligence services or police where the information is, by its very nature, designed to provide practical assistance to a person committing or preparing an act of terrorism. [the bit in bold was added]

Any officer making an arrest for an offence under Section 58A must be able to demonstrate a reasonable suspicion that the information was, by its very nature, designed to provide practical assistance to a person committing or preparing an act of terrorism [was, "the information was of a kind likely to be useful to a person committing or preparing an act of terrorism"].

It would ordinarily be unlawful to use section 58A to arrest people photographing police officers in the course of normal policing activities, including protests because there would not normally be grounds for suspecting that the photographs were being taken to provide assistance to a terrorist. An arrest would only be lawful if an arresting officer had a reasonable suspicion that the photographs were being taken in order to provide practical assistance to a person committing or preparing an act of terrorism. [Was, "It should ordinarily be considered inappropriate to use Section 58a to arrest people photographing police officers in the course of normal policing activities, including protests, as without more, there is no link to terrorism."]

There is ["however" was deleted] nothing preventing officers asking questions of an individual who appears to be taking photographs of someone who is or has been a member of Her Majesty's Forces (HMF), Intelligence Services or a constable so long as this is being done for a lawful purpose and is not being done in a way that prevents, dissuades or inhibits the individual from doing something which is not unlawful.

Following these guidelines means both media and police can fulfill their duties without hindering each other."


Note that the previous version's wording is taken from Google's cached snapshot of the Met's photography guidelines page as at 11 Feb 2010, so may have been updated to the latest by the time you click the link to the snapshot.


Digital Economy Bill - initial obligations code outline, draft Costs SI

New on the Digital Britain website are papers on the Digital Economy Bill (see my redline showing the current Digital Economy Bill marked against the original version) including:

  • Outline of the proposed Initial Obligations Code which will govern ISP notifications to alleged file-sharers, discussing issues which might be covered by the code, such as:
    • Who can issue a CIR? (copyright infringement report)
    • Standards of evidence required
    • Timescales for submitting and actioning CIRs
    • What notification letters to ISP subscribers must contain
    • Copyright infringement lists
    • Code enforcement procedures
    • Appeals procedure
    • Possible grace period for ISPs
  • Draft Statutory Instrument - draft Online Infringement of Copyright (Initial Obligations) (Sharing of Costs) Order 2010 on the important issue of how much copyright owners must reimburse ISPs for notifications to their subscribers - for "notification costs" it's "a fixed sum [set by Ofcom] per copyright infringement report sent to any qualifying ISP", and there are also "qualifying costs" to be paid, including in relation to subscriber appeals. Note that the draft is:
    • "designed to give an idea of how the cost issues could be approached. The proportional split included in the draft is a working assumption. Before laying the SI, we will conduct a full consultation"
  • Letter from Lord Young to Jack Straw regarding consumer complaints about threatening copyright infringement letters from ACS Law on behalf of copyright holders, whether owners may overstep the line in trying to pursue alleged infringements, and concerns about the standards of evidence expected under the Digital Economy Bill. (A proposed amendment and discussion in the Lords about "groundless threats" is here as the link in the letter isn't clickable.)

Via @digitalbritain.


Thursday, 18 February 2010

De-anonymization illustrated; and guessing gender from web browser history

Wise Woman's Words recounts a quick but telling practical experiment she did with some students, which brought home to them how easy it is to re-identify people from limited data about them (age, sex, country of birth, bachelor's degree program and city they graduated in - the last 2 she said were unnecessary).

What a good way to illustrate the point.

She also mentioned a site I hadn't come across, which peeks at your web browser history to guess your gender. (You'll recall from my blog about de-anonymisation through group membership that it's possible for sites to check whether you've visited specific webpages, through your browser history.)

With the browser I use the most at home, Chrome, it was accurate - especially as I visit all sorts of sites linked to in my Google Reader feeds via Chrome.

But it got it wrong with Firefox, Internet Explorer and Opera - too many computing and programming-related sites visited through those browsers, I suspect!
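For anyone curious how such a site can turn a list of visited pages into a gender guess, here is a minimal sketch of the classification step. The site names and audience ratios below are entirely hypothetical (the real site reportedly used per-site audience demographics; I haven't verified its exact method), and the history-probing step itself is omitted:

```python
# Hypothetical female-audience ratios per site (0.0 = all male, 1.0 = all female).
# These numbers and domains are invented for illustration only.
FEMALE_RATIO = {
    "knitting-forum.example": 0.80,
    "fashion-blog.example": 0.75,
    "kernel-patches.example": 0.15,
    "code-qa.example": 0.25,
}

def guess_gender(visited):
    """Average the audience ratios of recognised visited sites."""
    ratios = [FEMALE_RATIO[s] for s in visited if s in FEMALE_RATIO]
    if not ratios:
        return "unknown"
    return "female" if sum(ratios) / len(ratios) > 0.5 else "male"

assert guess_gender(["kernel-patches.example", "code-qa.example"]) == "male"
assert guess_gender(["knitting-forum.example", "fashion-blog.example"]) == "female"
```

Which also explains my Firefox/IE/Opera results above: feed the classifier mostly programming sites and, whoever you are, it will guess "male".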


Thursday, 11 February 2010

Transfer of EU banking data to US - interim agreement rejected

The European Parliament today voted down the interim agreement on SWIFT allowing US authorities to access EU citizens' banking data due to "concerns for privacy, proportionality and reciprocity".

The timing of the signing of the interim agreement, literally just before the European Parliament were due to get greater powers in the EU, can't have helped either.

A new agreement with the USA will be negotiated (see the Commission's reaction), and meanwhile a Mutual Legal Assistance Agreement will be utilised to exchange financial data for anti-terrorism purposes.


Wednesday, 10 February 2010

Consent & revocation - EnCoRe paper

The EnCoRe ("Ensuring Consent & Revocation") project, a UK inter-disciplinary research project into informational privacy which I've mentioned before in this blog (e.g. in the Data Dozen of Identity Management for Privacy post), have produced a paper "Technical Architecture for the first realized Case Study" (138 pgs).

This paper defines the EnCoRe Technical Architecture for a case study, namely a hypothetical Enhanced Employee Data Scenario: the use by an organisation's employees of a Web 2.0-style service for work-related and personal purposes, and its related consent management requirements.

Appendix A of the paper sets out some example Use Cases (e.g. employee gets hired, promoted, changes their personal data, gets demoted etc) and some general legal input (Data Protection Act 1998 and Data Protection Directive principles) on those use cases.

I've not had a chance to read it yet, but from the summary:

"The scope of the EnCoRe Technical Architecture for this first Case Study encompasses all the technical functions required for the management (including capture and revocation) and enforcement of individuals' consents that are pertinent to the Case Study's scenario. The technical architecture is the block-level design of the necessary technical system, at the level of functional blocks (i.e., software and service components) and the data flows between them and to/from humans, other technical systems, compliance and other business processes and regulatory environments. Its goal is to provide the basis for an EnCoRe reference implementation that validates the approach and the technology. To that end this document's approach is to start with contextual information and overviews, and incrementally refine the level of detail."
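As a thought experiment only (my own toy illustration, not EnCoRe's actual design), the core idea of consent capture, revocation and enforcement can be reduced to a very small data structure. Everything here - the class, method names and data - is invented to show the concept:

```python
# Toy model of consent capture and revocation per data subject and purpose.
# Real architectures add provenance, audit trails, policy enforcement points, etc.

from dataclasses import dataclass, field

@dataclass
class ConsentStore:
    # data subject -> set of purposes currently consented to
    consents: dict = field(default_factory=dict)

    def capture(self, subject: str, purpose: str) -> None:
        self.consents.setdefault(subject, set()).add(purpose)

    def revoke(self, subject: str, purpose: str) -> None:
        self.consents.get(subject, set()).discard(purpose)

    def is_permitted(self, subject: str, purpose: str) -> bool:
        return purpose in self.consents.get(subject, set())

store = ConsentStore()
store.capture("employee-42", "staff-directory")
assert store.is_permitted("employee-42", "staff-directory")

# Revocation must take effect immediately - the crux of the EnCoRe problem.
store.revoke("employee-42", "staff-directory")
assert not store.is_permitted("employee-42", "staff-directory")
```

The hard part, which the 138-page architecture addresses and this sketch doesn't, is propagating that revocation through every downstream system and business process that already holds the data.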


Transfer of EU banking data to US - Council issues declaration

The Council of the European Union have just issued a declaration in defence of the interim EU-US Agreement on the Transfer of Financial Messaging Data for purposes of the Terrorist Finance Tracking Program (i.e. the agreement to provide info on EU citizens' banking transactions to the US authorities).

This seems perhaps to have been prompted by negative press about the Agreement being signed just before the European Parliament were due to have more powers and more say upon the Lisbon Treaty coming into force.


Tuesday, 9 February 2010

Digital Economy Bill - with changes blacklined

The Digital Economy Bill was reprinted yesterday, as amended in Committee in the House of Lords.

To help me figure out the agreed changes from the original version of the Bill, I produced a quick and dirty, very basic blackline (or redline, tracked changes or whatever you want to call it) of the amended Bill.

In case it's of use to anyone else, here it is, showing additions and deletions but not in colour (and not guaranteed to be 100% error free, of course):

Digital Economy Bill (as amended in Committee in the House of Lords 9 Feb 2010), showing changes from the originally introduced version of 20 Nov 2009
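For anyone wanting to roll their own comparison, a basic blackline of two plain-text versions of a Bill can be produced with Python's standard difflib module. A minimal sketch - the one-line "Bill" fragments below are invented purely for illustration, and in practice you'd read the two full texts from files:

```python
import difflib

# Invented fragments standing in for the two versions of the Bill text.
original = [
    "5 Obligation to provide copyright infringement lists to copyright owners",
    "17 Power to amend copyright provisions",
]
amended = [
    "5 Obligation to provide infringement lists to copyright owners",
    "17 Power to amend copyright provisions",
]

# A unified diff marks deletions with "-" and additions with "+".
diff = list(difflib.unified_diff(
    original, amended,
    fromfile="as introduced (20 Nov 2009)",
    tofile="as amended in Committee (9 Feb 2010)",
    lineterm=""))
print("\n".join(diff))
```

difflib can also emit a side-by-side HTML comparison (difflib.HtmlDiff), which is closer to a conventional coloured blackline.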

In relation to the internet access of suspected copyright infringers, you'll see the main changes are in:

  • 5 Obligation to provide infringement lists to copyright owners
  • 6 Approval of code about the initial obligations
  • 7 Initial obligations code by OFCOM in the absence of an approved code
  • 10 Obligations to limit internet access: assessment and preparation
  • 11 Obligations to limit internet access
  • 15 Sharing of costs
  • 17 Power to amend copyright provisions - the most changes are here

There are other changes e.g. to the provisions on:

  • internet domain registries and
  • orphan works and copyright licensing

but from a quick skim no changes to the original provisions on video games or public lending right.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Mobile social networks - ENISA's 17 golden rules for privacy and security

EU cyber-security agency ENISA have just issued a report on mobile social networking - "Online as soon as it happens" (PDF) (49 pgs). ENISA's reports are usually excellent - well-informed and clear without being too jargon-ridden - and from a quick skim this looks to be on form. From the summary:

"The report describes the social networking world and the mobile phone services allowing the users to experience the social networking sites (SNSs) on their handset, also illustrating the major risks and threats connected to their use. While many of the privacy issues originating from the web-based access to SNSs also apply to mobile social networks, there are also a number of unique risks and threats against mobile social networks. The report aims to provide a set of recommendations for raising the awareness of social networks users and in particular of social mobile users of the risks and the possible consequences related to their improper use."

Apt as today is Safer Internet Day 2010.

The report includes a section on the EU Data Protection Directive and the article 29 working party's opinion 5/2009 on social networking, the applicability of the Directive to non-EU social networks, and the question of whether the SNS user is responsible for compliance with the Directive (user as data controller and the implications of that). These are of course some of the most problematic data protection issues relating to social networking websites.

Here are their recommended golden rules "to raise awareness about the risks and threats related to the misuse of social networks, in particular when accessed through mobile phone, with advice on how to avoid unwanted consequences":

Golden rules

Pay attention to what you post and upload


Consider carefully which images, videos and information you choose to publish

Remember that a social network is a public space; only post information or upload images you are comfortable with, keeping in mind that at a later stage you might be confronted with the content you uploaded, e.g. in a job interview. Information and pictures you post online should be considered permanent. They can be copied and stored by other individuals and can resurface years later in search engines.


Never post sensitive information

Do not make information such as address, date of birth or financial data available in your profile. A criminal might access your profile and steal your identity.


Use a pseudonym

You do not need to use your real name in an online profile. Using a nickname can help you protect your identity and privacy; only close contacts will know who is behind the nickname.

Choose your friends with care


Do not accept friend requests from people you do not know

Be selective about who you accept as a friend on a social network. You do not have to feel obliged to add someone to your friends’ list. Politely refuse or simply ignore the request.


Verify all your contacts

Ensure that the people you are in contact with or who sent a friend request are really who they say they are. Do not trust them immediately.

Protect your work environment and avoid reputation risk


When joining a social networking site use your personal e-mail address

Do not use your company e-mail address but your private one, and do not post confidential or competitive information about your organisation. Be careful about the information you reveal about your workplace; for example, do not post pictures taken in front of your office with the company's address or logo in the background, as these may reveal where you work.


Be careful how you portray your company or organisation online

Consider what your employer would think before posting any comments or material online about your company or organisation.


Do not mix your business contacts with your friend contacts

You have no control over what your friends may post online or how they may portray you and consequently what your employer, colleagues and clients may be exposed to.

Protect your mobile phone and the information saved on it from any physical intrusion


Do not let anyone see your profile or personal information without your consent

Before accessing your profile through your mobile phone, pay attention to the environment and the people surrounding you. If someone is trying to see what you are doing, access your profile in a safer place.


Do not leave your mobile phone unattended

Someone with malicious intent could update your profile and status with false details. Remember to log out from the social network once your session is over, and do not allow the social network to remember your password (this function is called ‘Auto-complete’).


Do not save your password on your mobile phone

Mobile phones can be easily lost or stolen and if you save your password on your mobile device anyone who may have possession of it can access your profile, see your pictures and friends. Try to commit your password to memory and if you write it down be careful where you store it.


Use the security features available on your mobile phone

Remember to lock the keypad when not in use and to protect the device with a PIN or a password. Back up your details to another device, such as a PC, in case your mobile phone is lost or stolen. Configure connections (such as Bluetooth and Wi-Fi) to be secure, especially in airports and public spaces, and if your mobile device has a built-in firewall remember to enable it.

Respect other people’s privacy


Be careful what you publish about someone else

Do not upload pictures or personal information regarding other people without their consent. You might commit a criminal offence.

Inform yourself


Read carefully and in full the privacy policy and the conditions and terms of use of the social network you choose

Always be informed about who provides the service and how your personal information will be used and who has the right to access the information you post.

Protect your privacy with the privacy settings


Use privacy-oriented settings

Set the profile privacy level properly. Check the privacy settings of your profile — who can see your pictures, who can contact you and who can add comments in order to avoid making your profile available to everyone.

Report immediately lost or stolen mobile


Be careful when using your mobile phone and pay attention to where you put it

Immediately report a stolen or lost mobile phone that has contacts, pictures and personal information about you and your friends saved in its memory (e.g. friends whose contacts on the SNS have been synchronised with the mobile phone), and change your passwords on the social networks you are a member of.

Pay attention to the location-based services and information of your mobile phone


Deactivate location-based services when not using them.

Remember to deactivate the location-based features of your mobile phone if you don’t need them.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Friday, 5 February 2010

Providing EU personal data to US law enforcement authorities - consultation

The European Commission are consulting publicly (see full consultation paper and questions) on the future EU-US international agreement on personal data protection and information sharing for law enforcement purposes.

There are consultation questions on the purpose, scope, reciprocity and data protection implications of the transatlantic agreement, including "Should the agreement only cover government-to-government transfers of information? Or should it also be applicable to transatlantic transfers of personal data from private entities to law enforcement authorities?" Accountability is of course another major issue.

See for more background info the report of the High Level Contact Group on information sharing. The future US-EU bilateral agreement was mentioned by the European Data Protection Supervisor and was one of the issues touched on in the Article 29 Working Party's paper on the Future of Privacy.

Given the controversy over the vote in the European Parliament's civil liberties committee regarding the necessity and proportionality of the SWIFT agreement to transfer banking data of EU citizens to the USA, we can probably expect strong responses to this consultation.

UPDATE: I seem to have spotted this before the press release was issued. Here's the press release.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Thursday, 4 February 2010

Online copyright enforcement vs data protection

A Study on Online Copyright Enforcement and Data Protection in Selected Member States, prepared for the Commission by the Brussels office of law firm Hunton & Williams, looks useful - it's just been added to the EU web page listing various documents about IP rights enforcement:

"This study provides an overview of the legal situation regarding the interaction between online copyright enforcement and data protection at the European Union level and in six selected EU Member States, namely Austria, Belgium, France, Germany, Spain and Sweden. It has been prepared on behalf of DG Internal Market and Services of the European Commission by the Brussels office of Hunton & Williams, with the assistance of local counsel, and in the context of the “Stakeholders’ Dialogue on Illegal Uploading and Downloading” organized by DG Internal Market and Services. This study was purposely kept brief, and is not intended to provide an exhaustive analysis.

The first section provides a brief overview of the legal instruments relevant to online copyright enforcement and data protection, and of whether the European legal framework for data protection presents a barrier to the fight against online copyright infringement…

The second section analyzes issues regarding the interaction of data protection and online copyright enforcement in six selected EU Member States as of September 1, 2009."

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Wednesday, 3 February 2010

2nd FTC Privacy Roundtable 2010

I'd like to point out Arvind Narayanan's report of the 2nd FTC Privacy Roundtable 2010 - very interesting reading, particularly his views on Facebook. Oddly enough he was the only academic computer scientist on the panel.

Those who wish may compare Facebook's statements on the FTC panel with the comments made by Richard Allan on behalf of Facebook in the recent panel session on internet rights with Google, Vodafone and Open Insights at the London School of Economics.

(For anyone who doesn't know, Arvind Narayanan is one of the authors of various seminal papers on de-anonymisation and re-identification, a few of which I've already mentioned on this blog.)

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.

Internet rights, online privacy: views of Facebook, Google, Vodafone, Open Insights at LSE panel session

The future of internet rights was discussed at the London School of Economics on 25 Jan 2010 by a distinguished panel from some of the best known names in the digital world -

Issues discussed in this fascinating panel session included -

  • government requests for information; and policy making
  • innovation and privacy
  • advertising, free content and business models for the internet age
  • the cloud
  • deletion of personal data, and
  • consumer attitudes vs actions.

Questions from the floor included, why don't Google encrypt Gmail, the implications of personal data being around forever, etc.

There's an MP3 podcast of the discussion (about 92 mins) - podcast page; and (click the arrow to stream immediately) direct link to 43 MB MP3. (Would be good if Google were to release the Google Voice transcription technology or use it to transcribe and index MP3s like these!)

I've finally finished writing it up - based only on scrawled notes (I've not had time to listen through the podcast again), so while I believe I've captured the gist of it, it's not guaranteed to be verbatim or even 100% correct or complete. Sometimes I've paraphrased for brevity or clarity, and I've square-bracketed bits I couldn't catch. If anyone thinks I've made any errors or not reflected anything properly, please let me know.

Note that I don't necessarily agree or disagree with what was said, I've just tried broadly to reflect the session accurately, without comment. I thought Dr Fayyad was particularly insightful though. I've added a few links and notes for ease of reference.

1. Intro

The panel was chaired by Gus Hosein (GH), of the Information Systems and Innovation Group, Department of Management at the LSE (he's also Policy Director at privacy NGO Privacy International, although he wasn't chairing the panel in that capacity).

He showed a slide of what he considered to be the main themes related to the social study of ICTs and innovation, namely (in no particular order):

  • Design & development of the ICT artefact
  • Information risk & security
  • Global sourcing
  • ICT globalisation and developing countries
  • ICTs in public sphere
  • Technical & organisational innovation.

Having displayed a slide with quotes from -

Security expert Bruce Schneier "And it's bad civic hygiene to build technologies that could someday be used to facilitate a police state." [Note: from his CNN article]

Iranian police commander General Esmail Ahmadi-Moghaddam “Attending illegal gatherings, rioting and insulting the sacred are reason for police reaction. Those who organize the gathering commit a bigger crime.” [Note: found it in BusinessWeek]

- he put a series of grouped questions to the panel.

2. Within the law

  • How do you cooperate with government requests?
  • How do you participate in the policy making process?
  • Victims or deputies?


Some questions are better addressed by General Counsel so instead she focused on corporations' responsibility to engage in public debate and try to inform policy.

How search engines work and use data is rocket science to most people and for a public debate about that data the rocket science must be explained in such a way that it makes sense to policy makers, individuals, the media etc, and that kind of explanation and transparency must be built into the product.


This is one area where the internet is distinctly national rather than international. What really matters is where the data is, its home jurisdiction. Facebook data is held in the USA so US law prevails and Facebook has a responsibility to offer up the data if asked for it by lawful US authority, whereas it would have a legal problem if it gave it to Iran!

On top of the law, there's a layer of TOS and privacy policy, e.g. the service must not be used for illegal purposes. Most internet companies mean by that, don't use it for purposes that are illegal in countries where we respect the law, but if it's illegal in a country like Iran we won't stop you! So there's differential enforcement.

Another issue is, do you have bodies on the ground that countries can get to as well as data or computers they can take away? The UK authorities can use the Police & Criminal Evidence Act but the courts won't allow it if it can't be enforced because the data is kept outside the UK.

It's necessary to discuss what it's reasonable to request or to give in the light of the privacy policy, and that sort of discussion is taking place between all internet companies and countries and typically where it's Western Europe they'll come to an agreement, where it's Iran they don't.


It's a real mess with laws and jurisdictions and where data lives. E.g. when he was with Yahoo the Chinese government asked Yahoo to surrender information regarding the use of email and at the time it wasn't clear what was wanted so they provided routine provisioning of data [?] on legal request. They had to obey because they had to operate within China's rules in order to have a presence in the country - have a bank account, charge, do business etc - and Yahoo wanted to do business there as did Microsoft and Google.

It's amazing that people don't pay much attention to the fact that in almost every country including the US, with the exception of a few Western countries, governments impose pretty draconian conditions, restrictions and demands on operators of big systems - ISPs, email providers, public info providers on the internet. US requirements especially post 9/11 are scary.

Why does that mess exist, why does every government say it has rights to see every email etc? The deeper reason is that the infrastructure, connectivity and lots of services are essentially "free" to consumers, it's frontier territory where anything goes. Contrast with a paid service where the position is understood, and relationships are with adults who can sign, etc.

We need debate to educate the public and lead to regulation that's rational.

Almost all the technology used today originally had military or state-funded purposes. The problem is not the technology but the lack of clarity on policy. The internet is more liberating than enslaving, a democratising force.


Vodafone is in many countries with people on the ground, regulated networks, easily garnished assets, so it's difficult to conduct a campaign of civil disobedience.

So they ensure that at a group level they provide advice and guidance to local operating companies to ensure they understand when they must comply and when there is room to push back - and sometimes they do push back. Even if they try to withdraw from a country, there'll be local competitors that may not have the motivation to be as protective of rights.


If Google pulled out, companies left behind with major consumer use may have worse control over content.

3. Innovation

  • What developments have been most influential in changing privacy?
  • How are these forms of business regulated?
  • Is privacy law inhibiting innovation or creating a safe space?

Profits are up at Google, Vodafone, Yahoo. The New York Times has a useful table comparing Google, Microsoft, Amazon and Yahoo. At least 3 of them are in the space of apps. There are privacy law challenges especially EU privacy law, which might inhibit innovation e.g. cloud computing, advertising, other business models.

Only one company on the panel is within the EU's jurisdiction, i.e. Vodafone, who are launching an app store and have to follow rules in relation to apps which Facebook needn't.


The EU Data Protection Directive was enacted in 1995 and those laws may not necessarily be applied in the same way now, with apps that run on mobile devices from different companies who are not the network operator. The app developer could be in a garage. Who is the data controller, who is the data subject, who makes decisions on how information is used or shared? Data protection rights are the greatest challenge.


Google has strong ties with the open source community, often developing for the model where anyone can create code, add functionality, use, share, put it up on the internet, show ads and make money if successful, and continue to develop the functionality. It's difficult to have a clear dividing line between amateur and professional.


There are stories about Facebook apps, who develops them and how they're using the info. What about consumer protection?


Bear in mind that apps have their own jurisdiction - UK law if a UK company develops the app. The fundamental question we need to address is, data protection law was designed having in mind big organisations as data controllers and small citizens, and one to one transactions with a particular data subject in a particular jurisdiction.

But in the apps world there are multiple players in multiple jurisdictions and increasingly data controllers could be data subjects too. E.g. a photograph that someone doesn't like could be posted online controlled by a particular user using services from different providers - geotagging, maps, ads, browser, phone platform.

Privacy vs freedom of expression, rights of photographed vs rights of photographer - if someone objects to a photo do they go to Facebook, Picasa or Flickr to get it removed, or to the taker of the photo who is really the data controller, and deal with it at the source?


The law hasn't kept up with technology and when technology moves faster than legislation it's best not to interfere till you figure out what you want to interfere with. No government knows how they want to interfere.

There are 2 key principles here:

  1. Informed consent - the consumer should know and the info should be available and discoverable (it's a separate question whether government should force consumers to learn).
  2. Opt in - the consumer should positively say yes I do want you to track me. Consumers do this willingly with loyalty schemes, mileage points etc because there's a perceived value. If companies figure out how to explain the value, and the value is there (Google spends millions to ensure its index is fresh and complete), and in return for using a particular service the consumer will get particular rights, once we have a value exchange and the information is available we will have a clean equation. Once we have an opt in system, no reasonable government would interfere.


You'd need a separate panel session just on opt in or out!

It shouldn't be government regulators making the decision because how would they enforce it?

Possibly it's better to encapsulate basic principles already in the Data Protection Directive through Privacy By Design - ensure that when you build the technology certain principles that protect privacy rights are embedded in the technology. [Note: see my thoughts on PETs and posts on privacy enhancing technologies generally.]

The best way is through industry standards. Vodafone are working with the [PSMA? CTIA?] on developing guidelines for the wireless telecoms industry.


There first needs to be agreement on the meanings of "opt in", "opt out", etc, or you may regulate out basic protections against advertising spam, distributed denial of service attacks, and click fraud.

4. Advertising

  • Is advertising still key to a free internet?
  • How will policy shape this space?

Previously it was thought that advertising will make everything free - in exchange for giving up privacy, get free services eg search on the basis that info can be used and processed for advertising purposes.

But recently the New York Times announced a move away from free model to a '90s subscription concept.


Google's always been excited and proud that it successfully enabled many individual sites to put up useful content for people and made that economically viable.

Now traditional industries are moving to internet models but it's not just a case of whether advertising supports quality content for free on the internet.


With the start of the internet something strange and transient happened and we're beginning to see it unwinding, i.e. the feeling that everything is free.

Think of it from an economic perspective. Google AdSense, Yahoo content match, Microsoft content advertising were designed to enable the public to put up ads on their sites and share revenue. Historically that's not worked to the advantage of the publishers partly because Google, Yahoo etc are more advanced in the technology and understand the medium better than older companies whose roots are in a different world.

If the New York Times are spending money to develop content, paying reporters to stay neutral and report for the sake of truth and trying to separate advertising from content, there's a cost to that.

Most of the value of news is in the headlines and if consumers have a way to get that for free without much effort, why should they be paying for it?

The New York Times's economic rationale is, they've seen newspapers fall apart because the model was insufficient to support them. The Wall Street Journal recognised this early on and while others embraced the free model the Wall Street Journal said only a few things would be free, the archive is available to subscribers only. And this became the strength of their online business, even over their offline business.

But someone needs to pay the piper or these things will go away and we'll all be worse off in a world without professionally generated content. We won't get movies on an advertising model, not within the next decade.

There will be a mix, the world will adjust, with less money for publishers and value in [?]


Facebook and Google share the trait of not making money from user subscriptions. How does Facebook make money?


It's the same but different, it's advertising 2.0. Traditional content models like newspapers, film, music are figuring it out using a mix e.g. Spotify.

Facebook, Google, Yahoo require massive infrastructures to deliver services to millions of users and it must be paid for somehow. The pure internet model typically involves advertising, it's free to users at the point of delivery and they figure out a way to make money matching people with products with people who might want to buy them, but becoming a fan rather than clicking through and paying for the connection to be made.

This is a sustainable model and consumers are comfortable with it when they get it e.g. if you like Starbucks get a connection to it for a free coffee.


Isn't mobile advertising more invasive?


Not if the advertising is relevant, if it's exactly what you're looking for.

It's a matter of debate how you get consent but people will choose to participate if they find it valuable. [Concrete rules on how to get consent, keep and use data etc.]

We're heading towards intelligent personal valets that understand our interests and act accordingly.

5. Cloud

  • Who owns information in the cloud?
  • How will information be made portable across platforms?

Once, your email was on a server relatively close to you; now more and more services are moving into the cloud. Data is held and services are run somewhere - you don't know where. Gmail and Hotmail are in the US, but in which states? Google Docs, whether personal or corporate, sit on a server farm in the cloud.


Google has a Data Liberation Front, started by its engineers. The goal of the project is that everyone using Google to hold info can take it out easily and move it somewhere else.

This makes their position clear - it's your data and you can take it away. Google has always said that its business stands or falls on the trust of its users and building in this capability will keep Google honest. If users don't stay with Google it's not because it's too hard to leave but because they still trust it.


It's not that straightforward. There's been lots of discussions about content importers, transfer of data in and out. There are lots of commercial considerations which apply as to how easy to make it to integrate different services or not -

  • technical - how easy to make it to move data - companies are discussing!
  • who owns - more fundamental - during its TOSGate last year, Facebook argued it wasn't different from other sites but people were going "You're stealing my data!" Now Facebook's statement of rights & responsibilities says it's your data, you give Facebook a temporary licence to do things it needs to do to display it, share it etc, and you can withdraw it anytime.

6. Deletion

  • What is your view on when data should be deleted?
  • Why do you keep info longer than consumers think?

3 years ago every search term you ever entered was kept indefinitely. But the retention period has started to go down. Recently Microsoft announced it would delete in 3 months, in buying a part of Yahoo - a record! [Note - can't find it, thought Microsoft recently promised 6 months.]


Facebook was designed by aspirational engineering students who never imagined anyone would want to leave, so it only enabled deactivation for students.

Then there was demand for full deletion, which has been introduced. The privacy policy now clearly points to deletion control, subject to holding on to the data for a short period, a couple of weeks or so, in case an impostor asked for the deletion or law enforcement want access to evidence someone wants to destroy.


When at Yahoo he was behind driving search deletion. He took a pragmatic look at how long it was needed for and how it was used realistically as opposed to storing it in case it might possibly be useful one day. Having considered legal requirements etc he made an aggressive recommendation to delete the data after 6 months. It ended up being 13 months.

It's a complex and scary issue. AOL released queries "for research purposes" with anonymised unique user IDs, forgetting that people like to search on their own names. Knowing names, it was possible to look up addresses, other searches, stuff searched on other sites. It became a big disaster. His attitude is that it's a toxic asset he wants off his hands as soon as possible!
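The AOL incident shows how thin that kind of anonymisation is: swapping a user's account for a random ID achieves little if the queries themselves are identifying (AOL user No. 4417749 was famously traced to a named individual from her searches). A toy sketch of the re-identification step, with illustrative data:

```python
from collections import defaultdict

# Illustrative pseudonymised search log: one (pseudonymous_id, query) pair per row.
logs = [
    ("user_4417749", "landscapers in lilburn ga"),
    ("user_4417749", "thelma arnold"),   # a "vanity search" leaks the real name
    ("user_4417749", "homes sold in shadow lake subdivision"),
    ("user_8231004", "cheap flights to rome"),
]

# Step 1: the pseudonymous ID links all of one person's queries back together.
by_user = defaultdict(list)
for uid, query in logs:
    by_user[uid].append(query)

# Step 2: a single identifying query then de-anonymises the whole history.
for uid, queries in sorted(by_user.items()):
    print(uid, "->", "; ".join(queries))
```

The point is that the ID is a linking key, not an anonymiser: it preserves exactly the per-person query histories that make re-identification possible.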

If using data to target ads, you care about consumer activities when they're doing commercial things - not what library books they read, but searches to buy a car, a camera etc, which is less privacy-sensitive than information-based requirements.

It's complicated because there are 2 barriers to eliminating data immediately -

  • legal - lots of governments including the US require retention of data for "law enforcement"
  • contractual - Yahoo and Google in their business charge for clicks so advertisers may dispute whether clicks are by real users and want refunds ie click fraud. Because they bill advertisers they need to keep data to prove the clicks were by a real human

So it's more complicated than saying the data is yours. We must be careful about what we mean by "data" and what we mean by "yours".


In terms of anonymising data, Google has different considerations and concerns from Facebook, which she can't go into, and which mean they keep data for longer.

[Note: on what data Google's search engine collects and why they retain search logs for certain periods of time, now see her "Internet Privacy" speech to Brussels policy makers on YouTube.]


He believes companies are competing to reduce the retention period, and Google seems slowest: Yahoo is at 13 months, Microsoft is reducing to 13, Google went from 14 to 13, and then Microsoft said it would go to 6, or 3. Google deletes IP addresses after 9 months and cookies after 18 months; Microsoft deletes IP addresses after 6 months and cookies after 18 months. [Note: found the Microsoft chart for Bing privacy practices.]


Most of the competition is over which search engine and which search results people find most useful. From initial investigations it's a best-faith judgement call on the costs and benefits of given retention periods, and Google settled on 9 months.

7. Attitudes vs Actions

  • Do consumers really care about privacy?
  • Polls say yes, but actions...

There's the Facebook controversy reported in the Telegraph, "privacy is no longer a social norm" i.e. Mark Zuckerberg saying that people are more comfortable with sharing more and more info.

[Note: this is the privacy paradox mentioned in my Data Dozen of Identity Management for Privacy blog post.]


Things have changed because of the internet: it's the greatest copying machine ever invented, that includes stuff about us, and we have to get used to it whether we like it or not.

Much depends on how you ask the questions; you could make a scary thing out of it, or trust that most people are pretty smart and happy with the deal.

Lots of issues around trading data for services depend on the power relation between you and the entity collecting the data; it would be different if it were the Home Office!

The public do understand, and demonstrate in the way they behave that they understand, the deal they're getting. Facebook is moving with that trend, and believes it's significant.

Conversely you could say that the public are stupid and we must intervene because they're making bad choices and they're too stupid to make good choices.

The vast majority enjoy internet services and understand the reciprocity in providing access to a certain amount of info about them.


Pre-Google when she was a security usability and human factors person, people doing psychological user-focused research told her they'd heard from users that they were afraid of being stalked online, of accidentally revealing info that would allow them to be stalked.

When security people told her that people don't care about security, she said a security concern was stalking - and the security researchers said, what's that got to do with security?!

The privacy discourse still isn't there yet. The privacy frameworks being developed may not match up with what people actually do.


What's changed isn't so much social norms as the degree of control over info, so people may react differently to choices. If you give them tools, they may or may not use them to protect privacy and security. But lots of consumers say that having the ability to control changes the way they interact.


He agrees; he's not sure people's habits have changed. There's a great degree of consumer ignorance; they lack experience because the medium is new, and they don't think about what happens when they post Facebook updates. He's stopped saying "Heading to London" because people get upset that he hasn't had time to see them all!

At Yahoo he tried to help consumers remove false references to them, articles claiming they did something bad etc. It's a nightmare to try to get Google, Microsoft, Yahoo etc to purge such things from search engine results.

We need awareness: consumers need to understand what could happen to their data, and the danger of disclosing to the wrong crowd at the wrong time. People can copy Facebook pictures, so they're available elsewhere even if you delete them.

It's like cars: very dangerous tools, but highly regulated. You must observe traffic lights, and you can't drive on the sidewalk.

Before regulating you need awareness and understanding, and it's too early; it's still a wild frontier.


If consumers aren't aware, companies aren't doing a good job of talking to users.

8. Questions from audience

SSL was added to Gmail to protect mail in transit between consumers and Google's servers; why doesn't Google support encryption so that Google itself can't see the content stored with it?


There's nothing to prevent use of Gmail in that way; there are third-party solutions which do PGP for you. The problem is management of the encryption keys shared between individuals. We're not there yet in terms of making that manageable for the vast majority of Gmail users.

Also, many services try to add value, e.g. searching Gmail, translation etc., and Google wouldn't be able to provide them if Google's servers couldn't work with the data.
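The trade-off described here can be sketched with a toy end-to-end model: if only the correspondents hold the key, the provider stores opaque bytes it can neither read, search nor translate. This is an illustration only; the XOR keystream below is not secure cryptography and is not how Gmail, PGP or any real system works.

```python
import hashlib
import secrets

# Toy keystream cipher standing in for a real one (NOT secure -- illustration only).
def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

# The key is shared between the correspondents and never sent to the provider.
shared_key = secrets.token_bytes(32)

plaintext = b"meet me at noon"
stored_on_server = xor_crypt(shared_key, plaintext)

# The provider holds only ciphertext: it can't index, search or translate it...
assert stored_on_server != plaintext

# ...but the recipient, holding the key, recovers the message.
assert xor_crypt(shared_key, stored_on_server) == plaintext
```

The hard part the answer points to is the `shared_key` line: getting that key safely to every correspondent, and to no one else, is the key-management problem that makes this impractical for most users, and the ciphertext-only server is why value-added features like search and translation become impossible.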

Question: What if someone from the generation which started using the internet before realising the ramifications runs for office?


In 1992 Clinton's marijuana use was a big deal. Then Bush was silent about his drug use. Now the current President admits in his book to experimenting with cocaine. She believes we'll ultimately end up in the same environment with records online: everyone's got something like that, and no one will be disqualified for it, because if they were, everyone would be disqualified.


France is considering a "right of oblivion": whether there should be a right to be obliterated from the internet. A recruitment code of conduct [which?] says not to look at Facebook. [Note - recruiters clearly aren't paying attention to their code then! Microsoft recently released, for Data Privacy Day 2010, a research study indicating that 79% of US recruiters do check online info on job applicants.]

Clay Shirky talks about private conversation in public space - we still need to treat it as private even though it takes place in a public space.

Is civic hygiene increased because we have to be more honest? That sweaty feeling when the News of the World rings you up on a Friday and you think "Oh my god, the photos have come out" - now you don't have to worry, because they're already out!

Some believe openness is good: previously they'd send someone to kill a dissident, but now there are too many people to kill; there are too many drug users in politics to ban drug users from politics.

What about global jurisdiction issues, an international organisation?


The internet is based on private law and private commercial legal arrangements with a layer of criminal law on top. There's no international organisation.

The US internet industry lobbied against that at ITU, UN and intergovernmental level. Things work OK as they are now; in a fast-moving space you don't want to impose a complex global structure on top of it.

The threat of regulation by national governments has led to self-regulation, e.g. search engines dispensing with historic data because of the EU threat, and we'll end up in the same place, i.e. healthy competition among commercial companies to move into line.


With multinational US-based companies subject to EU regulation, we'll see equivalent privacy protections in the next couple of years; US companies are promoting it. APEC, [FIP?] etc. are similarly levelling. The move to industry standards and practices, developer guidelines etc. will effectively create global standards.


We can have global standards but what about international enforcement? We're seeing healthy moves on the privacy front by many governments.

We've swung too far towards a world with no more privacy (Mark Zuckerberg's statement etc.), and he believes there will be a backlash: some bad experiences of individuals and groups will swing it back in the other direction.

There's a big fear that governments may overreact early on, especially over advertising, before they understand the consequences, and may prevent some innovations from happening. He'd prefer innovation to go through and reach the next level.

What about newspapers, citizen journalism and blogging, headlines?


The business model is not quite working, but there's no alternative business model yet. There IS value in professional opinions; it's not just about the headline. Everyone's free to say anything, and that's part of the problem.

Long term there are professional forums etc and we'll tend to trust more people who have more to lose.


Time after time people have rejected micropayments. People consume media by dabbling, and the bills would really start to add up. There's potential for an intermediary that aggregates micropayments and takes a cut, an individual subscription model, etc. It's not working right now.

©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.