The future of internet rights was discussed at the London School of Economics on 25 Jan 2010 by a distinguished panel from some of the best known names in the digital world -
- Facebook - Richard Allan (RA), Director of European Public Policy, also ex-Member of Parliament, formerly at Cisco.
- Google - Alma Whitten (AW), expert on the usability of security systems and Privacy Engineering Lead at Google (also Head of Google's Applied Security engineering team). [Note: she recently spoke for Google to policy makers in Brussels; her talk has now been uploaded to YouTube: Google on Internet Privacy]
- Open Insights - Usama M Fayyad (UF), CEO; formerly Chief Data Officer & Executive VP at Yahoo! [so the official bio page just linked to is out of date]
- Vodafone - Kasey Chappelle (KC), Global Privacy Counsel (formerly Global Privacy Counsel at eBay).
Issues discussed in this fascinating panel session included -
- government requests for information; and policy making
- innovation and privacy
- advertising, free content and business models for the internet age
- the cloud
- deletion of personal data, and
- consumer attitudes vs actions.
Questions from the floor included why Google doesn't encrypt Gmail, the implications of personal data being around forever, etc.
There's an MP3 podcast of the discussion (about 92 mins) - podcast page; and (click the arrow to stream immediately) direct link to 43 MB MP3. (Would be good if Google were to release the Google Voice transcription technology or use it to transcribe and index MP3s like these!)
I've finally finished writing it up - based only on scrawled notes (not had time to listen through the podcast again), so while I believe I've captured the gist of it it's not guaranteed to be verbatim or even 100% correct or complete. Sometimes I've paraphrased for brevity or clarity and I've square bracketed bits I couldn't catch. If anyone thinks I've made any errors or not reflected anything properly, please let me know.
Note that I don't necessarily agree or disagree with what was said, I've just tried broadly to reflect the session accurately, without comment. I thought Dr Fayyad was particularly insightful though. I've added a few links and notes for ease of reference.
The panel was chaired by Gus Hosein (GH), of the Information Systems and Innovation Group, Department of Management at the LSE (he's also Policy Director at privacy NGO Privacy International, although he wasn't chairing the panel in that capacity).
He showed a slide of what he considered to be the main themes related to the social study of ICTs and innovation, namely (in no particular order):
- Design & development of the ICT artefact
- Information risk & security
- Global sourcing
- ICT globalisation and developing countries
- ICTs in public sphere
- Technical & organisational innovation.
Having displayed a slide with quotes from -
Security expert Bruce Schneier "And it's bad civic hygiene to build technologies that could someday be used to facilitate a police state." [Note: from his CNN article]
Iranian police commander General Esmail Ahmadi-Moghaddam “Attending illegal gatherings, rioting and insulting the sacred are reason for police reaction. Those who organize the gathering commit a bigger crime.” [Note: found it in BusinessWeek]
- he put a series of grouped questions to the panel.
2. Within the law
- How do you cooperate with government requests?
- How do you participate in the policy making process?
- Victims or deputies?
Some questions are better addressed by General Counsel, so instead she focused on corporations' responsibility to engage in public debate and try to inform policy.
How search engines work and use data is rocket science to most people; for a public debate about that data, the rocket science must be explained in a way that makes sense to policy makers, individuals, the media etc., and that kind of explanation and transparency must be built into the product.
This is one area where the internet is distinctly national rather than international. What really matters is where the data is, its home jurisdiction. Facebook data is held in the USA so US law prevails and Facebook has a responsibility to offer up the data if asked for it by lawful US authority, whereas it would have a legal problem if it gave it to Iran!
Another issue is, do you have bodies on the ground that countries can get to as well as data or computers they can take away? The UK authorities can use the Police & Criminal Evidence Act but the courts won't allow it if it can't be enforced because the data is kept outside the UK.
It's a real mess with laws and jurisdictions and where data lives. E.g. when he was with Yahoo the Chinese government asked Yahoo to surrender information regarding the use of email and at the time it wasn't clear what was wanted so they provided routine provisioning of data [?] on legal request. They had to obey because they had to operate within China's rules in order to have a presence in the country - have a bank account, charge, do business etc - and Yahoo wanted to do business there as did Microsoft and Google.
It's amazing that people don't pay much attention to the fact that in almost every country including the US, with the exception of a few Western countries, governments impose pretty draconian conditions, restrictions and demands on operators of big systems - ISPs, email providers, public info providers on the internet. US requirements especially post 9/11 are scary.
Why does that mess exist, why does every government say it has rights to see every email etc? The deeper reason is that the infrastructure, connectivity and lots of services are essentially "free" to consumers, it's frontier territory where anything goes. Contrast with a paid service where the position is understood, and relationships are with adults who can sign, etc.
We need debate to educate the public and lead to regulation that's rational.
Almost all the technology used today originally had military or state-funded purposes. The problem is not the technology, but the lack of clarity on policy. The internet is more liberating than enslaving, a democratising force.
Vodafone is in many countries with people on the ground, regulated networks, easily garnished assets, so it's difficult to conduct a campaign of civil disobedience.
So they ensure that at a group level they provide advice and guidance to local operating companies to ensure they understand when they must comply and when there is room to push back - and sometimes they do push back. Even if they try to withdraw from a country, there'll be local competitors that may not have the motivation to be as protective of rights.
If Google pulled out, companies left behind with major consumer use may have worse control over content.
3. Innovation and privacy
- What developments have been most influential in changing privacy?
- How are these forms of business regulated?
- Is privacy law inhibiting innovation or creating a safe space?
Profits are up at Google, Vodafone, Yahoo. The New York Times has a useful table comparing Google, Microsoft, Amazon and Yahoo. At least 3 of them are in the space of apps. There are privacy law challenges especially EU privacy law, which might inhibit innovation e.g. cloud computing, advertising, other business models.
Only 1 company from the panel is within the EU's jurisdiction i.e. Vodafone, who are launching an app store and have to follow rules which Facebook needn't in relation to apps.
The EU Data Protection Directive was enacted in 1995 and those laws may not necessarily be applied in the same way now, with apps that run on mobile devices from different companies who are not the network operator. The app developer could be in a garage. Who is the data controller, who is the data subject, who makes decisions on how information is used or shared? Data protection rights are the greatest challenge.
Google has strong ties with the open source community, often developing for the model where anyone can create code, add functionality, use, share, put it up on the internet, show ads and make money if successful, and continue to develop the functionality. It's difficult to have a clear dividing line between amateur and professional.
There are stories about Facebook apps, who develops them and how they're using the info. What about consumer protection?
Bear in mind that apps have their own jurisdiction - UK law if a UK company develops the app. The fundamental question we need to address is that data protection law was designed with big organisations as data controllers and small citizens as data subjects in mind, around one-to-one transactions with a particular data subject in a particular jurisdiction.
But in the apps world there are multiple players in multiple jurisdictions and increasingly data controllers could be data subjects too. E.g. a photograph that someone doesn't like could be posted online controlled by a particular user using services from different providers - geotagging, maps, ads, browser, phone platform.
Privacy vs freedom of expression, rights of photographed vs rights of photographer - if someone objects to a photo do they go to Facebook, Picasa or Flickr to get it removed, or to the taker of the photo who is really the data controller, and deal with it at the source?
The law hasn't kept up with technology and when technology moves faster than legislation it's best not to interfere till you figure out what you want to interfere with. No government knows how they want to interfere.
There are 2 key principles here:
- Informed consent - the consumer should know and the info should be available and discoverable (it's a separate question whether government should force consumers to learn).
- Opt in - the consumer should positively say yes I do want you to track me. Consumers do this willingly with loyalty schemes, mileage points etc because there's a perceived value. If companies figure out how to explain the value, and the value is there (Google spends millions to ensure its index is fresh and complete), and in return for using a particular service the consumer will get particular rights, once we have a value exchange and the information is available we will have a clean equation. Once we have an opt in system, no reasonable government would interfere.
You'd need a separate panel session just on opt in or out!
It shouldn't be government regulators making the decision because how would they enforce it?
Possibly it's better to encapsulate basic principles already in the Data Protection Directive through Privacy By Design - ensure that when you build the technology certain principles that protect privacy rights are embedded in the technology. [Note: see my thoughts on PETs and posts on privacy enhancing technologies generally.]
The best way is through industry standards. Vodafone are working with the [PSMA? CTIA?] on developing guidelines for the wireless telecoms industry.
There first needs to be agreement on the meanings of "opt in", "opt out", etc, or you may regulate out basic protections against advertising spam, distributed denial of service attacks, and click fraud.
4. Advertising, free content and business models
- Is advertising still key to a free internet?
- How will policy shape this space?
Previously it was thought that advertising would make everything free - in exchange for giving up privacy, get free services e.g. search, on the basis that info can be used and processed for advertising purposes.
But recently the New York Times announced a move away from free model to a '90s subscription concept.
Google's always been excited and proud that it successfully enabled many individual sites to put up useful content for people and made that economically viable.
Now traditional industries are moving to internet models but it's not just a case of whether advertising supports quality content for free on the internet.
With the start of the internet something strange and transient happened and we're beginning to see it unwinding, i.e. the feeling that everything is free.
Think of it from an economic perspective. Google AdSense, Yahoo content match, Microsoft content advertising were designed to enable the public to put up ads on their sites and share revenue. Historically that's not worked to the advantage of the publishers partly because Google, Yahoo etc are more advanced in the technology and understand the medium better than older companies whose roots are in a different world.
If the New York Times are spending money to develop content, paying reporters to stay neutral and report for the sake of truth and trying to separate advertising from content, there's a cost to that.
Most of the value of news is in the headlines and if consumers have a way to get that for free without much effort, why should they be paying for it?
The New York Times's economic rationale is, they've seen newspapers fall apart because the model was insufficient to support them. The Wall Street Journal recognised this early on and while others embraced the free model the Wall Street Journal said only a few things would be free, the archive is available to subscribers only. And this became the strength of their online business, even over their offline business.
But someone needs to pay the piper or these things will go away and we'll all be worse off in a world without professionally generated content. We won't get movies on an advertising model, not within the next decade.
There will be a mix, the world will adjust, with less money for publishers and value in [?]
Facebook and Google share the trait of not making money from user subscriptions. How does Facebook make money?
It's the same but different, it's advertising 2.0. Traditional content models like newspapers, film, music are figuring it out using a mix e.g. Spotify.
Facebook, Google, Yahoo require massive infrastructures to deliver services to millions of users and it must be paid for somehow. The pure internet model typically involves advertising: it's free to users at the point of delivery, and they figure out a way to make money by matching products with people who might want to buy them - e.g. becoming a fan rather than clicking through - with advertisers paying for the connection to be made.
This is a sustainable model and consumers are comfortable with it when they get it e.g. if you like Starbucks get a connection to it for a free coffee.
Isn't mobile advertising more invasive?
Not if the advertising is relevant, if it's exactly what you're looking for.
It's a matter of debate how you get consent but people will choose to participate if they find it valuable. [Concrete rules on how to get consent, keep and use data etc.]
We're heading towards intelligent personal valets that understand our interests and act accordingly.
5. The cloud
- Who owns information in the cloud?
- How will information be made portable across platforms?
Once your email was on a server relatively close to you; now more and more services are mixing into the cloud. Data is held and services are run somewhere - you don't know where. Gmail & Hotmail are in the US, but which states? Google Docs, whether personal or corporate, are on server farms in the cloud.
This makes their position clear - it's your data and you can take it away. Google has always said that its business stands or falls on the trust of its users and building in this capability will keep Google honest. If users don't stay with Google it's not because it's too hard to leave but because they still trust it.
It's not that straightforward. There's been lots of discussions about content importers, transfer of data in and out. There are lots of commercial considerations which apply as to how easy to make it to integrate different services or not -
- technical - how easy to make it to move data - companies are discussing!
- who owns - more fundamental - during its TOSGate last year, Facebook argued it wasn't different from other sites but people were going "You're stealing my data!" Now Facebook's statement of rights & responsibilities says it's your data, you give Facebook a temporary licence to do things it needs to do to display it, share it etc, and you can withdraw it anytime.
6. Deletion of personal data
- What is your view on when data should be deleted?
- Why do you keep info longer than consumers think?
3 years ago every search term you ever entered was kept indefinitely, but the retention period has started to go down. Recently Microsoft, in buying a part of Yahoo, announced it would delete in 3 months - a record! [Note - can't find it, thought Microsoft recently promised 6 months.]
Facebook was designed by aspirational engineering students who never imagined anyone would want to leave, so it only enabled deactivation for students.
When at Yahoo he was behind driving search deletion. He took a pragmatic look at how long it was needed for and how it was used realistically as opposed to storing it in case it might possibly be useful one day. Having considered legal requirements etc he made an aggressive recommendation to delete the data after 6 months. It ended up being 13 months.
It's a complex and scary issue. AOL released queries "for research purposes" and anonymised unique user IDs and forgot that people like to search on their own names. Knowing names it was possible to look up addresses, other searches, stuff searched on other sites. It became a big disaster. His attitude is that it's a toxic asset he wants off his hands as soon as possible!
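The AOL episode can be sketched in a few lines (hypothetical log entries, echoing the widely reported case of user 4417749): replacing names with numeric IDs still groups each user's queries together, and a single vanity search leaks the name that unlocks the rest.

```python
from collections import defaultdict

# Hypothetical "anonymised" search log: names replaced with numeric IDs,
# but every query still carries the same pseudonymous ID.
logs = [
    (4417749, "landscapers in lilburn ga"),
    (4417749, "thelma arnold"),  # a vanity search: the user's own name
    (4417749, "dog that urinates on everything"),
    (8675309, "cheap flights to boston"),
]

# Group queries by pseudonymous ID, as a curious analyst would.
by_user = defaultdict(list)
for user_id, query in logs:
    by_user[user_id].append(query)

# One self-search unmasks the ID - and with it every other query.
print(by_user[4417749])
# -> ['landscapers in lilburn ga', 'thelma arnold', 'dog that urinates on everything']
```

Once the name is known, everything the ID ever searched becomes attributable to a person - which is why he calls retained logs a toxic asset.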
If using data to target ads, you care about consumer activities when doing commercial things, not about what library books they read but searches to buy a car, camera etc, which is less privacy sensitive than information based requirements.
It's complicated because there are 2 barriers to eliminating data immediately -
- legal - lots of governments including the US require retention of data for "law enforcement"
- contractual - Yahoo and Google in their business charge for clicks so advertisers may dispute whether clicks are by real users and want refunds ie click fraud. Because they bill advertisers they need to keep data to prove the clicks were by a real human
So it's more complicated than saying the data is yours. We must be careful about what we mean by "data" and what we mean by "yours".
In terms of anonymising data Google has different considerations and concerns from Facebook which she can't go into, which means they keep data for longer.
[Note: on what data Google's search engine collects and why they retain search logs for certain periods of time, now see her "Internet Privacy" speech to Brussels policy makers on YouTube.]
He believes companies are competing to reduce the retention period and Google seems slowest with Yahoo at 13 months, Microsoft reducing to 13, Google 14 now 13, then Microsoft saying it will be 6, or 3. Google deletes IP after 9 months, cookies after 18 months. Microsoft deletes cookies after 18 months, 6 months for IP. [Note: found the Microsoft chart for Bing privacy practices.]
The most competition is over which search engine and which search results people find most useful. From initial investigations it's a best-faith judgement call on the costs/benefits of certain retention periods, and Google settled on 9 months.
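The "delete IP after X months" figures above generally meant coarsening rather than erasing - zeroing the final octet of a logged address. A minimal sketch (exactly which bits each company dropped varied, so the single-octet choice here is an assumption):

```python
# Minimal sketch of octet-truncation IP "anonymisation" of the kind
# search engines described at the time: zero the final octet so a log
# entry maps to a /24 block of up to 256 machines, not a single one.
def anonymise_ip(ip: str) -> str:
    octets = ip.split(".")
    octets[-1] = "0"  # coarsen to the surrounding /24 network
    return ".".join(octets)

print(anonymise_ip("203.0.113.42"))  # -> 203.0.113.0
```

Worth noting this is pseudonymisation at best: the remaining 24 bits still narrow a user down considerably, which is partly why retention periods themselves stayed controversial.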
7. Attitudes vs Actions
- Do consumers really care about privacy?
- Polls tend to say yes, but actions...
There's the Facebook controversy reported in the Telegraph, "privacy is no longer a social norm" i.e. Mark Zuckerberg saying that people are more comfortable with sharing more and more info.
Things have changed because of the internet, it's the greatest copying machine ever invented and that includes stuff about us and we have to get used to that whether we like it or not.
Much depends on how you ask the questions, you could make a scary thing out of it or trust that most people are pretty smart and happy with the deal.
Lots of issues on the deal of data for services depend on the power relation between you and the entity collecting the data, it would be different if it was the Home Office!
The public do understand and demonstrate in the way they behave that they understand the deal that they're getting. Facebook is moving with the trend, and believes it's significant.
Conversely you could say that the public are stupid and we must intervene because they're making bad choices and they're too stupid to make good choices.
The vast majority enjoy internet services and understand the reciprocity in providing access to a certain amount of info about them.
Pre-Google when she was a security usability and human factors person, people doing psychological user-focused research told her they'd heard from users that they were afraid of being stalked online, of accidentally revealing info that would allow them to be stalked.
When security people told her that people don't care about security, she said a security concern was stalking - and the security researchers said, what's that got to do with security?!
The privacy discourse still isn't there yet. The privacy frameworks being developed may not match up with what people actually do.
What's changed isn't so much social as the degree of control over info, so people may react differently to choices. If you give them tools they may or may not use them to protect privacy and security. But lots of consumers say the fact that they have the ability to control, changes the way they interact.
He agrees; he's not sure people's habits have changed. There's a great degree of consumer ignorance, they lack experience because the medium is new and they don't think about what happens when they do Facebook updates. He's stopped saying "Heading to London" because people get upset that he hasn't had time to see them all!
At Yahoo he tried to help consumers remove false references to them, articles claiming they did something bad etc. It's a nightmare to try to get Google, Microsoft, Yahoo etc to purge such things from search engine results.
We need awareness, consumers need to understand what could happen to data, the danger of disclosing to the wrong crowd at the wrong time. People can copy Facebook pictures so it's available elsewhere even if you delete it.
It's like cars, very dangerous tools but highly regulated - observe traffic lights, you can't drive on the sidewalk.
Before regulating you need awareness and understanding and it's too early, it's a wild frontier.
If consumers aren't aware, companies aren't doing a good job of talking to users.
8. Questions from audience
SSL was added to Gmail to protect mail between consumers and their servers, why doesn't Google support encryption so Google itself can't see the content stored with them?
There's nothing to prevent use of Gmail in that way, there are third party solutions which do PGP for you. The problem is management of encryption keys shared between individuals. We're not there yet in terms of making it manageable for the vast majority of Gmail users.
Also many services try to add value eg searching Gmail, translation etc and Google wouldn't be able to provide them if Google's servers couldn't work with the data.
Question: What if someone from the generation which started using the internet before realising the ramifications runs for office?
In 1992 Clinton's marijuana use was a big deal. Then Bush was silent about his drug use. Now the current President in his book admits to experimenting with cocaine. She believes ultimately we'll end up with the same environment with records online. Everyone's got that and won't be disqualified for it because if so everyone's disqualified.
France is considering a "right of oblivion", whether there should be a right to be obliterated from the internet. A recruitment code of conduct [which?] says not to look at Facebook. [Note - recruiters clearly aren't paying attention to their code then! Microsoft recently released for Data Privacy Day 2010 a research study indicating that 79% of US recruiters do check online info on job applicants.]
Clay Shirky talks about private conversation in public space - we still need to treat it as private even though it takes place in a public space.
Is civic hygiene increased because we have to be more honest? That sweaty feeling when the News of the World rings you up on a Friday and you think "Oh my god the photos have come out" - now you don't have to worry because they're out!
Some believe openness is good, previously they'd send someone to kill a dissident, now there's too many people to kill, there's too many drug users in politics to ban drug users in politics.
What about global jurisdiction issues, an international organisation?
The internet is based on private law and private commercial legal arrangements with a layer of criminal law on top. There's no international organisation.
The US internet industry lobbied against it at ITU, UN, intergovernmental level. Things work OK as they are now, with a fast moving space you don't want to introduce a complex global structure on top of it.
The threat of regulation by national government has led to self regulation e.g. search engines dispensing with historic data because of the EU threat, and we'll end up in the same place ie healthy competition from commercial companies to move into line.
With multinational US based companies subject to EU regulation we'll see equivalent privacy protections in the next couple of years; US companies are promoting it. APEC, [FIP?] etc are similarly levelling. The move to industry standards and practices, developer guidelines etc will effectively create global standards.
We can have global standards but what about international enforcement? We're seeing healthy moves on the privacy front by many governments.
We've swung too far to a world with no more privacy, Mark Zuckerberg's statement etc, and he believes there will be a backlash and some bad experiences of individuals and groups will swing it in the other direction.
There's a big fear that government may overreact early on especially with advertising before they understand the consequences and may prevent some innovations from happening. He prefers innovation to go through and reach the next level.
What about newspapers, citizen journalism and blogging, headlines?
The business model is not quite working but they don't have an alternative business model. There IS value to professional opinions, it's not just about the headline. Everyone's free to say anything and that's part of the problem.
Long term there are professional forums etc and we'll tend to trust more people who have more to lose.
Time after time people have rejected micropayments. People consume media by dabbling and the bills would really start to add up. There's potential for an intermediary that aggregates micropayments and gives a cut, individual subscription model etc. It's not working right now.
©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.