An article, “Opt-in Dystopias” (PDF), by Google’s Senior Policy Counsel Nicklas Lundblad and Policy Manager Betsy Masiello in the journal SCRIPTed "examines the possible consequences of mandatory opt-in policies for application service providers on the Internet. Our claim is that focusing the privacy debate on the opt-in / opt-out dichotomy creates false choices for end users. Instead, we argue for a structure in which providers are encouraged to create ongoing negotiations with their users."
Their conclusions follow, with my comments in italics:
"We have argued that mandatory opt-in applied across contexts of information collection is poised to have several unintended consequences on social welfare and individual privacy:
Dual cost structure: Opt-in is necessarily a partially informed decision because users lack experience with the service and value it provides until after opting-in. Potential costs of the opt-in decision loom larger than potential benefits, whereas potential benefits of the opt-out decision loom larger than potential costs.
[Yes, whether you make it opt in or opt out does matter - see further on engineered consent and human psychology.]
Excessive scope: Under an opt-in regime, the provider has an incentive to exaggerate the scope of what he asks for, while under the opt-out regime the provider has an incentive to allow for feature-by-feature opt-out.
[Yes. I've always felt this to be the case in relation to opt in, and again see the notes on engineered consent.]
Desensitisation: If everyone requires opt-in to use services, users will be desensitised to the choice, resulting in automatic opt-in.
[Point taken about desensitisation. However, Commissioner Reding seems to favour banning pre-ticked boxes, at least on the consumer front, and I think there will be less automatic opt-in if boxes aren't pre-checked. Also note the view in the Article 29 Working Party's Future of Privacy paper that "consent is an inappropriate ground for processing".]
Balkanisation: The increase in switching costs presented by opt-in decisions is likely to lead to proliferation of walled gardens.
[I'm not sure about this, personally.]
We have laid the initial foundation for thinking about opt-out regimes as repeated negotiations between users and service providers. This framework may suggest implementations of opt-out be designed to allow for these repeated negotiations and even optimise for them. We recognise that there may be contexts in which mandatory opt-in is the optimal policy for individual privacy as, for example, when the information in question is particularly sensitive. In subsequent work, the authors intend to propose a framework in which opt-out creates not only a viable but in many cases an optimal architecture for privacy online and to explore the contexts in which implementing opt-in is the optimal privacy architecture."
[A "repeated negotiations" approach is certainly one possibility for privacy by design, but it can suffer from disadvantages similar to those of current approaches, e.g. desensitisation. Any technological framework for such negotiations won't be easy to design and get working properly, and even then it won't be effective unless all services can be made to adopt and use it - so I await their promised subsequent work with interest.]
See also the work of EnCore on the technical management of consent, which might fit in with the "ongoing negotiation" approach.
©WH. This work is licensed under a Creative Commons Attribution Non-Commercial Share-Alike England 2.0 Licence. Please attribute to WH, Tech and Law, and link to the original blog post page. Moral rights asserted.