I do not dispute this, but you should not ask for the impossible: "What I don’t want is any organisation, public or private, passing it on without my knowledge or consent." You cannot prevent this. You can penalise people who do, but you can't prevent it.
I believe that we should ask for the impossible, as it is the only way to know the limits of the possible... I therefore believe that:

1) we can prevent the passing of personal data without our (prior) knowledge or consent;
2) we should not be able to prevent the passing of personal data without our (prior) knowledge or consent.

Well, that doesn't seem very consistent, but isn't consistency one of the lowest forms of intelligence...

Preventing: in a trust architecture where each piece of data is kept in one place and one place only, by default, and where data is passed as a pointer to the source rather than as a copy, only trusted parties can access the source, through policy enforcement points (PEP); a rough code sketch is given further down. If A trusts B and not C, and if B gives C a pointer to A's data, C won't be able to read it. Of course, B can take a screenshot and send it to C, but then A can claim that it is a fake and exclude B from his/her circle of trust. End of story. In such an architecture, it is the right to access the source that gives credibility to a claim. So, even if someone passes on unauthorised data, a trust architecture provides the conditions for plausible deniability, or grounds for a legal complaint based on the violation of one's personal data policy.

Not preventing: there are some claims in relation to personal data control that, if implemented, would be equivalent to *digital lobotomy*, or to *my very personal big brother*. If I send (a pointer to) a personal photo to a friend, and if tomorrow I decide that I don't like this photo any more and erase it from my computer, I have no right to force my friend to forget about the photo if she made a copy in a personal album. I can ask her, but I can't force her. The same goes if this friend wants to send it to one of her friends whom I don't know or don't trust: she can of course take a screenshot of the photo (or make a copy, depending on my policies) and send a link to that screenshot without having to ask me for any form of authorisation. My friends have a right to their own intimacy, so I don't need or want to control everything they do with the information I provide them with. Making the information public would be another matter, but there are circumstances where even that would be perfectly legitimate.

One possible solution to the prevention/non-prevention dilemma is a better understanding of the nature of identity. And to understand it, we need to get away from the confusion generated by ICT engineers and policy makers for whom identity = identifier. OpenID, SAML, etc. are only interested in the identification *of* people and do not address the issue of identification *to* (and against) people, which is central to the process of identity construction.

We also need to get away from the confusion between personal data and identity: one should be able to fully control one's personal data, but there is no way one can fully control one's identity. Identity is a social construction which includes self-identity (Giddens) and identity through others (Laing). Someone's identity can't be isolated within a set of attributes under one's control. Identity is social, hence distributed in a network of trusted and not-trusted relationships: the people and organisations we do not trust also contribute to our identity. *Identity theft* is a misnomer, as it is often nothing more than *identifier theft*; full identity theft, like the one described in Nabokov's novel Despair, is rare. To steal one's identity would require stealing all the social relationships within the boundaries of that identity, and one has multiple identities: as parent, employee, customer, drag queen, accountant, entrepreneur, liberal, etc.
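Before going further into identity, a small aside to make the "preventing" architecture above more concrete: the sketch below, in Python, shows pointer-based sharing behind a policy enforcement point. Every name in it (DataStore, resolve, the trust lists) is invented purely for the illustration; a real PEP would have to authenticate requesters and evaluate much richer policies than a flat trust list.

# Illustrative sketch only: pointer-based sharing behind a policy enforcement point.
class DataStore:
    """Holds the single authoritative copy of each piece of personal data."""
    def __init__(self):
        self._records = {}   # pointer -> (owner, payload)
        self._trust = {}     # owner -> set of parties the owner trusts

    def put(self, owner, pointer, payload):
        self._records[pointer] = (owner, payload)

    def set_trust(self, owner, trusted_parties):
        self._trust[owner] = set(trusted_parties)

    def resolve(self, requester, pointer):
        """PEP: dereference a pointer only for the owner or the owner's trusted parties."""
        owner, payload = self._records[pointer]
        if requester == owner or requester in self._trust.get(owner, set()):
            return payload
        raise PermissionError(requester + " is not in " + owner + "'s circle of trust")

store = DataStore()
store.put("A", "ptr:photo-42", "<photo bytes>")
store.set_trust("A", ["B"])                  # A trusts B, not C

print(store.resolve("B", "ptr:photo-42"))    # B dereferences the pointer: allowed
try:
    store.resolve("C", "ptr:photo-42")       # C got the pointer from B...
except PermissionError as err:
    print("denied:", err)                    # ...but the PEP refuses to dereference it

The point of the sketch is simply that passing the pointer around costs nothing; it is the dereferencing that is policed, so the data never has to leave its single home.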
Authentication and authorisation processes that would exploit the multi-dimensional properties of one's identity could provide more reliable outcomes than those based on simple, mono-dimensional identifiers... In such a world, everyone becomes an identity provider, and today's specialised identity providers (IDP), such as national ID providers, are just one among millions in a 'society of IDPs', or what I call an 'Internet of Subjects'. It should not be difficult to compute one's social surface, the trustworthiness of that 'surface' in relation to other known 'surfaces', one's worthiness within that surface, etc. (a toy example is sketched in the PPS below). Identity, like reputation, is contextual. Being distrusted by one group could be a good indicator that one should be trusted by another group (the enemies of...), so we need to support positive as well as negative identification, and even double-negative identification...

This could be achieved by an identity-centric internet (ICI) based on a clear functional separation between the storage of personal data, under personal control, and the services creating and exploiting it. We need to achieve with personal data what we have just started to do with public data: free it from application/service/organisational silos and put an end to the ever-increasing fragmentation of our personal data. We need to call for the abolition of personal data slavery.

Serge Ravet

PS: a first step towards the separation of personal data from services could be a call to split Facebook into Baby Faces, as was done with the split of Bell into the Baby Bells in the 80s...

Free our Data Now! Support the Internet of Subjects Manifesto!

----------------------------------------
tel +33 3 8643 1343   mob +33 6 0768 6727   Skype szerge
www.iosf.org   www.eife-l.org
----------------------------------------
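PPS (for readers who prefer code to prose): below is a very rough sketch, in Python, of what computing a 'social surface' score could look like when everyone is an identity provider and negative identification also carries information. All the names, weights and the formula are invented for the illustration; this is not a proposal for a real trust metric, which would have to deal with contexts, recency, collusion, and much more.

# Illustrative sketch only: a naive 'social surface' score for a claimed identity.
from dataclasses import dataclass

@dataclass
class Attestation:
    attester: str     # who vouches (or refuses to vouch): a friend, an employer, a bank...
    context: str      # parent, employee, customer, drag queen, ...
    vouches: bool     # positive or negative identification

def social_surface_score(attestations, trusted, distrusted):
    """Score a claimed identity from one relying party's point of view.

    - a positive attestation from someone I trust counts for the claim;
    - a negative attestation from someone I trust counts against it;
    - a negative attestation from someone I distrust counts weakly *for* it
      (the 'double negative': being rejected by the right enemies).
    """
    score = 0.0
    contexts = set()
    for a in attestations:
        if a.attester in trusted:
            score += 1.0 if a.vouches else -1.0
            contexts.add(a.context)
        elif a.attester in distrusted and not a.vouches:
            score += 0.25
    # Reward breadth: an identity confirmed across many contexts is much harder
    # to steal than a single identifier.
    return score * (1 + 0.1 * len(contexts))

claim = [
    Attestation("employer.example", "employee", True),
    Attestation("bank.example", "customer", True),
    Attestation("rival-group.example", "member", False),   # a distrusted group says no
]
print(social_surface_score(claim,
                           trusted={"employer.example", "bank.example"},
                           distrusted={"rival-group.example"}))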