Scroll down.

On Jun 4, 2017, at 5:45 PM, James Hazard <james.g.hazard@gmail.com> wrote:

There seem to be a number of threads spawned; with apologies, this misses some of the conversation.

And forgive me for spawning one of those.

If we handle smart contracts wisely, as Ricardian "code", then "smart contracts" can be quite dumb, dumb enough to be intelligible.  Really just granular, open source versions of the kind of code that currently animates interactions on proprietary websites at banks and web merchants.  Schedule a payment, confirm receipt, etc.  Since most interactions follow common patterns, these bits of code can probably be reused a lot.  For instance, notification interactions can be used across a broad range of transaction types.  Same with payments and I presume with access control.  

Perhaps these shouldn't be called smart contracts; indeed, the phrase should be totally banned, because even the "self-executing" ones aren't contracts.  We could call them "dry code" in distinction to "wet code," or "code" versus "prose" in the Ricardian vocabulary.

Yes.

There is, however, a substantial community that insists on "smart contract."

Yes as well.

Similarly, P2P means the relationships rather than a technology.  Email is a P2P technology because everyone has the same data model.  Most won't self-host, but they could; they can take their data with them, and the vendor lock-in is light-weight.  The point is that a common format and semantics greatly reduces the power of hubs.  The usual open source dynamic.

Yes. We want more of that.

With respect to leverage, I don't think we need to rely on individuals to assert a P2P model.

If individuals are not relied on, we lose something. I suspect we lose a lot more than we already have.

The individual (I am avoiding the word “user” or “consumer” here) needs to experience agency. There is, alas, a paucity of tools and services that give the individual agency. How can we relieve at least some of that?

Protective policy, good as it is to have, also tends to make a power asymmetry official. We need to do more than rely on that.

The GDPR and PSD2 both work against hubs.  As far as I can see, the way that groups in Europe, even whole countries, will bring control and data close to home is with a P2P data model.  PDS.

I hope “home” here means individual agency.

Further to Jeff's suggestion, if the policies are assembled from prose with known provenance and presented in structured form, then machine readability is easy.

Whose machines are we talking about here (and forgive me for not knowing)? The individual’s, or the institution’s?

That also encourages codification in the legal sense, and makes it easier to get the full prose supply chain involved in collaboration - legislator, regulator, trade group, company, citizen.  A thought-piece in which the GDPR is refactored to make a faux privacy policy.

How about a real privacy contract, proffered by the individual as a first party, to which the institution agrees as a second party?

Here is some of what I’ve written about this, in reverse chronological order:

http://bit.ly/cstmrs1st
http://bit.ly/1stprtytrms
http://j.mp/cstledoc
http://j.mp/n0stalkng
http://bit.ly/dbg1ln
http://j.mp/adranch

All those are about what we’re doing with Customer Commons. And we’re looking for help with it. Maybe the original paper (Open Algorithms for Identity Federation) can do some of that. I haven’t dug into it very far yet. I believe UMA does, though I’m still not sure.

As for—

http://source.commonaccord.org/index.php?action=source&file=G/EU-GDPR-Law-CmA/Demo/Acme_UK.md 

—I’m wondering if this kind of thing can go in an HTTP header that nods toward the GDPR and also points to first-party terms at Customer Commons that will be agreeable to sites wishing to be GDPR-compliant. And they will be compliant, because they agree as second parties to the individual’s terms, and that agreement is recorded in some way (e.g. jlinc).
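
Very roughly, and only as a sketch (the header names, terms URL and record fields below are invented for illustration; nothing here is a standard), I imagine something like this:

    # Hypothetical sketch only: header names, the terms URL and the agreement
    # record are made up for illustration.
    response_headers = {
        # The site points to the individual's first-party terms it has accepted...
        "X-First-Party-Terms": "https://customercommons.org/terms/no-stalking",
        # ...and to the prose it claims makes it GDPR-compliant.
        "X-GDPR-Prose": "http://source.commonaccord.org/index.php?action=source&file=G/EU-GDPR-Law-CmA/Demo/Acme_UK.md",
    }

    def record_agreement(terms_url, site):
        # The site, as second party, records its acceptance of the individual's
        # terms; in practice this record might live in something like jlinc so
        # both sides hold a copy.
        return {
            "first_party_terms": terms_url,
            "second_party": site,
            "accepted_at": "2017-06-04T17:45:00Z",
        }

    agreement = record_agreement(response_headers["X-First-Party-Terms"], "acme.example")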

Doc

On Sun, Jun 4, 2017 at 2:34 PM, Doc Searls <dsearls@cyber.law.harvard.edu> wrote:
Could such a thing (or things) be located at Customer Commons as one arrow (or set of arrows) in a quiver of tools at the individual’s disposal?

Doc

On Jun 4, 2017, at 4:36 PM, j stollman <stollman.j@gmail.com> wrote:

To Mark's comment regarding machine-readable privacy policies, I did develop the high-level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts).  A user of the system would define his privacy preferences in a template.  Using Information Extraction (a form of AI), the system then reviewed the Privacy Policy and interpreted its meaning vis-à-vis each user preference.  It then reported the disparities.  The user was not bound to enforce his preferences.  The idea was to begin letting people know how egregious many of the policies they agree to without reading really are, as a first step in trying to create a competitive market for such policies.  Knowing that one site has a policy preferable to that of a similar site might begin to drive firms to create more appealing policies.  But as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve.
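
As a rough sketch of the comparison step only (the Information Extraction front end is assumed, and every field name below is invented):

    # Sketch only: the IE step that turns a policy's prose into the
    # "extracted_policy" dict is assumed; the field names are invented.
    user_preferences = {
        "sells_data_to_third_parties": False,
        "retains_data_after_account_deletion": False,
        "allows_targeted_advertising": False,
    }

    extracted_policy = {
        "sells_data_to_third_parties": True,
        "retains_data_after_account_deletion": False,
        # a clause the extractor could not classify is simply absent
    }

    def report_disparities(prefs, policy):
        for term, wanted in prefs.items():
            found = policy.get(term)
            if found is None:
                print(f"{term}: policy is silent or unclear")
            elif found != wanted:
                print(f"{term}: policy says {found}, you prefer {wanted}")

    report_disparities(user_preferences, extracted_policy)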

To Scott's comment about preventing the alteration of contract terms after they are agreed to, one possibility is to add the agreed-to contract to a blockchain.  In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism built into the blockchain.
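
A minimal sketch of the anchoring idea, assuming the blockchain itself is out of scope and only the fingerprinting step is shown:

    # Hash the agreed-to text at signing time, record the hash somewhere
    # append-only (a blockchain here), and re-check later.
    import hashlib

    def fingerprint(contract_text):
        return hashlib.sha256(contract_text.encode("utf-8")).hexdigest()

    agreed_text = "The parties agree to the terms at <terms-url>, version 3."
    anchored = fingerprint(agreed_text)   # this value is what goes on-chain at agreement time

    # Later, any silent alteration changes the fingerprint and is outed.
    presented_text = "The parties agree to the terms at <terms-url>, version 4."
    if fingerprint(presented_text) != anchored:
        print("presented contract does not match the anchored agreement")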

Jeff


---------------------------------
Jeff Stollman
+1 202.683.8699


Truth never triumphs — its opponents just die out.
Science advances one funeral at a time.
                                    Max Planck

On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca> wrote:
Thomas;

It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy-protective systems on top of a non-privacy-protective infrastructure, this makes sense and is a step away from the risks and abuses we have all seen.

That being said, I wonder what the potential is for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ, as it were, perhaps by querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information-sharing agreements. In both cases there is no need for a trust entity, because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself, and it bypasses the risk endemic in creating yoodge pools of RAW data.
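
Purely as a sketch (the endpoint path, algorithm identifier and token plumbing below are invented, not anything from the UMA specs):

    # "Algorithm to the data": the client never sees raw records, only the
    # aggregate the resource server computes in situ.
    import requests

    ALGORITHM_ID = "avg-income-cambridge-v1"   # a vetted algorithm, known by identifier

    def run_in_situ(resource_server, rpt):
        resp = requests.post(
            f"{resource_server}/algorithms/{ALGORITHM_ID}/run",
            headers={"Authorization": f"Bearer {rpt}"},   # RPT obtained via the UMA grant
        )
        return resp.json()   # e.g. {"result": 73452.10}, never the underlying rows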

Sincerely,

John Wunderlich
(@PrivacyCDN)

On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu> wrote:


Thanks Jim,

So in the paper I purposely omitted any mention of smart-contracts (too distracting).

We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethereum).

The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g. with the public keys of the querier and the data-repository, payments, etc.).

So this is pointing towards a future model for data-markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).
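
Just as a sketch of what one of those parameterized algorithm-smart-contract calls might carry (field names invented; no particular chain or contract language assumed):

    # Sketch of a parameterized algorithm-smart-contract call.
    algorithm_contract_call = {
        "algorithm_hash": "sha256:<hash-of-vetted-query>",      # which vetted algorithm to run
        "querier_pubkey": "<querier-public-key>",
        "data_repo_pubkey": "<data-repository-public-key>",
        "payment": {"amount": "0.05", "currency": "<token>"},   # use of the algorithm is paid for
    }

    def trigger(call):
        # The caller (querier) triggers execution; the data repository runs the
        # algorithm locally and returns only the (possibly signed) result.
        ...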

Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.

/thomas/



________________________________________
From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com]
Sent: Sunday, June 04, 2017 9:31 AM
To: Adrian Gropper
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation

Great to see this discussion.

Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment.  It shows each step as a record that references other records.  Some of the other records define the meaning of a step, in both text and automation.  The automation is expressed in (fake) granular bits of code, referenced by their hash.
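
Roughly, and only as a sketch (not the demo's actual record format), the hash-referencing looks like this:

    # Each step is a small record that cites, by hash, the records (prose or
    # code) that define it. Field names are illustrative.
    import hashlib, json

    def h(record):
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    notify_code = {"kind": "code", "body": "def notify(payee): ..."}   # a fake granular bit of automation
    accept_step = {
        "kind": "step",
        "name": "Check/Accept",
        "defined_by": [h(notify_code)],   # meaning is given by reference, not by copy
    }
    print(h(accept_step))   # the hash by which the next record can cite this step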

This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.

The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs.  But the principle is broadly applicable to transacting in general.

http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md
(Click on "Source" and follow links.)



On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote:
Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.

Adrian

On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote:
Trust Farmer,  what a great term !

Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.

OPAL provides an economic, high value information argument.  It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage.    (Which is what I like the most:)

- Mark


On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:


Thanks Mark,

An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA").  I send you the SQL statement, then you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
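
In code, the whole exchange is about this small (sqlite3 stands in for the back-end data repo, and the table and column names are of course made up):

    # Sketch only: the vetted "algorithm" is literally just a query, and only
    # the aggregate ever leaves the repository.
    import sqlite3

    QUERY = "SELECT AVG(income) FROM residents WHERE city = 'Cambridge' AND state = 'MA'"

    def run_behind_firewall(db_path, query):
        # Runs inside the Data Provider's perimeter.
        with sqlite3.connect(db_path) as conn:
            (result,) = conn.execute(query).fetchone()
        return result   # the querier receives this number, never the rows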

Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).

The point of the paper is that the barrier to sharing data (raw data) is becoming impossible to overcome (think GDPR), and if data-rich institutions (i.e. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.

From the consent side, the user needs the ability to say:  "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".

The data-repository also needs a recipe to prove the user had given consent.

/thomas/




________________________________________
From: Mark Lizar [mark@openconsent.com]
Sent: Saturday, June 03, 2017 4:09 PM
To: Thomas Hardjono
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: New paper on Identity, Data and next-gen Federation

Hi Thomas,

You made quick work of this wickedly hard problem :-). (To run with this a bit)

This looks a lot like the consent-to-authorise pattern we have been discussing, which I would define as:

1. purpose specification
2. to consent permission scopes
3. to privacy policy model clauses
4. to UMA
5. to contract policy clauses
6. to permission
7. to user control

To make this sort of thing happen I have been working on the premise that a machine-readable privacy policy is configured with a purpose category that is defined with preference scopes (or a consent type that defines scopes, and maybe preference options as well), which are then associated with model privacy policy clauses.

This then boils down into a consent-to-authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.

At which point the data subject could bring their own license to the party, providing the model clauses that match the aforementioned policies and define how the preferences are set and managed.

The whole policy model will link with the permission scopes and preferences to basically sort out all the old-school policy issues that are currently gumming up the works.
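
A sketch of how that chain might hang together as data (every name below is a placeholder, not a registered purpose category, clause or UMA scope):

    # Placeholder sketch of the chain; none of these names are registered anywhere.
    consent_to_authorise_profile = {
        "purpose_category": "retail-banking",                                    # 1. purpose specification
        "consent_scopes": ["account-balance:read", "payments:initiate"],         # 2. consent permission scopes
        "model_clauses": ["Privacy/Retention/30d", "Privacy/NoThirdPartySale"],  # 3. and 5. model clauses
        "uma_scopes": ["view", "pay"],                                           # 4. UMA
        "preferences": {"marketing": False, "analytics": "aggregate-only"},
    }

    def permitted(profile, requested_scope):
        # 6. and 7. permission and user control: grant only what the profile covers.
        return requested_scope in profile["uma_scopes"]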

With the above framework in place, the algorithms could be defined by the purpose category (i.e. industry), configured by the consent-to-authorise profile, and then controlled by the individual with model clauses that delegate to trusted third-party applications.  This provides the higher-order transparency and accountability needed (or perhaps ethics), of which the user is ultimately the master controller via a data services provider.

It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop monkey pattern (if I am not mistaken).



- Mark


On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu> wrote:


Eve, Mark, UMA folks,

This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data".

Its permanent link is here:

http://arxiv.org/abs/1705.10880

I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipt flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).
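
Something as small as an extra field might do it. As a sketch only (these field names are not from the Consent Receipt or UMA specifications):

    # A consent receipt that names the algorithm rather than the data itself.
    consent_receipt = {
        "data_subject": "alice@example.com",
        "resource_server": "https://rs.example/data-set-Y",
        "algorithm_id": "opal:algorithm:X",   # "Alice consents to Algorithm X..."
        "permitted_action": "execute",        # "...being run against her data set Y"
        "raw_data_access": False,             # the data itself never leaves the Resource Server
        "issued_at": "2017-06-02T16:03:00Z",
    }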


Best.

/thomas/




--

Adrian Gropper MD

PROTECT YOUR FUTURE - RESTORE Health Privacy!
HELP us fight for the right to control personal health data.





--
@commonaccord








_______________________________________________
WG-UMA mailing list
WG-UMA@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-uma




--
@commonaccord