New paper on Identity, Data and next-gen Federation

Eve, Mark, UMA folks,

This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data". Its permanent link is here:

http://arxiv.org/abs/1705.10880

I'm thinking that the Claims-Gathering flows in UMA, and also the Consent-Receipt flows, could use an "algorithm-identifier" value, effectively stating that "Alice consents to algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).

Best.

/thomas/
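To make the proposal concrete, here is a minimal sketch of a consent-receipt-like record extended with an algorithm identifier. All field names and identifiers are invented for illustration; they come from neither the UMA nor the Consent Receipt specifications.

```python
# Illustrative only: a consent record carrying an "algorithm_id", capturing
# "Alice consents to algorithm X being run against her data set Y".
# Field names are hypothetical, not from any published spec.

def make_consent_record(subject, algorithm_id, dataset_id, resource_server):
    return {
        "subject": subject,                  # the consenting party
        "algorithm_id": algorithm_id,        # identifies the vetted algorithm X
        "dataset_id": dataset_id,            # identifies data set Y
        "resource_server": resource_server,  # where Y is hosted
        "action": "execute-algorithm",       # consent to run code, not to release raw data
    }

record = make_consent_record("alice", "opal:algo:avg-income-v1",
                             "urn:data:alice:finance", "https://rs.example.com")
print(record["algorithm_id"])
```

The key design point is that the consented action is "execute this algorithm", not "read this data".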

Is the data Y in Alice's private Resource Server signed, so that algorithm X can verify the authenticity of data Y?

Adrian

--
Adrian Gropper MD
PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.

_______________________________________________
WG-UMA mailing list
WG-UMA@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-uma
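Adrian's authenticity check could look something like the following sketch: the algorithm runner verifies a signature over data set Y before executing anything. A real deployment would use public-key signatures; HMAC stands in here only so the sketch runs with the standard library, and the key is invented.

```python
# Sketch: verify data set Y is authentic before running algorithm X.
# HMAC is a stand-in for a proper digital signature scheme.
import hashlib
import hmac

REPO_KEY = b"shared-secret-for-illustration"  # hypothetical key

def sign_dataset(data: bytes) -> str:
    return hmac.new(REPO_KEY, data, hashlib.sha256).hexdigest()

def run_algorithm_if_authentic(data: bytes, signature: str):
    if not hmac.compare_digest(sign_dataset(data), signature):
        raise ValueError("data set Y failed authenticity check")
    # ... algorithm X would run here, on verified data only ...
    return "result"

dataset_y = b"alice's data set Y"
sig = sign_dataset(dataset_y)
print(run_algorithm_if_authentic(dataset_y, sig))
```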

Hi Thomas,

You made quick work of this wickedly hard problem :-). (To run with this a bit.)

This looks a lot like the consent-to-authorise pattern we have been discussing, which I would define as:

1. purpose specification
2. to consent permission scopes
3. to privacy policy model clauses
4. to UMA
5. to contract policy clauses
6. to permission
7. to user control

To make this sort of thing happen, I have been working on the premise that a machine-readable privacy policy is configured with a purpose category that is defined with preference scopes (or a consent type that defines scopes, and maybe also preference options), which are then associated with model privacy policy clauses.

This then boils down into a consent-to-authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.

At which point, the data subject could bring to the party their own license, which provides the model clauses, matches the aforementioned policies, and defines how the preferences are set and managed.

The whole policy model will link with the permission scopes and preferences to basically sort out all the old-school policy issues that are gumming up the works currently.

With the above framework in place, the algorithms could be defined by the purpose category (e.g. industry), configured by the consent-to-authorise profile, and then controlled by the individual with model clauses that delegate to trusted third-party applications. This provides the higher-order transparency and accountability needed - or perhaps ethics - of which the user is ultimately the master controller, via a data services provider.

It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop-monkey pattern (if I am not mistaken).

- Mark
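The chain Mark describes can be sketched as a data structure: a purpose category that carries consent scopes and model clauses, from which UMA-style permissions are derived. Every name below is illustrative; no specification defines these structures yet.

```python
# Rough sketch of the consent-to-authorise chain: purpose category ->
# consent permission scopes -> model privacy-policy clauses -> permissions.
# All identifiers are invented for illustration.

PROFILE = {
    "purpose_category": "health-research",           # step 1: purpose specification
    "consent_scopes": ["read:labs", "compute:avg"],  # step 2: consent permission scopes
    "model_clauses": ["clause-research-use-only"],   # steps 3 and 5: model/contract clauses
}

def permissions_for(profile):
    """Derive UMA-style permission entries (step 6) from the profile (step 4)."""
    return [{"scope": scope, "clauses": profile["model_clauses"]}
            for scope in profile["consent_scopes"]]

for perm in permissions_for(PROFILE):
    print(perm["scope"])
```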

Thanks Mark,

An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute the average income of people living in Cambridge, MA"). I send you the SQL statement, you compute it in your back-end data repo (behind your firewalls), and then you return the result to me.

Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).

The point of the paper is that the barrier to sharing data (raw data) is becoming impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.

From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".

The data repository also needs a recipe to prove that the user had given consent.

/thomas/
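The OPAL pattern above can be sketched in a few lines: the raw data never leaves the repository; a caller may only trigger one of a small set of vetted queries, by identifier, and receives the aggregate result. Table contents and query identifiers are invented for illustration.

```python
# Sketch: vetted queries run behind the data provider's firewall.
# Callers name an algorithm identifier; raw rows are never returned.
import sqlite3

VETTED_QUERIES = {
    "opal:algo:avg-income-v1":
        "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'",
}

def run_vetted_query(conn, algorithm_id):
    if algorithm_id not in VETTED_QUERIES:
        raise PermissionError("algorithm not vetted by the consortium")
    return conn.execute(VETTED_QUERIES[algorithm_id]).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE residents (name TEXT, city TEXT, income REAL)")
conn.executemany("INSERT INTO residents VALUES (?, ?, ?)",
                 [("a", "Cambridge MA", 50000.0), ("b", "Cambridge MA", 70000.0)])
print(run_vetted_query(conn, "opal:algo:avg-income-v1"))  # 60000.0
```

Arbitrary SQL from the caller is never executed, which is what makes the query set vettable in the first place.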

Trust Farmer - what a great term!

Use of raw personal data is clearly a barrier to trusted service development, and this makes a lot of sense. OPAL provides an economic, high-value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most :)

- Mark

Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.

Adrian

Great to see this discussion.

Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.

This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.

The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.

http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
-- @commonaccord
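Jim's record-chain idea - each step is a record referring to earlier records (and to bits of automation) by their hash - can be sketched as follows. The record contents are invented; the point is only the hash-referencing mechanism.

```python
# Sketch: content-addressed records, where each step references its
# predecessor by hash so anyone can verify the chain.
import hashlib
import json

def record_hash(record: dict) -> str:
    # Canonical JSON (sorted keys) so the same record always hashes the same.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

write_check = {"step": "write-check", "amount": 100}
clear_check = {"step": "clear-check", "prior": record_hash(write_check)}

# Verifying a step means re-hashing its predecessor and comparing.
print(clear_check["prior"] == record_hash(write_check))
```

Granular bits of automation could be referenced the same way, by the hash of their code.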

Thanks Jim,

So in the paper I purposely omitted any mention of smart contracts (too distracting). We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart contract (think Ethereum).

The algorithm-smart-contract is triggered by the caller (querier), and it has to be parameterized (e.g. with the public keys of the querier and the data repository; payments; etc.).

So this is pointing towards a future model for data markets, where these algorithm-smart-contracts are available on many nodes of the P2P network and anyone can use them (with payment, of course). Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.

/thomas/
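The parameterization Thomas mentions might look like the following sketch: the contract is instantiated with the querier's and repository's public keys plus a payment amount, and can only be triggered once the terms are met. This is plain Python standing in for contract code; every field is illustrative.

```python
# Sketch: a parameterized algorithm-smart-contract. Not Ethereum code;
# the keys and payment terms are invented for illustration.

class AlgorithmContract:
    def __init__(self, algorithm_id, querier_pubkey, repo_pubkey, payment):
        self.algorithm_id = algorithm_id      # which vetted algorithm this wraps
        self.querier_pubkey = querier_pubkey  # parameter: who may trigger it
        self.repo_pubkey = repo_pubkey        # parameter: which repository runs it
        self.payment = payment                # parameter: required payment

    def trigger(self, paid_amount):
        """The caller (querier) triggers execution by meeting the terms."""
        if paid_amount < self.payment:
            raise ValueError("insufficient payment")
        return {"algorithm_id": self.algorithm_id, "status": "executed"}

contract = AlgorithmContract("opal:algo:avg-income-v1",
                             "querier-key", "repo-key", payment=10)
print(contract.trigger(10)["status"])  # executed
```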

Thomas, yes. And I intend a very shallow meaning for "smart contracts," along the lines of our Smart Contracts Description Language paper for the W3 last year. A bit of code that effectuates some part of a transacting function, independent of the platform. Complemented by "prose objects" in the Ricardian triple of parameters, code and prose. Our focus remains developing legal prose objects and advocating for a "Center for Decentralized Governance." On Sun, Jun 4, 2017 at 7:07 AM, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethreum).
The algorithm-smart-contract is triggered by the caller (querier) and it has to be parameterized (e.g. input the public keys of the querier and the data-repository; payments, etc).
So this is pointing towards a future model for data-markets, where these algorithm-smart-contracts are available on many node of the P2P network, and anyone can use them (with payment of course).
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________ From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@ kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com ] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
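The "granular bits of code, referenced by their hash" idea can be sketched as a content-addressed store (the snippet text and function names below are invented for illustration, echoing the fake bits in the demo):

```python
import hashlib

# Sketch: each record references automation snippets by hash, so a standards
# body can vet (and anyone can verify) the exact bytes being curated.

store = {}

def put(code: str) -> str:
    """Store a granular bit of code and return its hash reference."""
    digest = "sha256:" + hashlib.sha256(code.encode()).hexdigest()
    store[digest] = code
    return digest

def get(ref: str) -> str:
    """Resolve a hash reference; a missing/unknown hash raises KeyError."""
    return store[ref]

ref = put("accept_check(amount, payee)")   # fake granular bit, as in the demo
print(ref)
```

Validation by an organization then amounts to publishing a list of approved hashes.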
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote: Trust Farmer, what a great term!
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA"). I send you the SQL statement, then you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted of course).
The point of the paper is that the barrier to sharing data (raw data) is becoming impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data-repository also needs a recipe to prove the user had given consent.
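The round trip described above can be made concrete with a small Python sketch, using SQLite as a stand-in for the back-end repo; the table, the consent registry, and the query identifier are all illustrative, not from the OPAL paper:

```python
import sqlite3

# Sketch of the OPAL pattern: the querier sends a vetted query identifier,
# the repository checks consent and runs the query behind its own firewall,
# and only the aggregate answer leaves.

VETTED_QUERIES = {
    "avg-income-cambridge":
        "SELECT AVG(income) FROM people WHERE city = 'Cambridge MA'",
}
CONSENTS = {("alice", "avg-income-cambridge")}   # (subject, algorithm) pairs

def run_vetted_query(db, subject, algorithm_id):
    if (subject, algorithm_id) not in CONSENTS:
        raise PermissionError("no consent on record")
    if algorithm_id not in VETTED_QUERIES:
        raise ValueError("query not vetted by the consortium")
    return db.execute(VETTED_QUERIES[algorithm_id]).fetchone()[0]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE people (name TEXT, city TEXT, income REAL)")
db.executemany("INSERT INTO people VALUES (?, ?, ?)",
               [("alice", "Cambridge MA", 90000.0),
                ("bob", "Cambridge MA", 70000.0),
                ("carol", "Boston MA", 80000.0)])
print(run_vetted_query(db, "alice", "avg-income-cambridge"))  # 80000.0
```

Note the raw rows never cross the boundary; only the vetted aggregate does, and the consent check is the gate in front of it.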
/thomas/
________________________________________ From: Mark Lizar [mark@openconsent.com] Sent: Saturday, June 03, 2017 4:09 PM To: Thomas Hardjono Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: New paper on Identity, Data and next-gen Federation
Hi Thomas,
You made quick work of this wickedly hard problem :-). (To run with this a bit)
This looks a lot like the consent-to-authorise pattern we have been discussing, which I would define as:
1. purpose specification
2. to consent permission scopes
3. to privacy policy model clauses
4. to UMA
5. to contract policy clauses
6. to permission
7. to user control
To make this sort of thing happen I have been working on the premise that a machine-readable privacy policy is configured with a purpose category defined by preference scopes (or a consent type that defines scopes and perhaps also preference options), which are then associated with model privacy policy clauses.
This then boils down into a consent-to-authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
At which point, the data subject could bring to the party their own license, which provides the model clauses, matches the aforementioned policies, and defines how the preferences are set and managed.
The whole policy model will link with the permission scopes and preferences to basically sort out all the old school policy issues that are gumming up the works currently.
With the above framework in place, the algorithms could be defined by the purpose category (e.g. by industry), configured by the consent-to-authorise profile, and then controlled by the individual with model clauses that delegate to trusted third-party applications. This provides the higher-order transparency and accountability needed - or perhaps ethics - of which the user is ultimately the master controller, via a data services provider.
It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop monkey pattern (if I am not mistaken).
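The chain described above might be sketched as a simple data structure (every field name here is hypothetical, not from the Consent Receipt or UMA specifications):

```python
# Sketch: a purpose category resolves to permission scopes, model clauses,
# and the algorithms the individual has delegated to a trusted application.

consent_profile = {
    "purpose_category": "health-research",
    "permission_scopes": ["read:dataset-Y", "compute:aggregate"],
    "model_clauses": ["no-raw-data-export", "audit-log-required"],
    "delegated_algorithms": {"avg-income-q1": "trusted-app.example"},
}

def scopes_for(profile, algorithm_id):
    """Grant scopes only for algorithms the profile has delegated."""
    if algorithm_id not in profile["delegated_algorithms"]:
        return []
    return profile["permission_scopes"]

print(scopes_for(consent_profile, "avg-income-q1"))
```

The intent is just to show the linkage: purpose category to scopes to clauses, with the individual's delegation list as the gate.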
- Mark
On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu> wrote:
Eve, Mark, UMA folks,
This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data".
Its permanent link is here:
http://arxiv.org/abs/1705.10880
I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipts flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).
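Such an "algorithm-identifier" might sit in a consent receipt roughly as below; the field names and the HMAC signing are illustrative (not part of any Kantara specification), but signing the receipt is one way the resource server could later prove the consent was granted:

```python
import hashlib, hmac, json

# Hypothetical receipt: "Alice consents to Algorithm X against data set Y".
receipt = {
    "subject": "alice",
    "resource_server": "https://rs.example/alice",   # invented URL
    "data_set": "Y",
    "algorithm_id": "sha256:avg-income-q1",          # the algorithm-identifier
}

key = b"shared-secret"   # assumed key shared with the auditor/verifier
payload = json.dumps(receipt, sort_keys=True).encode()
receipt_sig = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Verification: recompute the MAC over the canonicalized receipt.
ok = hmac.compare_digest(
    receipt_sig, hmac.new(key, payload, hashlib.sha256).hexdigest())
print(ok)
```

A real deployment would presumably use asymmetric signatures so third parties can verify without the key; HMAC keeps the sketch short.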
Best.
/thomas/ <open-algorithms-identity-federation-1705.10880.pdf>
--
Adrian Gropper MD
PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.
--
@commonaccord

Hi Jim and Thomas - how widespread is the use of the Ricardian Contract? I'm reading a paper by Ian Grigg, and see some very useful features for things like the Consent Receipt and consent management generally. Is this a generally accepted construct? Where's the state of the art in this space? And please don't reply "smart contracts" ;-) Andrew. On Sun, Jun 4, 2017 at 7:38 AM James Hazard <james.g.hazard@gmail.com> wrote:
Thomas, yes. And I intend a very shallow meaning for "smart contracts," along the lines of our Smart Contracts Description Language paper for the W3 last year. A bit of code that effectuates some part of a transacting function, independent of the platform. Complemented by "prose objects" in the Ricardian triple of parameters, code and prose.
Our focus remains developing legal prose objects and advocating for a "Center for Decentralized Governance."
-- Andrew Hughes CISM CISSP Independent Consultant In Turn Information Management Consulting o +1 650.209.7542 m +1 250.888.9474 1249 Palmer Road, Victoria, BC V8P 2H8 AndrewHughes3000@gmail.com ca.linkedin.com/pub/andrew-hughes/a/58/682/ Identity Management | IT Governance | Information Security

Hi folks - I am coming into this particular thread late, so I may be missing the initial point. Begging folks' indulgence if this is a "fork" (I hope it is not) - on the prior couple of points, query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems?
Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on the "SI" concept), where the synthesis of multiple instantiations of identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart-contract application that might reflect intentional or accidental out-of-bounds activity on a network?
Might standard contracts be used to establish a form of system "self" (albeit an intangible form of system "self") that can distinguish system "other" (like learned immune systems), and police (and enforce) terms to reify the "self" that reflects optimal system function (at least optimal as programmed in the smart contracts)?
Are P2P networks a form of "SI" (Synthetic Intelligence) that is synthesized into a form of system intelligence greater than the sum of its parts, and which reflects an additional potential architecture for distributed AI?
Part of our challenge in these sorts of distributed-intelligence settings is, from a graph theory perspective, to get each "node" singing off the same song sheet. When those nodes are people, narratives (such as from religion, politics, social norms, economic incentives, ethical constructions, etc.) are the "song sheet/programming" of the nodes. When those nodes are commercial organizations, then economic goals (as required by their respective state laws of formation, articles, bylaws and contracts) and regulatory constraints are the "song sheet/programming" of the nodes.
As an aside, a fundamental challenge of the Internet is that it is a shared infrastructure among groups that are singing off different song sheets. For example, when people pursue advice online about a social/emotional challenge, they are also (below their awareness) sharing the communication space (and the communication) with commercial actors which, under current TOU/TOS arrangements, are able to access their communication, glean insight into that person's behavior/motivations, and sell them some "placebo/face cream/whatever" offered to help fill the voids (both epidermic and existential), fulfilling the commercial goal of generating revenue. Smart contracts will provide a mechanism for us to calibrate, normalize (and analyze, and improve) the balance of "information arbitrage" that is currently skewed in these shared spaces. Those imbalances are an artifact of yesterday's power relationships and their technical and legal embodiment in the Internet.
Based on the foregoing observations/quasi-rant, it seems likely that smart contracts will find initial deployment in the simplest (lowest-risk) contract settings - those with few input variables (called "conditions" in law) and applied in innocuous settings. For example, a useful model of these "distributed smart contract" deployments may be the old (distributed) mechanical parking meter system, where each parking meter is built/programmed to register a certain amount of time for a given payment, and is constrained in its pre-programmed flexibility. The parking meter system, in gross, represents a form of mechanistic "neighborhood watch" - not against various crimes, but only against the specific violation of unpaid parking. Simple, few variables, innocuous - reliable.
Standard contracts (such as the FedEx(tm) standard shipping terms, or the standard forms of PCI-DSS conformant credit card terms) create an intrinsic distributed (albeit intangible) risk-sharing topology that is reified with the serial agreements of its participants. In other words, every contracting party becomes an interested party in the reliability of the system when they sign up and come to depend on that system. This is particularly so in executory contract settings, or those with minimal signing ceremonies (i.e., "click to accept"), where the recruitment of participants into the constraints of the agreement is made as simple as possible (so as not to distract the raw system recruit from their enthusiastic information-seeking behavior!).
All of this is intended to convey the suggestion that there are strategies that can help address the (appropriately) perceived dangers of "smart contracts" being enforced in inappropriate settings, or despite the failure of party expectations. Settings in which "dumb contracts" (like mechanical parking meters, shipping agreements, etc.) are already successfully applied at large scale may provide favorable piloting circumstances. The constraint of the system is a source of security, since it means that a parking meter cannot, for example, be accidentally or intentionally reprogrammed to do less innocuous things. Can smart contracts be "hardened" against such reprogramming? Perhaps (in the alternative) the immediate "externality" of the smart contract can detect and dampen functional drift of a given smart contract? One similar idea that we have been fussing with in some security discussions is a P2P "neighborhood watch" AMONG IoT devices. Distributed solutions for distributed challenges. But I digress. . .
Kind regards,
Scott
Scott L. David
Director of Policy
Center for Information Assurance and Cybersecurity
University of Washington - Applied Physics Laboratory
w- 206-897-1466 m- 206-715-0859 Tw - @ScottLDavid

Most of the below is over my head, or at least outside the bounds of my time and patience.
In case it helps, I suggest that anything called "smart" that modifies a noun (such as "contract"), substitutes machine agency for the human kind, and will not permit humans to inspect its workings (in other words, operates as a black box, either intentionally or in effect) should be off the table here. If it's not, let me know and I'll drop out.
If it is off the table, let me add that the need to maintain personal agency should be our pole star as we battle in the night to maintain, or advance, what agency we have left in a world where we are already all but required to operate, always as second parties, with agency diminished by design, inside giant first-party silos and silo wannabes. It may help to note that investment by giant silos in maintaining control of whole captive populations, and in giant silo wannabes, dwarfs investment in tools, such as UMA, supporting personal agency.
I still think we can win, by the way. I don't think we can do it by leveraging the rhetoric and passing obsessions (e.g. big data, AI, ML, adtech and martech) of those vested in manipulating us - even if it's for our own good. It's still more about selling us shit.
FWIW - and my mind is not made up on this - "data markets," as conceived so far, are still all about selling us shit, because all the imaginings and developments I've seen in that direction operate only in conditions where we are buying shit and others are selling shit, or wanting to make us want to buy the kind of shit other companies are paying to push at us. This bounds life inside markets, and bounds markets inside transactions. We risk losing at both while we fail to win at liberating our lives from those who only want to sell us shit.
Doc
On Jun 4, 2017, at 12:53 PM, sldavid <sldavid@uw.edu> wrote:
HI folks - I am coming into this particular thread late, so I may be missing the initial point. Begging folks' indulgence if this is a "fork"(I hope it is not) - on the prior couple of points, query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems?
Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on "SI" concept), where the synthesis of multiple instantiations of the identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart contract application that might reflect intentional or accidental out-of-bounds activity on a network?
Might standard contracts be used to establish a form of system "self" (albeit an intangible form of system "self") that can distinguish system "other" (like learned immunity systems), and police (and enforce) terms to reify the "self" that is reflective of the optimal system function (at least optimal as programmed in the smart contracts!)
Are P2P networks a form of "SI" (Synthetic Intelligence) that is synthesized into a form of system intelligence that is greater than the sum of its parts, and which reflects an additional potential architecture for distributed AI?
Part of our challenge in these sorts of distributed intelligence settings is, from a graph theory perspective, to get each "node" to be singing off the same song sheet. When those nodes are people, narratives (such as from religion, politics, social norms, economic incentives, ethical constructions, etc.) are the "song sheet/programming" of the nodes.
When those nodes are commercial organizations, then economic goals (as required by their respective state laws of formation, articles, bylaws and contracts) and regulatory constraints are the "song sheet/programming" of the nodes.
As an aside, a fundamental challenge of the Internet is that it is a shared infrastructure among groups that are singing off different song sheets. For example, when people are pursuing advice online about a social/emotional challenge (for example), they are also (below their awareness) sharing the communication space (and the communication) with commercial actors which (under current TOU/TOS arrangements, etc.) are able to access their communication and hence glean insight into that person's behavior/motivations and sell them some "placebo/face cream/whatever" that is offered to help fill the voids (both epidermic and existential, etc.) which fulfills the commercial goal of generating revenue, etc. Smart contracts will provide a mechanism for us to calibrate, normalize (and analyze, and improve) the balance of "information arbitrage" that is currently skewed in these shared spaces. Those imbalances are currently an artifact of yesterday's power relationships and its technical and legal embodiment in the Internet.
Based on the foregoing observations/quasi-rant, it seems likely that smart contracts will find initial deployment in the simplest (lowest risk) contract settings - those with few input variables (called "conditions" in law) and applied in innocuous settings.
For example, a useful model of these "distributed smart contract" deployments may be the old (distributed) mechanical parking meter system, where each parking meter is built/programmed to register a certain amount of time for a given payment, and is constrained in its pre-programmed flexibility. The parking meter system, in gross, represents a form of mechanistic "neighborhood watch" -- not against various crimes, but only the specific violation of unpaid parking. Simple, few variables, innocuous - reliable.
Standard contracts (such as the FedEx(tm) standard shipping terms, or the standard forms of PCI-DSS-conformant credit card terms) create an intrinsic distributed (albeit intangible) risk-sharing topology that is reified with the serial agreements of its participants. In other words, every contracting party becomes an interested party in the reliability of the system when they sign up and hence depend on that system. This is particularly so in executory contract settings, or those with minimal signing ceremonies (i.e., "click to accept"), where the recruitment of participants into the constraints of the agreement is made as simple as possible (so as not to distract the raw system recruit from their enthusiastic information-seeking behavior!)
All of this is intended to convey the suggestion that there are strategies that can help to address the (appropriately) perceived dangers of "smart contracts" being enforced in inappropriate settings, or despite the failure of party expectations. Settings in which "Dumb contracts" (like mechanical parking meters, shipping agreements, etc.,) are already successfully applied at large scales may provide favorable piloting circumstances.
The constraint of the system is a source of security, since it means that a parking meter cannot, for example, be accidentally or intentionally reprogrammed to do less innocuous things. Can smart contracts be "hardened" against such reprogramming? Perhaps (in the alternative) the immediate "externality" of the smart contract can detect and dampen functional drift of a given smart contract? One similar idea that we have been fussing with in some security discussions is a P2P "neighborhood watch" AMONG IoT devices. Distributed solutions for distributed challenges.
But I digress. . .
Kind regards, Scott
Scott L. David
Director of Policy Center for Information Assurance and Cybersecurity University of Washington - Applied Physics Laboratory
w- 206-897-1466 m- 206-715-0859 Tw - @ScottLDavid
From: wg-uma-bounces@kantarainitiative.org on behalf of Thomas Hardjono <hardjono@mit.edu> Sent: Sunday, June 4, 2017 7:07 AM To: James Hazard; Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethereum).
The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g. with inputs such as the public keys of the querier and the data-repository, payments, etc.).
So this is pointing towards a future model for data-markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
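To make the parameterization concrete, here is a minimal, hypothetical Python sketch of the call structure such an algorithm-smart-contract might expect. All names and fields are invented for illustration; a real contract (e.g. on Ethereum) would also verify signatures and escrow the payment.

```python
# Illustrative only: sketch of an "algorithm-smart-contract" invocation,
# parameterized with the public keys of both parties and a payment.
from dataclasses import dataclass

@dataclass
class AlgorithmContractCall:
    algorithm_id: str    # identifies the vetted query (hypothetical name)
    querier_pubkey: str  # public key of the caller (querier)
    repo_pubkey: str     # public key of the data repository
    payment_wei: int     # payment offered for executing the algorithm

def trigger(call: AlgorithmContractCall) -> dict:
    """Validate the call parameters before the contract would execute.

    This sketch only checks that the required fields are present; it stands
    in for the on-chain logic Thomas's project would actually provide.
    """
    if not (call.algorithm_id and call.querier_pubkey and call.repo_pubkey):
        raise ValueError("missing required parameter")
    if call.payment_wei <= 0:
        raise ValueError("payment required")
    return {"status": "accepted", "algorithm": call.algorithm_id}
```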
/thomas/
________________________________________ From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
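The record-referencing pattern Jim describes -- granular bits of automation referenced by their hash, so an organization or standards body could vet and pin exact versions -- can be sketched as a simple content-addressed store. This is an assumption about the mechanism for illustration, not the demo's actual code.

```python
# Illustrative content-addressed store: records reference other records
# (including bits of automation) by the SHA-256 hash of their content.
import hashlib

store: dict[str, bytes] = {}  # hash -> content

def put(content: bytes) -> str:
    """Store content and return the hash that other records can reference."""
    h = hashlib.sha256(content).hexdigest()
    store[h] = content
    return h

def get(h: str) -> bytes:
    """Fetch content and verify it still matches the referencing hash."""
    content = store[h]
    assert hashlib.sha256(content).hexdigest() == h
    return content

# A "step" record referencing a (fake) granular bit of automation by hash.
code_hash = put(b"def accept(check): return True")
step = {"event": "06-Accept", "automation_ref": code_hash}
```

Because the reference is a hash, any tampering with the stored automation is detectable at retrieval time, which is what makes curation and validation by a third party meaningful.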
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote: Trust Farmer, what a great term!
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute the average income of people living in Cambridge MA"). I send you the SQL statement, you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
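As a rough sketch of this exchange (table, column names, and data are invented for illustration): the querier sends only the SQL text, the repository runs it locally, and only the aggregate result leaves the firewall.

```python
# Illustrative OPAL-style exchange: the query moves to the data; the raw
# rows never leave the repository.
import sqlite3

def answer_vetted_query(conn: sqlite3.Connection, sql: str) -> float:
    # Repository side: execute the vetted query; only the scalar is returned.
    (result,) = conn.execute(sql).fetchone()
    return result

# Repository side: private data stays behind the firewall (in-memory here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE residents (name TEXT, city TEXT, income REAL)")
conn.executemany("INSERT INTO residents VALUES (?, ?, ?)",
                 [("a", "Cambridge MA", 50000.0),
                  ("b", "Cambridge MA", 70000.0),
                  ("c", "Boston MA", 90000.0)])

# Querier side: sends only SQL text, receives only the aggregate.
avg = answer_vetted_query(
    conn, "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'")
# avg == 60000.0
```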
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data repository also needs a receipt to prove the user had given consent.
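A hedged sketch of what such a consent record might look like, expressing "I give consent for algorithm A to be executed on data-repo-X". The field names are invented here; a real Consent Receipt would follow the Kantara specification.

```python
# Illustrative consent record carrying the proposed "algorithm-identifier".
import hashlib
import json

def make_consent_record(subject: str, repo_id: str, algorithm_id: str) -> dict:
    record = {
        "subject": subject,
        "data_repository": repo_id,     # "my data is part of data-repo-X"
        "algorithm_id": algorithm_id,   # "consent for algorithm A"
        "consented": True,
    }
    # A digest the repository could retain as evidence of the consent given.
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record

def consent_covers(record: dict, repo_id: str, algorithm_id: str) -> bool:
    # Repository side: check an incoming query against the stored consent.
    return (record["consented"]
            and record["data_repository"] == repo_id
            and record["algorithm_id"] == algorithm_id)
```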
/thomas/
________________________________________ From: Mark Lizar [mark@openconsent.com] Sent: Saturday, June 03, 2017 4:09 PM To: Thomas Hardjono Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: New paper on Identity, Data and next-gen Federation
Hi Thomas,
You made quick work of this wickedly hard problem :-). (To run with this a bit)
This looks a lot like the consent to authorise pattern we have been discussing, which I would define as:

1. purpose specification
2. to consent permission scopes
3. to privacy policy model clauses
4. to UMA
5. to contract policy clauses
6. to permission
7. to user control
To make this sort of thing happen I have been working on the premise that a machine-readable privacy policy is configured with a purpose category that is defined with preference scopes (or a consent type that defines scopes and maybe also preference options), which are then associated with model privacy policy clauses.
This then boils down into a consent to authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
At which point, the data subject could bring to the party their own license, which provides the model clauses, which match the aforementioned policies and defines how the preferences are set and managed.
The whole policy model will link with the permission scopes and preferences to sort out all the old-school policy issues that are currently gumming up the works.
With the above framework in place, the algorithms could be defined by the purpose category (i.e. industry), configured by the consent to authorise profile, and then controlled by the individual with model clauses that delegate to trusted third-party applications. This provides the higher-order transparency and accountability needed - or perhaps ethics - of which the user is ultimately the master controller, via a data services provider.
It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop monkey pattern (if I am not mistaken).
- Mark
On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu> wrote:
Eve, Mark, UMA folks,
This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data".
Its permanent link is here:
http://arxiv.org/abs/1705.10880
I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipts flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).
Best.
/thomas/ <open-algorithms-identity-federation-1705.10880.pdf>
_______________________________________________ WG-UMA mailing list WG-UMA@kantarainitiative.org http://kantarainitiative.org/mailman/listinfo/wg-uma
--
Adrian Gropper MD
PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.
-- @commonaccord

Scott, Very insightful, as usual :-)
query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems?
It could be. Also, just as there are different hierarchies and importance levels among biological synapses, there could be different levels or hierarchies of smart-contracts. That is, smart-contracts (bots) to manage other smart-contracts (bots). What might be different with smart-contracts (at least as we understand and define them today) is that (a) all nodes must have equal access to executing a smart-contract (i.e. each node can load the same smart-contract), and (b) some consensus agreement protocol is used to arrive at a stable (agreed) state among the nodes executing the smart-contract. I don't know if biological "synapses" use such mechanisms (i.e. actually reaching agreement among "synapses").
Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on "SI" concept), where the synthesis of multiple instantiations of the identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart contract application that might reflect intentional or accidental out-of-bounds activity on a network?
So the logic to effect suppression or self-regulation must be part of the consensus protocol among the nodes. One thing I'm interested in looking at is "chained" or cascading smart-contracts (where sc-A triggers sc-B and then sc-C, and so on) -- and the various possible deadlocks or loops this can create. Recall that one of the challenges of the early internet was coming up with loop-free or loop-detecting routing protocols (think of the IS-IS and OSPF protocols for local routing).

/thomas/
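The loop risk in cascading smart-contracts can be illustrated with a simple depth-first search over the trigger graph. The sc-A/sc-B/sc-C names are the hypothetical chain from above, and this is a sketch of static loop detection, not anything a current smart-contract platform actually performs.

```python
# Illustrative: detect trigger loops in a graph of chained smart-contracts
# (sc-A triggers sc-B, which triggers sc-C, ...), analogous to loop-free
# routing-protocol design.
def has_cycle(triggers: dict[str, list[str]]) -> bool:
    visiting, done = set(), set()

    def dfs(node: str) -> bool:
        if node in visiting:
            return True          # back-edge found: a trigger loop
        if node in done:
            return False
        visiting.add(node)
        for nxt in triggers.get(node, []):
            if dfs(nxt):
                return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in triggers)

# sc-A -> sc-B -> sc-C is loop-free; adding sc-C -> sc-A creates a loop.
chain = {"sc-A": ["sc-B"], "sc-B": ["sc-C"], "sc-C": []}
```

Checking the chain before deployment is one way to rule out the endless re-triggering Thomas worries about; deadlocks from shared state would need separate analysis.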

Thomas wrote: "One thing I'm interested in looking at is "chained" or cascading smart-contracts (where sc-A triggers sc-B and then C and so on) -- and the various possible deadlocks or loops it can create. Recall that one of the challenges of the early internet was coming up with loop-free or loop-detecting routing protocols (think of the IS-IS protocol and the OSPF protocol for local routing)."

My response to Thomas indulges in some bits of esoterica, and might not be of interest to the entire group. However, thinking about "chains", "loops" and other such relationship forms as both potential challenges and as features that can be recruited into service as intentional, helpful structural constraints on interrelated systems (whether smart contracts, or nerve cells, or nucleic acids - notably, all of which are information-carrying systems) is, I think, absolutely critical to achieving distributed scaled systems for information, so I am sharing the reply with the group. My apologies in advance if these points seem even more "out there" than my usual contributions. Consider, however, that these are all examples taken from various other types of "information systems", and therefore might provide evidence of some intrinsic structural approaches to information governance, etc.

On the issue of chained contracts and loops, etc., among the points we should discuss (in the appropriate forum!) are:

1. What are the implications of analyzing the 2008 financial "break" as the result of cascading "cross default" provisions of various financial agreements? What are the contractual and market mechanisms that might be applied to moderate those cascades? What mechanisms are deployed to moderate positive feedback in other contexts?

2. What is the nature of feedback (whether positive and/or negative) in propagating and/or mitigating dynamic chains? How might certain loop structures control chain propagation (like drilling a hole at the end of a crack in a glass window)?

3. How might loops be purposefully constructed and introduced to help initiate "risk-mitigation-by-design" in systems? There are multiple examples of "loops" that manifest across different vectors in different domains to prevent altered configurations that can be harmful to "normal" system operation. Examples include:
* (a) chemical loops - the positive and negative relationships among neurotransmitters at synapses and their various "excitatory" and "inhibitory" effects and interactions.
* (b) see related issues of chemical "bistability", such as http://bmcsystbiol.biomedcentral.com/articles/10.1186/1752-0509-3-90 . How might intentional loops help a system acquire one of the designed "stable states" for which it is developed?
* (c) physical loops - DNA, RNA and proteins. Loop structures prevent a lot of genetic (and epigenetic) mischief in nucleic acids. Various harmful mutations that could result in harmful configurations of DNA and RNA, and of secondary and tertiary folding structures of proteins, are constrained by various forms of loop structures. See, e.g., "RNA/DNA hairpin loop structures" at https://en.wikipedia.org/wiki/Stem-loop
* (d) contractual loops - contracts and laws provide various loop structures that help introduce "strain relief" into systems. Consider such devices as "cure periods" for contract default as temporal "loops" introduced as "strain relief" in contract duties/rights contexts.

4. This list is just off the top. I am certain that further reflection will reveal other structures in which loops and chains play a pathological and/or palliative role in systems.

Where distributed systems rely on each "node" being "programmed" with the same narrative (aka "singing off the same song sheet"), loop and chain structures embedded in the architecture can help to assure reliability-by-default. Good stuff.

Kind regards, Scott

Scott L. David Director of Policy Center for Information Assurance and Cybersecurity University of Washington - Applied Physics Laboratory w- 206-897-1466 m- 206-715-0859 Tw - @ScottLDavid ________________________________ From: Thomas Hardjono <hardjono@mit.edu> Sent: Sunday, June 4, 2017 6:26 PM To: sldavid; James Hazard; Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: RE: [WG-UMA] New paper on Identity, Data and next-gen Federation Scott, Very insightful, as usual :-)
query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems?
It could be. Also, just as there are different hierarchies and importance-levels to biological synapses, there could be different levels or hierarchies of smart-contracts. That is, smart-contracts (bots) that manage other smart-contracts (bots). What might be different with smart-contracts (at least the way we understand and define them today) is that (a) all nodes must have equal access to executing a smart-contract (i.e., each node can load the same smart-contract) and (b) some consensus agreement protocol is used to arrive at a stable state (agreed state) among the nodes executing the smart-contract. I don't know if biological "synapses" use such mechanisms (i.e., actually reaching agreement among "synapses").
Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on "SI" concept), where the synthesis of multiple instantiations of the identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart contract application that might reflect intentional or accidental out-of-bounds activity on a network?
So the logic to effect suppression or self-regulation must be part of the consensus protocol among the nodes. One thing I'm interested in looking at is "chained" or cascading smart-contracts (where sc-A triggers sc-B and then C and so on) -- and the various possible deadlocks or loops they can create. Recall that one of the challenges of the early internet was coming up with loop-free or loop-detecting routing protocols (think of the IS-IS protocol and the OSPF protocol for local routing). /thomas/ ________________________________________ From: sldavid [sldavid@uw.edu] Sent: Sunday, June 04, 2017 12:53 PM To: Thomas Hardjono; James Hazard; Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation Hi folks - I am coming into this particular thread late, so I may be missing the initial point. Begging folks' indulgence if this is a "fork" (I hope it is not) - on the prior couple of points, query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems? Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on the "SI" concept), where the synthesis of multiple instantiations of identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart contract application that might reflect intentional or accidental out-of-bounds activity on a network? Might standard contracts be used to establish a form of system "self" (albeit an intangible form of system "self") that can distinguish system "other" (like learned immunity systems), and police (and enforce) terms to reify the "self" that is reflective of optimal system function (at least optimal as programmed in the smart contracts!)
Are P2P networks a form of "SI" (Synthetic Intelligence) that is synthesized into a form of system intelligence that is greater than the sum of its parts, and which reflects an additional potential architecture for distributed AI? Part of our challenge in these sorts of distributed intelligence settings is, from a graph theory perspective, to get each "node" to be singing off the same song sheet. When those nodes are people, narratives (such as from religion, politics, social norms, economic incentives, ethical constructions, etc.) are the "song sheet/programming" of the nodes. When those nodes are commercial organizations, then economic goals (as required by their respective state laws of formation, articles, bylaws and contracts) and regulatory constraints are the "song sheet/programming" of the nodes. As an aside, a fundamental challenge of the Internet is that it is a shared infrastructure among groups that are singing off different song sheets. For example, when people are pursuing advice online about a social/emotional challenge, they are also (below their awareness) sharing the communication space (and the communication) with commercial actors which (under current TOU/TOS arrangements, etc.) are able to access their communication and hence glean insight into that person's behavior/motivations, and sell them some "placebo/face cream/whatever" that is offered to help fill the voids (both epidermal and existential, etc.), which fulfills the commercial goal of generating revenue, etc. Smart contracts will provide a mechanism for us to calibrate, normalize (and analyze, and improve) the balance of "information arbitrage" that is currently skewed in these shared spaces. Those imbalances are currently an artifact of yesterday's power relationships and their technical and legal embodiment in the Internet.
Based on the foregoing observations/quasi-rant, it seems likely that smart contracts will find initial deployment in the simplest (lowest-risk) contract settings - those with few input variables (called "conditions" in law) and applied in innocuous settings. For example, a useful model of these "distributed smart contract" deployments may be the old (distributed) mechanical parking meter system, where each parking meter is built/programmed to register a certain amount of time for a given payment, and is constrained in its pre-programmed flexibility. The parking meter system, in gross, represents a form of mechanistic "neighborhood watch" - not against various crimes, but only against the specific violation of unpaid parking. Simple, few variables, innocuous - reliable. Standard contracts (such as the FedEx(tm) standard shipping terms, or the standard forms of PCI-DSS-conformant credit card terms) create an intrinsic distributed (albeit intangible) risk-sharing topology that is reified with the serial agreements of its participants. In other words, every contracting party becomes an interested party in the reliability of the system when they sign up and hence depend on that system. This is particularly so in executory contract settings, or those with minimal signing ceremonies (i.e., "click to accept"), where the recruitment of participants into the constraints of the agreement is made as simple as possible (so as not to distract the raw system recruit from their enthusiastic information-seeking behavior!) All of this is intended to convey the suggestion that there are strategies that can help to address the (appropriately) perceived dangers of "smart contracts" being enforced in inappropriate settings, or despite the failure of party expectations. Settings in which "dumb contracts" (like mechanical parking meters, shipping agreements, etc.) are already successfully applied at large scales may provide favorable piloting circumstances.
The constraint of the system is a source of security, since it means that a parking meter cannot, for example, be accidentally or intentionally reprogrammed to do less innocuous things. Can smart contracts be "hardened" against such reprogramming? Perhaps (in the alternative) the immediate "externality" of the smart contract can detect and dampen functional drift of a given smart contract? One similar idea that we have been fussing with in some security discussions is a P2P "neighborhood watch" AMONG IoT devices. Distributed solutions for distributed challenges. But I digress. . . Kind regards, Scott Scott L. David Director of Policy Center for Information Assurance and Cybersecurity University of Washington - Applied Physics Laboratory w- 206-897-1466 m- 206-715-0859 Tw - @ScottLDavid ________________________________ From: wg-uma-bounces@kantarainitiative.org <wg-uma-bounces@kantarainitiative.org> on behalf of Thomas Hardjono <hardjono@mit.edu> Sent: Sunday, June 4, 2017 7:07 AM To: James Hazard; Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation Thanks Jim, So in the paper I purposely omitted any mention of smart-contracts (too distracting). We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethereum). The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g. input the public keys of the querier and the data repository; payments, etc.). So this is pointing towards a future model for data markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course). Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/ ________________________________________ From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation Great to see this discussion. Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash. This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body. The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general. http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.) On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com<mailto:agropper@healthurl.com>> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough. Adrian On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com<mailto:mark@openconsent.com>> wrote: Trust Farmer, what a great term! Use of RAW personal data is clearly a barrier for trusted service development, and this makes a lot of sense. OPAL provides an economic, high-value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most :) - Mark
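(Jim's demo of "each step as a record that references other records", with automation referenced by its hash, might be sketched roughly as below. The in-memory store and record shapes are illustrative assumptions, not CommonAccord's actual implementation.)

```python
# Content-addressed records: each record is stored under the hash of its
# contents, and a "step" record points to the record that defines it.
import hashlib
import json

store = {}

def put(record: dict) -> str:
    """Store a record and return its content hash (its identifier)."""
    data = json.dumps(record, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    store[h] = record
    return h

# A granular bit of "automation" (fake, as in the demo), then a step
# record that references it by hash.
code_ref = put({"type": "automation", "body": "accept_check(amount)"})
step_ref = put({"type": "step", "action": "accept", "defined_by": code_ref})

# Following the hash link resolves the step's meaning.
meaning = store[store[step_ref]["defined_by"]]
```

Because identifiers are content hashes, a standards body can vet a granular bit of automation once and everyone can verify they are referencing exactly the vetted version.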
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu<mailto:hardjono@mit.edu>> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA"). I send you the SQL statement, then you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data repository also needs a recipe to prove the user had given consent.
/thomas/
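(Putting Thomas's pieces together - a consortium's small set of vetted queries, plus the user's "I give consent for algorithm A to be executed on data-repo-X" - might look like the sketch below. Every name here is an illustrative assumption; the point is only that the repository checks both that the algorithm is vetted and that this user consented to it running on this repo.)

```python
# Vetted-query registry keyed by algorithm identifier (a content hash),
# plus a per-(user, repo) consent check. Illustrative only.
import hashlib

def qid(sql: str) -> str:
    """Algorithm identifier: hash of the vetted query text."""
    return hashlib.sha256(sql.encode()).hexdigest()[:16]

AVG_INCOME = "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'"

VETTED = {qid(q): q for q in [
    AVG_INCOME,
    "SELECT COUNT(*) FROM residents WHERE age >= 65",
]}

# Alice's consent record: which algorithm ids may run on which repo.
consents = {("alice", "data-repo-X"): {qid(AVG_INCOME)}}

def may_run(user: str, repo: str, algorithm_id: str) -> bool:
    """The repo's check: the algorithm must be vetted AND consented to."""
    return (algorithm_id in VETTED
            and algorithm_id in consents.get((user, repo), set()))

assert may_run("alice", "data-repo-X", qid(AVG_INCOME))
assert not may_run("alice", "data-repo-X", qid("SELECT * FROM residents"))
```

The stored consent entry doubles as the "recipe to prove the user had given consent": in practice it would be signed (e.g. a consent receipt) rather than a bare set membership.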
________________________________________ From: Mark Lizar [mark@openconsent.com<mailto:mark@openconsent.com>] Sent: Saturday, June 03, 2017 4:09 PM To: Thomas Hardjono Cc: wg-uma@kantarainitiative.org<mailto:wg-uma@kantarainitiative.org>; eve.maler@forgerock.com<mailto:eve.maler@forgerock.com>; hardjono@media.mit.edu<mailto:hardjono@media.mit.edu> Subject: Re: New paper on Identity, Data and next-gen Federation
Hi Thomas,
You made quick work of this wickedly hard problem :-). (To run with this a bit)
This looks a lot like the consent to authorise pattern we have been discussing, which I would define as:
1. purpose specification
2. to consent permission scopes
3. to privacy policy model clauses
4. to UMA
5. to contract policy clauses
6. to permission
7. to user control
To make this sort of thing happen I have been working on the premise that a machine-readable privacy policy is configured with a purpose category that is defined with preference scopes (or a consent type that defines scopes, and maybe also preference options), which are then associated with model privacy policy clauses.
This then boils down into a consent to authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
At which point, the data subject could bring to the party their own license, which provides the model clauses, which match the aforementioned policies and defines how the preferences are set and managed.
The whole policy model will link with the permission scopes and preferences to basically sort out all the old school policy issues that are gumming up the works currently.
With the above framework in place, the algorithms could be defined by the purpose category (i.e. industry) configured by the consent to authorise profile, and then controlled by the individual with model clauses that delegate to trusted third party applications. This provides the higher-order transparency and accountability needed - or perhaps ethics - of which the user is ultimately the master controller, via a data services provider.
It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop monkey pattern (if I am not mistaken).
- Mark
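(Mark's chain - purpose specification, to preference scopes, to model clauses, to UMA permission scopes - could be read as plain data. The vocabulary below is entirely an illustrative assumption, not a standardized taxonomy; it only shows how a purpose category might resolve down to a consent-to-authorise UMA scope profile.)

```python
# One purpose category resolves to preference scopes and model privacy
# policy clauses; UMA permission scopes are derived from it.
POLICY_MODEL = {
    "health-research": {                      # purpose category (industry)
        "preference_scopes": ["share:aggregate", "share:algorithm-only"],
        "model_clauses": ["no-raw-data-export", "audit-log-required"],
    },
}

def uma_scopes(purpose: str) -> list:
    """Derive the consent-to-authorise UMA scope profile for a purpose."""
    entry = POLICY_MODEL[purpose]
    return [f"{purpose}:{s}" for s in entry["preference_scopes"]]

profile = uma_scopes("health-research")
```

A data subject's "bring your own license" would then be a matter of substituting their own model clauses into the same structure.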

Thomas;

It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy-protective systems on top of a non-privacy-protective infrastructure, this makes sense and is a step away from the risks and abuses we have all seen.

That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ, as it were - perhaps querying resource servers using UMA, or by linking particular algorithms to dynamically negotiated information-sharing agreements. In both of these cases there is no need of a trust entity, because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself, and it bypasses the risk endemic in creating yoodge pools of RAW data.

Sincerely, John Wunderlich (@PrivacyCDN) Privacist & PbD Ambassador <http://privacybydesign.ca/>
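(John's "pools of algorithms, data stays in situ" idea can be sketched as follows. The plain dicts below stand in for UMA-protected resource servers - an assumption for illustration only. The essential property is that the algorithm visits each server and only aggregates leave; raw records never do.)

```python
# The algorithm runs where the data lives; the querier sees only
# per-server aggregates, never raw records.
def local_mean(records):
    """Runs *inside* the resource server; raw records never leave it."""
    return sum(records) / len(records)

resource_servers = {          # stand-ins for UMA-protected endpoints
    "alice-rs": [52000, 48000],
    "bob-rs": [61000],
}

# Each server returns (aggregate, count); the querier combines them.
aggregates = {name: (local_mean(data), len(data))
              for name, data in resource_servers.items()}
total = sum(mean * n for mean, n in aggregates.values())
count = sum(n for _, n in aggregates.values())
overall = total / count       # weighted mean across servers
```

Control stays with the individual because each resource server can apply its owner's policy before answering; the querier's view of any one person is bounded by what the local aggregate reveals.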

+1 to Doc's observations about surveillance capitalism and agency. My impression is that our design for "the web we want" (a tagline for the diglife collaborative) needs to go beyond tools for strict agency (e.g. UMA and Bitcoin) to tools that actively inject noise and plausible deniability into the AI surveillance systems. Anonymity is not enough when the ISPs and other critical communications infrastructure are designed for involuntary surveillance through traffic analysis. In a post-Snowden, post-Google world, self-sovereign identity will be as much about game theory as Bitcoin already is. Active countermeasures will be part of next-gen identity. Adrian On Sun, Jun 4, 2017 at 2:15 PM John Wunderlich <john@wunderlich.ca> wrote:
Thomas;
It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy protective systems on top of a non-privacy protective infrastructure this makes sense and is a step away from the risks and abuses we have all seen.
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ, as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information-sharing agreements. In both of these cases there is no need for a trust entity, because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself, and it bypasses the risk endemic in creating yoodge pools of RAW data.
Sincerely, *John Wunderlich* *(@PrivacyCDN)*
Privacist & PbD Ambassador <http://privacybydesign.ca>
On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think: a simple SQL statement) into a smart contract (think: Ethereum).
The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g. with inputs such as the public keys of the querier and the data repository, payments, etc.).
So this is pointing towards a future model for data markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).
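A minimal sketch of such a parameterized call, assuming invented names (`AlgorithmContract`, `invoke`, the vetted-query set) rather than any real Ethereum API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the parameterized algorithm-smart-contract
# call; every name and field here is invented for illustration.

@dataclass
class AlgorithmContract:
    algorithm_id: str    # identifier of the vetted query
    querier_pubkey: str  # public key of the caller (querier)
    repo_pubkey: str     # public key of the data repository
    payment: int         # payment attached to the call

# Queries the consortium has vetted (illustrative)
VETTED_ALGORITHMS = {"avg-income-cambridge"}

def invoke(contract: AlgorithmContract, min_payment: int = 1000) -> bool:
    """Accept the call only if the algorithm is vetted and payment suffices."""
    return (contract.algorithm_id in VETTED_ALGORITHMS
            and contract.payment >= min_payment)

call = AlgorithmContract("avg-income-cambridge", "0xQuerier", "0xRepo", 5000)
print(invoke(call))  # True
```

An unvetted algorithm identifier, or an underfunded call, would be rejected before anything runs against the repository.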
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________
From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com]
Sent: Sunday, June 04, 2017 9:31 AM
To: Adrian Gropper
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
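The hash-referencing pattern in the demo can be sketched as below; this is an illustrative hash-linked record, not CommonAccord's actual format:

```python
import hashlib
import json

# Illustrative sketch: each step in writing/clearing a check is a
# record that references earlier records, and the bits of automation
# that define it, by their hash.

def record_hash(record: dict) -> str:
    """Canonical SHA-256 hash of a record, used as its reference."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

write_check = {"step": "write-check", "amount": 100, "refs": []}
accept = {"step": "accept", "refs": [record_hash(write_check)]}

# Tampering with the earlier record breaks the reference:
tampered = dict(write_check, amount=999)
print(accept["refs"][0] == record_hash(write_check))  # True
print(accept["refs"][0] == record_hash(tampered))     # False
```

Curated bits of automation could be referenced the same way, so a standards body's validation attaches to a specific hash rather than to a mutable file.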
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote: Trust Farmer, what a great term!
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high-value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most :)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute the average income of people living in Cambridge MA"). I send you the SQL statement, you compute it in your back-end data repo (behind your firewalls), and then you return the result to me.
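This send-the-query-to-the-data flow can be sketched with an in-memory database; the schema and figures are invented for illustration:

```python
import sqlite3

# Sketch of "send the algorithm to the data": the SQL runs inside the
# provider's repository, and only the aggregate crosses the firewall.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE residents (city TEXT, income REAL)")
conn.executemany(
    "INSERT INTO residents VALUES (?, ?)",
    [("Cambridge MA", 60000.0), ("Cambridge MA", 80000.0), ("Boston MA", 70000.0)],
)

# A vetted query a consortium might approve:
query = "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'"
(result,) = conn.execute(query).fetchone()
print(result)  # 70000.0, the only value that leaves the repository
```

The querier never sees the rows, only the answer the vetted query was designed to give.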
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data repository also needs a recipe to prove the user had given consent.
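One hedged sketch of such a consent record and the repository's verification recipe, with an HMAC standing in for a real signature scheme; the field names are illustrative, not the Consent Receipt specification:

```python
import hashlib
import hmac
import json

# Alice signs "algorithm A may run on data-repo-X" and the repository
# keeps the signature as evidence of consent. All names are invented.

ALICE_KEY = b"alice-secret-key"  # stands in for Alice's signing key

consent = {"subject": "alice", "algorithm": "A", "repo": "data-repo-X"}
payload = json.dumps(consent, sort_keys=True).encode()
signature = hmac.new(ALICE_KEY, payload, hashlib.sha256).hexdigest()

def repo_verify(record: dict, sig: str, key: bytes) -> bool:
    """Recompute the MAC over the record and compare in constant time."""
    expected = hmac.new(key, json.dumps(record, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(repo_verify(consent, signature, ALICE_KEY))  # True
```

Changing any field (say, substituting algorithm B) invalidates the signature, so the repository can show exactly what was consented to.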
/thomas/
-- Adrian Gropper MD PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.

To Mark's comment regarding machine-readable privacy policies: I did develop the high-level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts). A user of the system would define his privacy preferences in a template. Using Information Extraction (a form of AI), the system then reviewed the privacy policy and interpreted its meaning vis-a-vis each user preference. It then reported the disparities. The user was not bound to enforce his preferences. The idea was to begin letting people know how egregious many of the policies they agree to without reading really are, as a first step toward creating a competitive market for such policies. Knowing that one site has a policy preferable to a similar site's might begin to drive firms to create more appealing policies. But as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve. To Scott's comment about preventing the alteration of contract terms after they are agreed to: one possibility is to add the agreed-to contract to a blockchain. In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism built into the blockchain. Jeff --------------------------------- Jeff Stollman stollman.j@gmail.com +1 202.683.8699 Truth never triumphs — its opponents just die out. Science advances one funeral at a time. Max Planck On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca> wrote:
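The disparity report could be sketched as below; in the real design the policy terms came from Information Extraction, whereas here both sides are hard-coded and every name is hypothetical:

```python
# Hand-written sketch of the preference-vs-policy disparity report.

preferences = {
    "sell_data": False,        # the user's template
    "retention_days": 90,
    "third_party_sharing": False,
}
extracted_policy = {
    "sell_data": True,         # what the site's policy actually says
    "retention_days": 365,
    "third_party_sharing": False,
}

def disparities(prefs: dict, policy: dict) -> dict:
    """Report each term where the policy diverges from the preference."""
    return {k: {"preferred": v, "policy": policy.get(k)}
            for k, v in prefs.items() if policy.get(k) != v}

report = disparities(preferences, extracted_policy)
print(sorted(report))  # ['retention_days', 'sell_data']
```

Even without enforcement, such a report makes the gap between what people think they agreed to and what the policy says visible, which is the precondition for the competitive market Jeff describes.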

Could such a thing (or things) be located at Customer Commons as one arrow (or set of arrows) in a quiver of tools at the individual’s disposal? Doc
On Jun 4, 2017, at 4:36 PM, j stollman <stollman.j@gmail.com> wrote:
To Mark's comment regarding machine readable privacy policies, I did develop the high level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts). A user of the system would define his privacy preferences in a template. Using Information Extraction (a form of AI), the system then reviewed the Privacy Policy and interpreted its meaning vis a vis each user preference. It then reported the disparities. The user was not bound to enforce his preferences. The idea was to begin letting people know how egregious are many of the policies that they agree to without reading as a first step in trying to create a competitive market for such policies. Knowing the one site has a policy preferable to a similar site might begin to drive firms to create more appealing policies. But, as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve.
To Scott's comment about preventing the alteration of contract terms after they are agreed to, one possibility is to add the agreed-to contract to a blockchain. In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism builtin to the blockchain.
Jeff
--------------------------------- Jeff Stollman stollman.j@gmail.com <mailto:stollman.j@gmail.com> +1 202.683.8699 <mailto:stollman.j@gmail.com>
Truth never triumphs — its opponents just die out. Science advances one funeral at a time. Max Planck
On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca <mailto:john@wunderlich.ca>> wrote: Thomas;
It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy protective systems on top of a non-privacy protective infrastructure this makes sense and is a step away from the risks and abuses we have all seen.
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information sharing agreements. In both of these cases there is no need of a trust entity because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself and it bypasses the risk endemic in creating yoodge pools of RAW data.
Sincerely, John Wunderlich (@PrivacyCDN)
<PastedGraphic-4.tiff> <http://privacybydesign.ca/> <http://privacybydesign.ca/>
<http://privacybydesign.ca/>Privacist & PbD Ambassador <http://privacybydesign.ca/>
On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu>> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethreum).
The algorithm-smart-contract is triggered by the caller (querier) and it has to be parameterized (e.g. input the public keys of the querier and the data-repository; payments, etc).
So this is pointing towards a future model for data-markets, where these algorithm-smart-contracts are available on many node of the P2P network, and anyone can use them (with payment of course).
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________ From: wg-uma-bounces@kantarainitiative.org <mailto:wg-uma-bounces@kantarainitiative.org> [wg-uma-bounces@kantarainitiative.org <mailto:wg-uma-bounces@kantarainitiative.org>] on behalf of James Hazard [james.g.hazard@gmail.com <mailto:james.g.hazard@gmail.com>] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org>; eve.maler@forgerock.com <mailto:eve.maler@forgerock.com>; hardjono@media.mit.edu <mailto:hardjono@media.mit.edu> Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md <http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md> (Click on "Source" and follow links.)
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com <mailto:agropper@healthurl.com><mailto:agropper@healthurl.com <mailto:agropper@healthurl.com>>> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com <mailto:mark@openconsent.com><mailto:mark@openconsent.com <mailto:mark@openconsent.com>>> wrote: Trust Farmer, what a great term !
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>>> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA"). I send you the SQL statement, then you compute it in your back-end data repo (behind you firewalls), and then return the result to me.
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, the could collectively come-up with say 20 of these SQL queries (vetted of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (i.e. Banks) want to play in the identity space by monetizing their data then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data-repository also needs a recipe to prove the use had given consent.
/thomas/
________________________________________ From: Mark Lizar [mark@openconsent.com <mailto:mark@openconsent.com><mailto:mark@openconsent.com <mailto:mark@openconsent.com>>] Sent: Saturday, June 03, 2017 4:09 PM To: Thomas Hardjono Cc: wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org><mailto:wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org>>; eve.maler@forgerock.com <mailto:eve.maler@forgerock.com><mailto:eve.maler@forgerock.com <mailto:eve.maler@forgerock.com>>; hardjono@media.mit.edu <mailto:hardjono@media.mit.edu><mailto:hardjono@media.mit.edu <mailto:hardjono@media.mit.edu>> Subject: Re: New paper on Identity, Data and next-gen Federation
Hi Thomas,
You made quick work of this wickedly hard problem :-). (To run with this a bit)
This looks a lot like the consent to authorise pattern we have been discussing. Which I would define as the :
1. - purpose specification 2. - to consent permission scopes 3. - to privacy policy model clauses 4. - to UMA 5. - to contract policy clauses 6. - to permission 7. - to user control
To make this sort of thing happen I have been working on the premise that a machine readable privacy policy, is configured with a purpose category that is defined with preference scopes (or a consent type that defined scopes and maybe also preference options) which then are associated with model privacy policy clauses.
This then boils down into a consent to authorise privacy policy scope profile for UMA access, which would then be used to defines the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
At which point, the data subject could bring to the party their own license, which provides the model clauses, which match the aforementioned policies and defines how the preferences are set and managed.
The whole policy model will link with the permission scopes and preferences to basically sort out all the old school policy issues that are gumming up the works currently.
With the above framework in place,
The algorithms could be defined by the purpose category (i.e. industry) configured by the consent to authorise profile, and then controlled by the individual with model clauses that delegate to trusted third party applications. This provides the higher order transparency and accountability needed - or perhaps ethics - which the user is ultimately the master controller of via a data services provider.
It is conceivable that the user could bring their own algorithims, or have algorithims that police algorithims which is reminiscent of the original cop monkey pattern (if I am not mistaken)
- Mark
On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>>>> wrote:
Eve, Mark, UMA folks,
This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data".
Its permanent link is here:
http://arxiv.org/abs/1705.10880
I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipts flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).
Best.
/thomas/ <open-algorithms-identity-federation-1705.10880.pdf>
_______________________________________________ WG-UMA mailing list WG-UMA@kantarainitiative.org http://kantarainitiative.org/mailman/listinfo/wg-uma
--
Adrian Gropper MD
PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.

There seem to be a number of threads spawned, with apologies, this misses some of the conversation.

If we handle smart contracts wisely, as Ricardian "code", then "smart contracts" can be quite dumb, dumb enough to be intelligible. Really just granular, open source versions of the kind of code that currently animates interactions on proprietary websites at banks and web merchants. Schedule a payment, confirm receipt, etc. Since most interactions follow common patterns, these bits of code can probably be reused a lot. For instance, notification interactions can be used across a broad range of transaction types. Same with payments and I presume with access control.

Perhaps these shouldn't be called smart contracts, indeed the phrase should be totally banned because even the "self-executing" ones aren't contracts. We could call them "dry code" in distinction to "wet code," or "code" versus "prose" in the Ricardian vocabulary.

There is, however, a substantial community that insists on "smart contract."

Similarly, P2P means the relationships rather than a technology. Email is a P2P technology because everyone has the same data model. Most won't self-host, but they could, they can take their data with them, and the vendor lock is light-weight. The point is that a common format and semantics greatly reduces the power of hubs. The usual open source dynamic.

With respect to leverage, I don't think we need to rely on individuals to assert a P2P model.

The GDPR and PSD2 both work against hubs. As far as I can see, the way that groups in Europe, even whole countries, will bring control and data close to home is with a P2P data model. PDSs.

Further to Jeff's suggestion, if the policies are assembled from prose with known provenance and presented in structured form, then machine readability is easy.
That also encourages codification in the legal sense, and makes it easier to get the full prose supply chain involved in collaboration - legislator, regulator, trade group, company, citizen. A thought-piece in which the GDPR is refactored to make a faux privacy policy:

http://source.commonaccord.org/index.php?action=source&file=G/EU-GDPR-Law-CmA/Demo/Acme_UK.md

On Sun, Jun 4, 2017 at 2:34 PM, Doc Searls <dsearls@cyber.law.harvard.edu> wrote:
Could such a thing (or things) be located at Customer Commons as one arrow (or set of arrows) in a quiver of tools at the individual’s disposal?
Doc
On Jun 4, 2017, at 4:36 PM, j stollman <stollman.j@gmail.com> wrote:
To Mark's comment regarding machine-readable privacy policies, I did develop the high-level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts). A user of the system would define his privacy preferences in a template. Using Information Extraction (a form of AI), the system then reviewed the Privacy Policy and interpreted its meaning vis-à-vis each user preference. It then reported the disparities. The user was not bound to enforce his preferences. The idea was to begin letting people know how egregious many of the policies that they agree to without reading are, as a first step in trying to create a competitive market for such policies. Knowing that one site has a policy preferable to a similar site's might begin to drive firms to create more appealing policies. But as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve.
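As a toy illustration of the comparison step in a system like the one Jeff describes (the real design used information extraction; here the policy's terms are assumed to be already extracted into key/value form, and the field names are invented):

```python
# Toy sketch of the disparity report: compare a user's preference
# template against terms already extracted from a privacy policy.
# The field names ("sells_data", "retention_days") are invented.

def disparity_report(preferences: dict, policy_terms: dict) -> dict:
    """Return each preference the policy fails to satisfy."""
    return {
        field: {"wanted": wanted, "policy": policy_terms.get(field, "unspecified")}
        for field, wanted in preferences.items()
        if policy_terms.get(field) != wanted
    }

prefs = {"sells_data": False, "retention_days": 30}
policy = {"sells_data": True, "retention_days": 30}
print(disparity_report(prefs, policy))
```

The hard part, of course, is producing `policy_terms` from free-text legalese; the reporting step itself is trivial once that is done.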
To Scott's comment about preventing the alteration of contract terms after they are agreed to, one possibility is to add the agreed-to contract to a blockchain. In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism built into the blockchain.
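The tamper-evidence part doesn't even need a full chain to illustrate; anchoring a digest of the agreed text is the core mechanism (a sketch only, not a real blockchain client):

```python
import hashlib

def anchor(contract_text: str) -> str:
    """Hash the agreed contract; in practice this digest would be written
    to a blockchain so later alterations are detectable."""
    return hashlib.sha256(contract_text.encode("utf-8")).hexdigest()

agreed = "Acme may process email for billing only."
digest = anchor(agreed)

# Later: any silent edit changes the digest, so the alteration is outed.
altered = agreed.replace("billing only", "billing and marketing")
print(anchor(altered) == digest)   # False: the recorded anchor no longer matches
```

A mutually agreed amendment would simply be anchored as a new record referencing the old digest.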
Jeff
--------------------------------- Jeff Stollman stollman.j@gmail.com +1 202.683.8699
Truth never triumphs — its opponents just die out. Science advances one funeral at a time. Max Planck
On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca> wrote:
Thomas;
It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy protective systems on top of a non-privacy protective infrastructure this makes sense and is a step away from the risks and abuses we have all seen.
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information sharing agreements. In both of these cases there is no need of a trust entity because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself and it bypasses the risk endemic in creating yoodge pools of RAW data.
Sincerely,
John Wunderlich (@PrivacyCDN)
Privacist & PbD Ambassador, http://privacybydesign.ca/
On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think: a simple SQL statement) into a smart contract (think: Ethereum).
The algorithm smart contract is triggered by the caller (querier) and has to be parameterized (e.g. with the public keys of the querier and the data repository, payment details, etc.).
So this is pointing towards a future model for data markets, where these algorithm smart contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________ From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote:
Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote:
Trust Farmer, what a great term!
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute the average income of people living in Cambridge MA"). I send you the SQL statement, you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
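Concretely, with a toy SQLite table standing in for the back-end repo, the exchange looks like this: the querier sends only the vetted statement, and only the aggregate comes back (the table and data are illustrative, not from the paper):

```python
import sqlite3

# Toy stand-in for a Data Provider's back-end repo. The raw rows never
# leave; only the aggregate answer to a vetted query does.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE residents (name TEXT, city TEXT, income REAL)")
db.executemany("INSERT INTO residents VALUES (?, ?, ?)", [
    ("alice", "Cambridge MA", 90000.0),
    ("bob",   "Cambridge MA", 70000.0),
    ("carol", "Somerville MA", 50000.0),
])

# The "algorithm" the querier sends; the provider runs it behind its firewall.
VETTED_QUERY = "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'"

(result,) = db.execute(VETTED_QUERY).fetchone()
print(result)   # 80000.0 -- the answer crosses the boundary, the rows do not
```

The vetting step matters: an unvetted `SELECT *` would defeat the whole point, which is why the consortium agrees the query list in advance.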
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
The point of the paper is that the barrier to sharing data (raw data) is becoming impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data repository also needs a recipe to prove the user had given consent.
/thomas/
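One minimal shape for the proof Thomas mentions is a signed consent assertion that the repository stores and can verify later. In this sketch, HMAC with a shared key stands in for whatever signature scheme a real Consent Receipt profile would use; all names are illustrative:

```python
import hashlib
import hmac
import json

# Sketch: the user's agent signs a consent assertion; the data repository
# stores it as proof that consent was given for algorithm A on data-repo-X.
# HMAC with a shared demo key stands in for a real signature scheme.

USER_KEY = b"alice-demo-key"   # illustration only

def sign_consent(assertion: dict) -> str:
    payload = json.dumps(assertion, sort_keys=True).encode("utf-8")
    return hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()

def verify_consent(assertion: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_consent(assertion), tag)

consented = {"subject": "alice", "algorithm": "A", "repo": "data-repo-X"}
tag = sign_consent(consented)
print(verify_consent(consented, tag))                        # True
print(verify_consent({**consented, "algorithm": "B"}, tag))  # False: no consent for B
```

Binding the algorithm identifier into the signed assertion is what makes "consent for algorithm A, not B" checkable after the fact.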
-- @commonaccord

Scroll down.
On Jun 4, 2017, at 5:45 PM, James Hazard <james.g.hazard@gmail.com> wrote:
There seem to be a number of threads spawned, with apologies, this misses some of the conversation.
And forgive me for spawning one of those.
If we handle smart contracts wisely, as Ricardian "code", then "smart contracts" can be quite dumb, dumb enough to be intelligible. Really just granular, open source versions of the kind of code that currently animates interactions on proprietary websites at banks and web merchants. Schedule a payment, confirm receipt, etc. Since most interactions follow common patterns, these bits of code can probably be reused a lot. For instance, notification interactions can be used across a broad range of transaction types. Same with payments and I presume with access control.
Perhaps these shouldn't be called smart contracts, indeed the phrase should be totally banned because even the "self-executing" ones aren't contracts. We could call them "dry code" in distinction to "wet code," or "code" versus "prose" in the Ricardian vocabulary.
Yes.
There is, however, a substantial community that insists on "smart contract.”
Yes as well.
Similarly, P2P means the relationships rather than a technology. Email is a P2P technology because everyone has the same data model. Most won't self-host, but they could, they can take their data with them, and the vendor lock is light-weight. The point is that a common format and semantics greatly reduces the power of hubs. The usual open source dynamic.
Yes. We want more of that.
With respect to leverage, I don't think we need to rely on individuals to assert a P2P model.
If individuals are not relied on, we lose something. I suspect we lose a lot more than we already have. The individual (I am avoiding the word "user" or "consumer" here) needs to experience agency. There is, alas, a paucity of tools and services that give the individual agency. How can we relieve at least some of that? Protective policy, good as it is to have, also tends to make a power asymmetry official. We need to do more than rely on that.
The GDPR and PSD2 both work against hubs. As far as I can see, the way that groups in Europe, even whole countries, will bring control and data close to home is with a P2P data model. PDS.
I hope home here is individual agency.
Further to Jeff's suggestion, if the policies are assembled from prose with known provenance and presented in structured form, then machine readability is easy.
Whose machines are we talking about here (and forgive me for not knowing)? The individual's, or the institution's?
That also encourages codification in the legal sense, and makes it easier to get the full prose supply chain involved in collaboration - legislator, regulator, trade group, company, citizen. A thought-piece in which the GDPR is refactored to make a faux privacy policy.
How about a real privacy contract, proffered by the individual as a first party, to which the institution agrees as a second party? Here is some of what I've written about this, in reverse chronological order:

http://bit.ly/cstmrs1st
http://bit.ly/1stprtytrms
http://j.mp/cstledoc
http://j.mp/n0stalkng
http://bit.ly/dbg1ln
http://j.mp/adranch

All those are about what we're doing with Customer Commons. And we're looking for help with it. Maybe the original paper (Open Algorithms for Identity Federation, https://arxiv.org/pdf/1705.10880v1.pdf) can do some of that. I haven't dug into it very far yet. I believe UMA does, though I'm still not sure. As for—
http://source.commonaccord.org/index.php?action=source&file=G/EU-GDPR-Law-CmA/Demo/Acme_UK.md
—I’m wondering if this kind of thing can go in an http header that nods its head toward the GDPR and also points to first party terms in Customer Commons that will be agreeable to sites that wish to be GDPR compliant—and will be, because they agree as second parties to the individual’s terms, and that agreement is recorded in some way (e.g. jlinc). Doc
On Sun, Jun 4, 2017 at 2:34 PM, Doc Searls <dsearls@cyber.law.harvard.edu <mailto:dsearls@cyber.law.harvard.edu>> wrote: Could such a thing (or things) be located at Customer Commons as one arrow (or set of arrows) in a quiver of tools at the individual’s disposal?
Doc
On Jun 4, 2017, at 4:36 PM, j stollman <stollman.j@gmail.com <mailto:stollman.j@gmail.com>> wrote:
To Mark's comment regarding machine readable privacy policies, I did develop the high level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts). A user of the system would define his privacy preferences in a template. Using Information Extraction (a form of AI), the system then reviewed the Privacy Policy and interpreted its meaning vis a vis each user preference. It then reported the disparities. The user was not bound to enforce his preferences. The idea was to begin letting people know how egregious are many of the policies that they agree to without reading as a first step in trying to create a competitive market for such policies. Knowing the one site has a policy preferable to a similar site might begin to drive firms to create more appealing policies. But, as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve.
To Scott's comment about preventing the alteration of contract terms after they are agreed to, one possibility is to add the agreed-to contract to a blockchain. In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism builtin to the blockchain.
Jeff
--------------------------------- Jeff Stollman stollman.j@gmail.com <mailto:stollman.j@gmail.com> +1 202.683.8699 <tel:(202)%20683-8699> <mailto:stollman.j@gmail.com>
Truth never triumphs — its opponents just die out. Science advances one funeral at a time. Max Planck
On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca <mailto:john@wunderlich.ca>> wrote: Thomas;
It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy protective systems on top of a non-privacy protective infrastructure this makes sense and is a step away from the risks and abuses we have all seen.
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information sharing agreements. In both of these cases there is no need of a trust entity because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself and it bypasses the risk endemic in creating yoodge pools of RAW data.
Sincerely, John Wunderlich (@PrivacyCDN)
<PastedGraphic-4.tiff> <http://privacybydesign.ca/> <http://privacybydesign.ca/>
<http://privacybydesign.ca/>Privacist & PbD Ambassador <http://privacybydesign.ca/>
On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu>> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethreum).
The algorithm-smart-contract is triggered by the caller (querier) and it has to be parameterized (e.g. input the public keys of the querier and the data-repository; payments, etc).
So this is pointing towards a future model for data-markets, where these algorithm-smart-contracts are available on many node of the P2P network, and anyone can use them (with payment of course).
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________ From: wg-uma-bounces@kantarainitiative.org <mailto:wg-uma-bounces@kantarainitiative.org> [wg-uma-bounces@kantarainitiative.org <mailto:wg-uma-bounces@kantarainitiative.org>] on behalf of James Hazard [james.g.hazard@gmail.com <mailto:james.g.hazard@gmail.com>] Sent: Sunday, June 04, 2017 9:31 AM To: Adrian Gropper Cc: wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org>; eve.maler@forgerock.com <mailto:eve.maler@forgerock.com>; hardjono@media.mit.edu <mailto:hardjono@media.mit.edu> Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md <http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md> (Click on "Source" and follow links.)
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com <mailto:agropper@healthurl.com><mailto:agropper@healthurl.com <mailto:agropper@healthurl.com>>> wrote: Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com <mailto:mark@openconsent.com><mailto:mark@openconsent.com <mailto:mark@openconsent.com>>> wrote: Trust Farmer, what a great term !
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>>> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA"). I send you the SQL statement, then you compute it in your back-end data repo (behind you firewalls), and then return the result to me.
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, the could collectively come-up with say 20 of these SQL queries (vetted of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (i.e. Banks) want to play in the identity space by monetizing their data then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data-repository also needs a recipe to prove the use had given consent.
/thomas/
________________________________________ From: Mark Lizar [mark@openconsent.com <mailto:mark@openconsent.com><mailto:mark@openconsent.com <mailto:mark@openconsent.com>>] Sent: Saturday, June 03, 2017 4:09 PM To: Thomas Hardjono Cc: wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org><mailto:wg-uma@kantarainitiative.org <mailto:wg-uma@kantarainitiative.org>>; eve.maler@forgerock.com <mailto:eve.maler@forgerock.com><mailto:eve.maler@forgerock.com <mailto:eve.maler@forgerock.com>>; hardjono@media.mit.edu <mailto:hardjono@media.mit.edu><mailto:hardjono@media.mit.edu <mailto:hardjono@media.mit.edu>> Subject: Re: New paper on Identity, Data and next-gen Federation
Hi Thomas,
You made quick work of this wickedly hard problem :-). (To run with this a bit)
This looks a lot like the consent to authorise pattern we have been discussing. Which I would define as the :
1. - purpose specification 2. - to consent permission scopes 3. - to privacy policy model clauses 4. - to UMA 5. - to contract policy clauses 6. - to permission 7. - to user control
To make this sort of thing happen I have been working on the premise that a machine readable privacy policy, is configured with a purpose category that is defined with preference scopes (or a consent type that defined scopes and maybe also preference options) which then are associated with model privacy policy clauses.
This then boils down into a consent to authorise privacy policy scope profile for UMA access, which would then be used to defines the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
At which point, the data subject could bring to the party their own license, which provides the model clauses, which match the aforementioned policies and defines how the preferences are set and managed.
The whole policy model will link with the permission scopes and preferences to basically sort out all the old school policy issues that are gumming up the works currently.
With the above framework in place,
The algorithms could be defined by the purpose category (i.e. industry) configured by the consent to authorise profile, and then controlled by the individual with model clauses that delegate to trusted third party applications. This provides the higher order transparency and accountability needed - or perhaps ethics - which the user is ultimately the master controller of via a data services provider.
It is conceivable that the user could bring their own algorithims, or have algorithims that police algorithims which is reminiscent of the original cop monkey pattern (if I am not mistaken)
- Mark
On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu><mailto:hardjono@mit.edu <mailto:hardjono@mit.edu>>>> wrote:
Eve, Mark, UMA folks,
This new paper (PDF) might be of some use in framing-up the next level of discussions regarding "identity" and "data" and how to"federate data".
Its permanent link is here:
http://arxiv.org/abs/1705.10880 <http://arxiv.org/abs/1705.10880>
I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipts flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithms X be run against her data set Y" (where the data-set lies in her private Resource Server).
Best.
/thomas/ <open-algorithms-identity-federation-1705.10880.pdf>
_______________________________________________ WG-UMA mailing list WG-UMA@kantarainitiative.org http://kantarainitiative.org/mailman/listinfo/wg-uma
--
Adrian Gropper MD
PROTECT YOUR FUTURE - RESTORE Health Privacy! HELP us fight for the right to control personal health data.
-- @commonaccord
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the system manager. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.

Scroll on ... On Sun, Jun 4, 2017 at 3:15 PM, Doc Searls <dsearls@cyber.law.harvard.edu> wrote:
Scroll down.
On Jun 4, 2017, at 5:45 PM, James Hazard <james.g.hazard@gmail.com> wrote:
There seem to be a number of threads spawned, with apologies, this misses some of the conversation.
And forgive me for spawning one of those.
If we handle smart contracts wisely, as Ricardian "code", then "smart contracts" can be quite dumb, dumb enough to be intelligible. Really just granular, open source versions of the kind of code that currently animates interactions on proprietary websites at banks and web merchants. Schedule a payment, confirm receipt, etc. Since most interactions follow common patterns, these bits of code can probably be reused a lot. For instance, notification interactions can be used across a broad range of transaction types. Same with payments and I presume with access control.
Perhaps these shouldn't be called smart contracts, indeed the phrase should be totally banned because even the "self-executing" ones aren't contracts. We could call them "dry code" in distinction to "wet code," or "code" versus "prose" in the Ricardian vocabulary.
Yes.
There is, however, a substantial community that insists on "smart contract."
Yes as well.
Similarly, P2P means the relationships rather than a technology. Email is a P2P technology because everyone has the same data model. Most won't self-host, but they could, they can take their data with them, and the vendor lock is light-weight. The point is that a common format and semantics greatly reduces the power of hubs. The usual open source dynamic.
Yes. We want more of that.
With respect to leverage, I don't think we need to rely on individuals to assert a P2P model.
If individuals are not relied on, we lose something. I suspect we lose a lot more than we already have.
The individual (I am avoiding the word “user” or “consumer” here) needs to experience agency. There is, alas, a paucity of tools and services that give the individual agency. How can we relieve at least some of that?
Protective policy, good as it is to have, also tends to make a power asymmetry official. We need to do more than rely on that.
My point is that a P2P model gets asserted and implemented. The driving forces include GDPR and PSD2 as well as blockchains. A real P2P model permeates all layers, including the individual.
The GDPR and PSD2 both work against hubs. As far as I can see, the way that groups in Europe, even whole countries, will bring control and data close to home is with a P2P data model. PDS.
I hope home here is individual agency. "Home" is relative to the person. For a country it would be something like its borders. For an individual, it would be where the individual wants the data. When their thermostat negotiates with their furnace, presumably the home is the castle. When ordering pizza, the data home should not be much further than the pizza shop or the two parties' hosts.
Further to Jeff's suggestion, if the policies are assembled from prose with known provenance and presented in structured form, then machine readability is easy.
Whose machines are we talking about here (and forgive me for not knowing)? The individual’s, or the institution’s?
The analytics should be able to run on any node (PDS). All counter-parties on equal footing.
That also encourages codification in the legal sense, and makes it easier to get the full prose supply chain involved in collaboration - legislator, regulator, trade group, company, citizen. A thought-piece in which the GDPR is refactored to make a faux privacy policy.
How about a real privacy contract, proffered by the individual as a first party, to which the institution agrees as a second party?
Not quite sure what you mean here. The point is not who "offers" the policy, but what it says. Just as INCO Terms mean the same thing whether initiated by the shipper or the receiver. If the institution "offers" a policy that has a clear provenance the individual can check, it has the same legal effect as if the individual selected the policy and offered it to the institution. The essential is the common sourcing and transparent provenance.
Here is some of what I’ve written about this, in reverse chronological order:
http://bit.ly/cstmrs1st http://bit.ly/1stprtytrms http://j.mp/cstledoc http://j.mp/n0stalkng http://bit.ly/dbg1ln http://j.mp/adranch
All those are about what we’re doing with Customer Commons. And we’re looking for help with it. Maybe the original paper (Open Algorithms for Identity Federation <https://arxiv.org/pdf/1705.10880v1.pdf>) can do some of that. I haven’t dug into it very far yet. I believe UMA does, though I’m still not sure.
As for—
http://source.commonaccord.org/index.php?action=source&file=G/EU-GDPR-Law-CmA/Demo/Acme_UK.md
—I’m wondering if this kind of thing can go in an http header that nods its head toward the GDPR and also points to first party terms in Customer Commons that will be agreeable to sites that wish to be GDPR compliant—and will be, because they agree as second parties to the individual’s terms, and that agreement is recorded in some way (e.g. jlinc).
A more direct approach might be for Customer Commons to organize recommended terms as prose objects, create a number of endpoints for various situations and invite people to use and improve. Like Creative Commons, but iterative. Victor Grey of jlinc and I happen to have an appointment tomorrow!
Doc
On Sun, Jun 4, 2017 at 2:34 PM, Doc Searls <dsearls@cyber.law.harvard.edu> wrote:
Could such a thing (or things) be located at Customer Commons as one arrow (or set of arrows) in a quiver of tools at the individual’s disposal?
Doc
On Jun 4, 2017, at 4:36 PM, j stollman <stollman.j@gmail.com> wrote:
To Mark's comment regarding machine-readable privacy policies, I did develop the high-level design for a system to analyze privacy policies (as well as Terms of Service and other "standard" contracts). A user of the system would define his privacy preferences in a template. Using Information Extraction (a form of AI), the system then reviewed the Privacy Policy and interpreted its meaning vis-a-vis each user preference. It then reported the disparities. The user was not bound to enforce his preferences. The idea was to begin letting people know how egregious many of the policies they agree to without reading are, as a first step in trying to create a competitive market for such policies. Knowing that one site has a policy preferable to a similar site's might begin to drive firms to create more appealing policies. But as long as we remain in the dark about what we are signing up for, there is limited incentive for sites to improve.
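(Editorially, Jeff's disparity report could be sketched roughly as follows. The real design used Information Extraction to interpret policy prose; the simple dictionary lookup, field names, and sample policies below are all invented stand-ins.)

```python
# Hypothetical sketch of the disparity report: compare a user's privacy
# preference template against clauses extracted from a site's policy.
# The real system used Information Extraction; a naive lookup stands in here.

def report_disparities(preferences, policy_clauses):
    """Return (topic, wanted, stated) for each preference the policy misses."""
    disparities = []
    for topic, wanted in preferences.items():
        stated = policy_clauses.get(topic)  # None if the policy is silent
        if stated != wanted:
            disparities.append((topic, wanted, stated))
    return disparities

user_prefs = {"third_party_sharing": "never", "retention": "1 year"}
extracted = {"third_party_sharing": "with partners", "ads": "targeted"}

for topic, wanted, stated in report_disparities(user_prefs, extracted):
    print(f"{topic}: you want {wanted!r}, policy says {stated!r}")
```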
To Scott's comment about preventing the alteration of contract terms after they are agreed to, one possibility is to add the agreed-to contract to a blockchain. In this way, any alteration other than a mutually agreed-to amendment would be outed by the consensus mechanism built into the blockchain.
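(A minimal sketch of that tamper-evidence idea, assuming only that a digest of the agreed contract is anchored somewhere immutable; the consensus machinery itself is out of scope here.)

```python
# Sketch: record a hash of the agreed-to contract (e.g., on a blockchain);
# any later alteration of the text no longer matches the anchored digest.
import hashlib

def contract_digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

agreed = "Alice consents to Algorithm X being run against data set Y."
anchored = contract_digest(agreed)  # value that would be published to the chain

# Later verification: an unnoticed edit fails the check.
altered = agreed.replace("Algorithm X", "Algorithm Z")
assert contract_digest(agreed) == anchored
assert contract_digest(altered) != anchored
```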
Jeff
--------------------------------- Jeff Stollman stollman.j@gmail.com +1 202.683.8699
Truth never triumphs — its opponents just die out. Science advances one funeral at a time. Max Planck
On Sun, Jun 4, 2017 at 2:15 PM, John Wunderlich <john@wunderlich.ca> wrote:
Thomas;
It seems that part of the conceptual underpinning of this is that there will be pools of what you call “RAW data” under the control of one entity or another. Presumably, given GDPR, these pools will derive authority from consent or an allowable derogation. To the extent that we want to build privacy protective systems on top of a non-privacy protective infrastructure this makes sense and is a step away from the risks and abuses we have all seen.
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data. Such algorithms could make use of individuals’ data in situ as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information sharing agreements. In both of these cases there is no need of a trust entity because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself and it bypasses the risk endemic in creating yoodge pools of RAW data.
Sincerely, *John Wunderlich* *(@PrivacyCDN)*
Privacist & PbD Ambassador <http://privacybydesign.ca/>
On Jun 4, 2017, at 10:07, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Jim,
So in the paper I purposely omitted any mention of smart-contracts (too distracting).
We have a small project on how to make the "algorithm" (think simple SQL statement) into a smart-contract (think Ethereum).
The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g. with the public keys of the querier and the data repository, payment terms, etc.).
So this is pointing towards a future model for data markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).
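(As a rough sketch, not the project's actual code, the parameterization described above might look like the following; every field name and value here is invented for illustration.)

```python
# Hypothetical shape of an "algorithm-smart-contract" invocation: the querier
# triggers a vetted algorithm, supplying both parties' public keys and the
# payment terms. All field names are illustrative stand-ins.

def build_invocation(algorithm_id, querier_pubkey, repo_pubkey, payment_wei):
    return {
        "algorithm_id": algorithm_id,           # identifies the vetted query
        "querier_pubkey": querier_pubkey,       # who asks, and who pays
        "data_repository_pubkey": repo_pubkey,  # where the algorithm executes
        "payment_wei": payment_wei,             # price for one execution
    }

call = build_invocation("opal:avg-income-cambridge", "0xQUERIER", "0xREPO", 10_000)
```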
Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
/thomas/
________________________________________
From: wg-uma-bounces@kantarainitiative.org on behalf of James Hazard [james.g.hazard@gmail.com]
Sent: Sunday, June 04, 2017 9:31 AM
To: Adrian Gropper
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
Great to see this discussion.
Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment. It shows each step as a record that references other records. Some of the other records define the meaning of a step, in both text and automation. The automation is expressed in (fake) granular bits of code, referenced by their hash.
This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.
The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs. But the principle is broadly applicable to transacting in general.
http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md (Click on "Source" and follow links.)
On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote:
Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote:
Trust Farmer, what a great term!
Use of RAW personal data is clearly a barrier for trusted service development and this makes a lot of sense.
OPAL provides an economic, high value information argument. It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage. (Which is what I like the most:)
- Mark
On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
Thanks Mark,
An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA"). I send you the SQL statement, then you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
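(A minimal sketch of that flow, using an in-memory SQLite database as a stand-in for the back-end repo; only the aggregate result crosses the firewall, never the raw rows. The table and data are invented.)

```python
# Sketch: the data holder runs the vetted SQL behind their own firewall
# and returns only the aggregate, never the underlying records.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE residents (name TEXT, city TEXT, income REAL)")
conn.executemany(
    "INSERT INTO residents VALUES (?, ?, ?)",
    [("Alice", "Cambridge MA", 90000), ("Bob", "Cambridge MA", 70000),
     ("Carol", "Somerville MA", 80000)],
)

# The query is what travels between the parties; only its result leaves.
query = "SELECT AVG(income) FROM residents WHERE city = 'Cambridge MA'"
(average,) = conn.execute(query).fetchone()
print(average)  # 80000.0
```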
Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
The point of the paper is that the barrier to sharing data (raw data) is getting impossible to overcome (think GDPR), and if data-rich institutions (e.g. banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
From the consent side, the user needs the ability to say: "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
The data repository also needs a recipe to prove the user had given consent.
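(One way to sketch such a recipe: a consent record naming the algorithm and the data repo, plus a proof the repository can later present. A real deployment would use an asymmetric signature and the Kantara Consent Receipt format; the HMAC, field names, and key below are invented stand-ins.)

```python
# Sketch: "I give consent for algorithm A to be executed on data-repo-X",
# with a MAC the data repository can keep as evidence of consent.
import hashlib, hmac, json

USER_KEY = b"alice-secret-key"  # stand-in for Alice's signing key

def issue_receipt(subject, algorithm_id, data_repo):
    receipt = {"subject": subject, "algorithm": algorithm_id, "repo": data_repo}
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["proof"] = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt):
    body = {k: v for k, v in receipt.items() if k != "proof"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["proof"], expected)

r = issue_receipt("alice", "algorithm-A", "data-repo-X")
assert verify_receipt(r)
```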
/thomas/

Hi John,
That being said, I wonder what the potentials are for pools of algorithms instead of pools of data.
Right - you hit it on the head, John :-)
Such algorithms could make use of individuals’ data in situ as it were - perhaps querying resource servers using UMA or by linking particular algorithms to dynamically negotiated information sharing agreements.
Yup again -- this is why I think UMA and Consent-Receipts play such an important role. The agreement could in fact state the algorithm(s) and the type of data (e.g. column headings) permitted to be run against.
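(A rough sketch of such an agreement and the check a resource server might make against it; the structure and names are illustrative, not taken from the UMA or Consent Receipt specs.)

```python
# Sketch: the agreement names the permitted algorithm(s) and the column
# headings they may touch; the resource server checks each request against it.

agreement = {
    "algorithms": {"avg-income"},
    "columns": {"city", "income"},  # column headings the algorithms may read
}

def is_permitted(algorithm_id, requested_columns, agreement):
    return (algorithm_id in agreement["algorithms"]
            and set(requested_columns) <= agreement["columns"])

assert is_permitted("avg-income", ["city", "income"], agreement)
assert not is_permitted("list-names", ["name"], agreement)
```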
In both of these cases there is no need of a trust entity because control of the data is retained by the individual. It’s a different category of algorithmic problem, but it does seem to me to scale in a manner similar to the Internet itself and it bypasses the risk endemic in creating yoodge pools of RAW data.
Right. The example I often use is home-owners with IoT-devices (e.g. appliances) that collect data about the household. Individually, the data-set from one home may not be so interesting. But an entire town or city would make the data super-valuable, not only for government (e.g. urban planning) but also for the private sector. /thomas/

Adrian, I interpreted Trust Farming as farming trust in organisations. (Not the other way around :-)
On 4 Jun 2017, at 13:33, Adrian Gropper <agropper@healthurl.com> wrote:
Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.
Adrian
participants (9)
- Adrian Gropper
- Andrew Hughes
- Doc Searls
- j stollman
- James Hazard
- John Wunderlich
- Mark Lizar
- sldavid
- Thomas Hardjono