Most of the below is over my head, or at least outside the bounds of my time and patience.

In case it helps, I suggest that anything called “smart” that modifies a noun (such as “contract”), substitutes machine agency for the human kind, and will not permit humans to inspect its workings (in other words, operates as a black box, either intentionally or in effect) should be off the table here. If it’s not, let me know and I’ll drop out.

If it is off the table, let me add that the need to maintain personal agency should be our pole star as we battle in the night to maintain, or advance, what agency we have left in a world where we are already all but required to operate, always as second parties, with agency diminished by design, inside giant first party silos and silo wannabes.

It may help to note that investment by giant silos, and by giant silo wannabes, in maintaining control of whole captive populations dwarfs investment in tools, such as UMA, that support personal agency.

I still think we can win, by the way. I don’t think we can do it by leveraging the rhetoric and passing obsessions (e.g. big data, AI, ML, adtech and martech) of those vested in manipulating us—even if it’s for our own good. It’s still more about selling us shit.

FWIW—and my mind is not made up on this—“data markets,” as conceived so far, are still all about selling us shit, because all the imaginings and developments I’ve seen in that direction operate only in conditions where we are buying shit, others are selling shit, or others want to make us want to buy the kind of shit other companies are paying to push at us. This bounds life inside markets, and bounds markets inside transactions. We risk losing at both while we fail to win at liberating our lives from those who only want to sell us shit.

Doc


On Jun 4, 2017, at 12:53 PM, sldavid <sldavid@uw.edu> wrote:

Hi folks - I am coming into this particular thread late, so I may be missing the initial point. Begging folks' indulgence if this is a "fork" (I hope it is not). On the prior couple of points, query whether the AI and bots "use" the smart contracts or, instead, do the smart contracts actually "compose/comprise" the distributed AI (and "good bot") system? In other words, will smart contracts be the "synapses" of AI/SI systems?

Stated otherwise, are standard smart contracts a way of hybridizing AI with SI (see below on "SI" concept), where the synthesis of multiple instantiations of the identical smart contracts across nodes (a la "distributed ledger") enables a form of suppression (self-regulation?) of errant smart contract application that might reflect intentional or accidental out-of-bounds activity on a network?  

Might standard contracts be used to establish a form of system "self" (albeit an intangible one) that can distinguish system "other" (like learned immune systems), and police (and enforce) terms to reify the "self" that reflects optimal system function (at least as optimal as programmed in the smart contracts!)?

Are P2P networks a form of "SI" (Synthetic Intelligence) that is synthesized into a form of system intelligence that is greater than the sum of its parts, and which reflects an additional potential architecture for distributed AI?

Part of our challenge in these sorts of distributed intelligence settings is, from a graph theory perspective, to get each "node" to be singing off the same song sheet.  When those nodes are people, narratives (such as from religion, politics, social norms, economic incentives, ethical constructions, etc.) are the "song sheet/programming" of the nodes.

When those nodes are commercial organizations, then economic goals (as required by their respective state laws of formation, articles, bylaws and contracts) and regulatory constraints are the "song sheet/programming" of the nodes.  

As an aside, a fundamental challenge of the Internet is that it is a shared infrastructure among groups that are singing off different song sheets.  For example, when people seek advice online about a social/emotional challenge, they are also (below their awareness) sharing the communication space (and the communication) with commercial actors which (under current TOU/TOS arrangements, etc.) are able to access that communication, glean insight into the person's behavior/motivations, and sell them some "placebo/face cream/whatever" offered to help fill the voids (both epidermic and existential, etc.), which fulfills the commercial goal of generating revenue.  Smart contracts will provide a mechanism for us to calibrate, normalize (and analyze, and improve) the balance of "information arbitrage" that is currently skewed in these shared spaces.  Those imbalances are an artifact of yesterday's power relationships and their technical and legal embodiment in the Internet.

Based on the foregoing observations/quasi-rant, it seems likely that smart contracts will find initial deployment in the simplest (lowest risk) contract settings - those with few input variables (called "conditions" in law) and applied in innocuous settings.  

For example, a useful model of these "distributed smart contract" deployments may be the old (distributed) mechanical parking meter system, where each parking meter is built/programmed to register a certain amount of time for a given payment, and is constrained in its pre-programmed flexibility.  The parking meter system, in gross, represents a form of mechanistic "neighborhood watch" - not against various crimes, but only against the specific violation of unpaid parking.  Simple, few variables, innocuous - reliable.
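The parking meter analogy can be sketched in a few lines of code.  This is a purely illustrative model (all names and the tariff are invented here), showing why a "dumb contract" with one input and one rule is so reliable:

```python
# Hypothetical sketch: a parking meter as a minimal "dumb contract".
# The meter's behavior is fixed at build time - few inputs, one rule -
# which is exactly what makes it reliable.

RATE_MINUTES_PER_COIN = 12  # assumed tariff: 12 minutes per coin


class ParkingMeter:
    def __init__(self):
        self.minutes_remaining = 0

    def insert_coin(self, coins: int) -> int:
        """The only 'condition' the contract accepts: payment in, time out."""
        if coins <= 0:
            raise ValueError("payment must be positive")
        self.minutes_remaining += coins * RATE_MINUTES_PER_COIN
        return self.minutes_remaining

    def tick(self, minutes: int) -> bool:
        """Advance time; report whether the meter has expired - the one
        violation this mechanistic 'neighborhood watch' can flag."""
        self.minutes_remaining = max(0, self.minutes_remaining - minutes)
        return self.minutes_remaining == 0


meter = ParkingMeter()
meter.insert_coin(2)      # 24 minutes purchased
expired = meter.tick(30)  # after 30 minutes elapse, the meter has expired
```

The whole "contract" is one arithmetic rule; there is nothing for an errant input to exploit, which is the property the surrounding paragraph argues makes such settings good pilots.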

Standard contracts (such as the FedEx(tm) standard shipping terms, or the standard forms of PCI-DSS conformant credit card terms) create an intrinsic distributed (albeit intangible) risk-sharing topology that is reified with the serial agreements of its participants.  In other words, every contracting party becomes an interested party in the reliability of the system when they sign up and thereafter depend on that system.  This is particularly so in executory contract settings, or those with minimal signing ceremonies (i.e., "click to accept"), where the recruitment of participants into the constraints of the agreement is made as simple as possible (so as not to distract the raw system recruit from their enthusiastic information-seeking behavior!)

All of this is intended to convey the suggestion that there are strategies that can help to address the (appropriately) perceived dangers of "smart contracts" being enforced in inappropriate settings, or despite the failure of party expectations.  Settings in which "Dumb contracts" (like mechanical parking meters, shipping agreements, etc.,) are already successfully applied at large scales may provide favorable piloting circumstances.

The constraint of the system is a source of security, since it means that a parking meter cannot, for example, be accidentally or intentionally reprogrammed to do less innocuous things.  Can smart contracts be "hardened" against such reprogramming?  Perhaps (in the alternative) the immediate "externality" of the smart contract can detect and dampen functional drift of a given smart contract?  One similar idea that we have been fussing with in some security discussions is a P2P "neighborhood watch" AMONG IoT devices.  Distributed solutions for distributed challenges.
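One way to picture that P2P "neighborhood watch" among IoT devices - strictly a hypothetical sketch, with every name invented here - is peers that enroll each neighbor's code hash and flag any reported drift from it:

```python
# Hypothetical sketch of a P2P "neighborhood watch" among devices:
# each peer records the code hash its neighbors enrolled with, and
# flags any device whose reported code drifts from that hash.
import hashlib


def code_hash(program_bytes: bytes) -> str:
    """Fingerprint a device's program so peers can compare it cheaply."""
    return hashlib.sha256(program_bytes).hexdigest()


class WatchfulPeer:
    def __init__(self, expected: dict):
        # device_id -> hash the neighborhood agreed on at enrollment
        self.expected = expected

    def check(self, device_id: str, reported_program: bytes) -> bool:
        """True if the neighbor still matches its enrolled fingerprint."""
        return self.expected.get(device_id) == code_hash(reported_program)


enrolled = {"meter-17": code_hash(b"pay->time v1")}
peer = WatchfulPeer(enrolled)
ok = peer.check("meter-17", b"pay->time v1")            # unchanged: passes
drifted = not peer.check("meter-17", b"pay->jackpot")   # reprogrammed: flagged
```

The point of the sketch is the distributed shape: no central auditor, just neighbors detecting functional drift in one another - a distributed solution for a distributed challenge, as Scott puts it.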

But I digress. . . 

Kind regards, 
Scott

Scott L. David

Director of Policy
Center for Information Assurance and Cybersecurity
University of Washington - Applied Physics Laboratory

w- 206-897-1466
m- 206-715-0859
Tw - @ScottLDavid




From: wg-uma-bounces@kantarainitiative.org <wg-uma-bounces@kantarainitiative.org> on behalf of Thomas Hardjono <hardjono@mit.edu>
Sent: Sunday, June 4, 2017 7:07 AM
To: James Hazard; Adrian Gropper
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation
 

Thanks Jim,

So in the paper I purposely omitted any mention of smart-contracts (too distracting).

We have a small project on how to turn the "algorithm" (think simple SQL statement) into a smart contract (think Ethereum).

The algorithm-smart-contract is triggered by the caller (querier) and has to be parameterized (e.g., with the public keys of the querier and the data repository, payment, etc.).

So this is pointing towards a future model for data markets, where these algorithm-smart-contracts are available on many nodes of the P2P network, and anyone can use them (with payment, of course).

Not to be too hyperbolic, but think of futuristic "AI and bots" that make use of these various algorithm-smart-contracts.
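The shape Thomas describes might be sketched as follows.  This is not Ethereum code, and every name in it (AlgorithmContract, "opal-001", the SQL text, the prices and keys) is an invented illustration of a vetted query parameterized with the querier's and repository's public keys and a payment:

```python
# Illustrative sketch of an "algorithm-smart-contract": a vetted SQL
# query that runs inside the data repository (raw data never leaves),
# triggered by a caller who supplies keys and payment.
from dataclasses import dataclass


@dataclass(frozen=True)
class AlgorithmContract:
    algorithm_id: str  # identifier of a vetted query
    sql: str           # the "algorithm" executed behind the repo's firewall
    price: int         # payment required to trigger, in some unit

    def trigger(self, querier_pubkey: str, repo_pubkey: str, payment: int) -> dict:
        """Caller parameterizes and triggers the contract; the repository
        would execute the SQL and return only the aggregate result."""
        if payment < self.price:
            raise ValueError("insufficient payment")
        return {
            "algorithm_id": self.algorithm_id,
            "querier": querier_pubkey,
            "repository": repo_pubkey,
            "query": self.sql,
        }


avg_income = AlgorithmContract(
    algorithm_id="opal-001",
    sql="SELECT AVG(income) FROM residents WHERE city = 'Cambridge, MA'",
    price=10,
)
request = avg_income.trigger("pk-alice", "pk-bank", payment=10)
```

The design choice to carry only an aggregate query, never raw rows, is what connects this to the OPAL argument later in the thread: the repository stays behind its firewall and answers, rather than shares.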

/thomas/



________________________________________
From: wg-uma-bounces@kantarainitiative.org [wg-uma-bounces@kantarainitiative.org] on behalf of James Hazard [james.g.hazard@gmail.com]
Sent: Sunday, June 04, 2017 9:31 AM
To: Adrian Gropper
Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
Subject: Re: [WG-UMA] New paper on Identity, Data and next-gen Federation

Great to see this discussion.

Some time ago, I did a demo of the sequence of events in writing and clearing a paper check - right at the boundary between a contract and a payment.  It shows each step as a record that references other records.  Some of the other records define the meaning of a step, in both text and automation.  The automation is expressed in (fake) granular bits of code, referenced by their hash.

This would allow curation of granular bits of automation ("smart contracts" in a broad sense). Those could be validated by an organization or standards body.

The demo was made with the pending EU PSD2 in mind, as a way for financial institutions to collaborate on APIs.  But the principle is broadly applicable to transacting in general.

http://www.commonaccord.org/index.php?action=doc&file=bqc/fr/bnpp/a5we/Account/Check/00001/06-Accept.md
(Click on "Source" and follow links.)
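The record-referencing pattern Jim describes - each step a record citing granular bits of automation by their hash - can be sketched in a few lines.  This is a hypothetical content-addressed store, not the CommonAccord implementation:

```python
# Hypothetical sketch: records stored and referenced by hash, so a
# step can cite the exact (fake) bit of automation it depends on, and
# an organization or standards body could curate/validate those bits.
import hashlib
import json

store = {}  # hash -> record


def put(record: dict) -> str:
    """Store a record under the SHA-256 of its canonical JSON form;
    the hash is both the record's identity and an integrity check."""
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    store[digest] = record
    return digest


# A (fake) granular bit of automation, then a step record that cites
# it by hash - the reference a validator could independently verify.
code_ref = put({"kind": "code", "body": "verify_signature(check)"})
accept_step = put({"kind": "step", "name": "06-Accept", "uses": [code_ref]})
```

Because the reference is a hash of the content, any tampering with the cited bit of automation changes its address and breaks the citation - which is what makes curation of such granular pieces tractable.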



On Sun, Jun 4, 2017 at 5:33 AM, Adrian Gropper <agropper@healthurl.com> wrote:
Please, let's avoid applying the word farming to people. The Matrix will be upon us soon enough.

Adrian

On Sun, Jun 4, 2017 at 8:24 AM, Mark Lizar <mark@openconsent.com> wrote:
Trust Farmer - what a great term!

Use of RAW personal data is clearly a barrier to trusted service development, and this makes a lot of sense.

OPAL provides an economic, high-value information argument.  It also helps to illuminate a landscape for competitive service development with personal data that people control or co-manage.  (Which is what I like the most :)

- Mark


> On 4 Jun 2017, at 02:44, Thomas Hardjono <hardjono@mit.edu> wrote:
>
>
> Thanks Mark,
>
> An easy way to illustrate the "algorithm" is to think of an SQL statement (e.g. "compute average income of people living in Cambridge MA").  I send you the SQL statement, you compute it in your back-end data repo (behind your firewalls), and then return the result to me.
>
> Assuming a community of Data Providers could get into a consortium governed by a trust farmer, they could collectively come up with, say, 20 of these SQL queries (vetted, of course).
>
> The point of the paper is that the barrier to sharing data (raw data) is becoming impossible to overcome (think GDPR), and if data-rich institutions (i.e. Banks) want to play in the identity space by monetizing their data, then OPAL provides a practical/palatable approach.
>
> From the consent side, the user needs the ability to say:  "I know my data is part of data-repo-X, and I give consent for algorithm A to be executed on data-repo-X".
>
> The data repository also needs a recipe to prove the user had given consent.
>
> /thomas/
>
>
>
>
> ________________________________________
> From: Mark Lizar [mark@openconsent.com]
> Sent: Saturday, June 03, 2017 4:09 PM
> To: Thomas Hardjono
> Cc: wg-uma@kantarainitiative.org; eve.maler@forgerock.com; hardjono@media.mit.edu
> Subject: Re: New paper on Identity, Data and next-gen Federation
>
> Hi Thomas,
>
> You made quick work of this wickedly hard problem :-). (To run with this a bit)
>
> This looks a lot like the consent-to-authorise pattern we have been discussing, which I would define as:
>
>  1. purpose specification
>  2. to consent permission scopes
>  3. to privacy policy model clauses
>  4. to UMA
>  5. to contract policy clauses
>  6. to permission
>  7. to user control
>
> To make this sort of thing happen I have been working on the premise that a machine-readable privacy policy is configured with a purpose category that is defined with preference scopes (or a consent type that defines scopes and maybe also preference options), which are then associated with model privacy policy clauses.
>
> This then boils down into a consent-to-authorise privacy policy scope profile for UMA access, which would then be used to define the permission scopes and the associated contract model clauses that enable people to manage and control their own information.
>
> At which point, the data subject could bring their own license to the party, which provides the model clauses that match the aforementioned policies and defines how the preferences are set and managed.
>
> The whole policy model will link with the permission scopes and preferences to basically sort out all the old school policy issues that are gumming up the works currently.
>
> With the above framework in place,
>
> The algorithms could be defined by the purpose category (i.e. industry), configured by the consent-to-authorise profile, and then controlled by the individual with model clauses that delegate to trusted third-party applications.  This provides the higher-order transparency and accountability needed - or perhaps ethics - of which the user is ultimately the master controller, via a data services provider.
>
> It is conceivable that the user could bring their own algorithms, or have algorithms that police algorithms, which is reminiscent of the original cop monkey pattern (if I am not mistaken).
>
>
>
> - Mark
>
>
> On 2 Jun 2017, at 16:03, Thomas Hardjono <hardjono@mit.edu> wrote:
>
>
> Eve, Mark, UMA folks,
>
> This new paper (PDF) might be of some use in framing up the next level of discussions regarding "identity" and "data" and how to "federate data".
>
> Its permanent link is here:
>
http://arxiv.org/abs/1705.10880
>
> I'm thinking that the Claims-gathering flows in UMA and also the Consent-Receipt flows could use an "algorithm-identifier" value, effectively stating that "Alice consents to Algorithm X being run against her data set Y" (where the data set lies in her private Resource Server).
>
>
> Best.
>
> /thomas/
> <open-algorithms-identity-federation-1705.10880.pdf>

_______________________________________________
WG-UMA mailing list
WG-UMA@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-uma





--

Adrian Gropper MD

PROTECT YOUR FUTURE - RESTORE Health Privacy!
HELP us fight for the right to control personal health data.







--
@commonaccord

