Gotcha. I’ve amended as follows: http://kantarainitiative.org/confluence/display/uma/UMA+telecon+2015-06-25

… The earliest stuff in the spec is the "least interesting" from an UMA perspective, but the most core UMA stuff needs tester intervention. This actually gets into the question of natural variability among use cases that exercise different options in the spec, such as claims-gathering from human requesting parties vs. autonomous web service clients. Could we use "profiles" to distinguish these for testing purposes? Justin suggests that the UMA testing platform could define a set of claims to test against — like “get an OIDC token from a known provider with a known email address”, pointing to a shared/test account. There appears to be interest in doing this. If the claim types eventually prove to be of wide enough interest for actual deployment ecosystems, somebody could write up a claims profile for them. Does it make sense to bat around a single test (or set of related tests) on email? There are no juicy in-person opportunities coming up until IIW, by which time we hope to have a test suite ready to try out. Here's how spec references might look, in the form of the emerging OIDC RP test suite: <https://rp.certification.openid.net:8080/test_list> (The OP tests didn't have this, and it led to endless queries.) …

Let me know if anything needs any further correction.
On 25 Jun 2015, at 10:58 AM, Justin Richer <jricher@mit.edu> wrote:
Clarification that I made on the call that didn’t make it to the notes: I was not suggesting that UMA define its own set of attributes. Instead, I was saying that the UMA testing platform could define a set of claims to test against — like “get an OIDC token from a known provider with a known email address”, pointing to a shared/test account. The comparison with OIDC was that even though there are a lot of domain-specific claims out there, OIDC defines a set of core ones and their semantics.
— Justin
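As a sketch of the kind of claims test Justin describes, the suite could check that an ID token obtained from the known test provider carries the agreed shared-account email claim. The issuer URL and email address below are hypothetical placeholders (the WG would publish real test values), and a real test would verify the token signature against the OP's JWKS rather than skip it:

```python
import base64
import json

# Hypothetical shared test-account values; not defined by any spec.
EXPECTED_ISSUER = "https://op.example.com"   # known test provider (assumption)
EXPECTED_EMAIL = "uma-test@example.com"      # known email address (assumption)

def decode_payload(id_token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature.
    A real conformance test would validate against the OP's JWKS."""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def check_email_claim(id_token: str) -> bool:
    """The test assertion: right issuer, right email claim."""
    claims = decode_payload(id_token)
    return (claims.get("iss") == EXPECTED_ISSUER
            and claims.get("email") == EXPECTED_EMAIL)

# Build a fake unsigned token purely for illustration.
header = base64.urlsafe_b64encode(json.dumps({"alg": "none"}).encode()).rstrip(b"=")
payload = base64.urlsafe_b64encode(json.dumps(
    {"iss": EXPECTED_ISSUER, "email": EXPECTED_EMAIL}).encode()).rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

print(check_email_claim(token))  # True
```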
On Jun 25, 2015, at 1:23 PM, Eve Maler <eve@xmlgrrl.com> wrote:
http://kantarainitiative.org/confluence/display/uma/UMA+telecon+2015-06-25

Minutes

Our next call will be at the APAC-friendly time.

Roll call

Quorum was reached.

Minutes approval

MOTION: Andi moves: Approve the minutes of UMA telecon 2015-05-28 <http://kantarainitiative.org/confluence/display/uma/UMA+telecon+2015-05-28> and UMA telecon 2015-06-04 <http://kantarainitiative.org/confluence/display/uma/UMA+telecon+2015-06-04>, and read into the minutes the notes of UMA telecon 2015-06-11 <http://kantarainitiative.org/confluence/display/uma/UMA+telecon+2015-06-11>. APPROVED by unanimous consent.

Plenary report

Kantara's work streams are increasingly significant. The two streams are Connected Life (where UMA sits) and Trust Services. The IRM and Identities of Things work streams are very cool, and Consent Receipts has major synergies with our work. The presentations were of high quality. Links to materials are coming soon. Eve gave an UMA wireframe demo using a "connected car" use case. The Twitter stream from the plenary is here <https://twitter.com/search?q=%23trustkantara&src=typd>.

Interop tests

Since the last time we talked about this, the OIDC test suite actually launched. Thinking in terms of all of UMA, OIDC, SAML, and OAuth, it's useful to come up with a lowest common denominator. Roland has in mind a common platform, with branches specific to each of the four. In the last week, he's been preparing a new version of the OIDC suite, "V2.0". He'll be ready to move to the other protocols soon. A SAML project is getting going, with Internet2, the Austrian government, and others contributing to the costs. The actual writing of the test suite is less work than test design and result interpretation. Rushing to implementation may almost work against our goals.
Since a standard is written in natural language, a test suite "of record" ultimately becomes a harder-edged version of the spec, because it's machine-readable; it's "the spec" as far as implementers using the spec are concerned. Our goal is not to do what the OIDC conformance test suite developers did in a few cases, which was to make the test suite tighter than the specs. If we find any deltas in interpretation that require changes to the spec, we will go through the appropriate process to change it. But our goal is consistency.

How should we prioritize? What entity/ies should we test first? The AS is the target Roland and Eve were thinking of first. The RS is perhaps another candidate since it's server-side, though it shares the problem that OAuth has always had regarding interop testing, in that most of its interface is entirely unstandardized.

The earliest stuff in the spec is the "least interesting" from an UMA perspective, but the most core UMA stuff needs tester intervention. This actually gets into the question of natural variability among use cases that exercise different options in the spec, such as claims-gathering from human requesting parties vs. autonomous web service clients. Could we use "profiles" to distinguish these for testing purposes? Justin suggests standard sets of attributes a la OIDC for UMA. Mike points out that the OIDC claims are LDAP-unfriendly due to their use of underscores. Eve points out that OIDC claims are available for standardized exchange in UMA today. So are we interested in an interop testing profile so we can just "get on with it" as an interop simplifying assumption? It sounds like that's where our heads are at for now. Does it make sense to bat around a single test (or set of related tests) on email? There are no juicy in-person opportunities coming up until IIW, by which time we hope to have a test suite ready to try out.
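The claims-gathering variant for human requesting parties could be exercised by a test that builds the redirect to the AS's requesting-party claims endpoint. This is a minimal sketch: the endpoint path and all parameter values are made-up illustrations, and the parameter names follow the UMA 1.0 claims-gathering flow as commonly described, so the spec text should be treated as authoritative:

```python
from urllib.parse import urlencode

# All values below are hypothetical test fixtures, not spec-defined constants.
claims_endpoint = "https://as.example.com/rqp_claims"   # AS claims endpoint (assumption)
params = {
    "client_id": "test-client",                          # assumption
    "ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de",    # permission ticket from the RS/AS
    "claims_redirect_uri": "https://client.example.com/redirect_claims",
    "state": "abc123",                                   # client-chosen CSRF-protection value
}

# The client redirects the requesting party's browser to this URL;
# the AS interacts with the person to gather claims, then redirects back.
redirect_url = claims_endpoint + "?" + urlencode(params)
print(redirect_url)
```

A test harness would then follow the redirect back to `claims_redirect_uri` and confirm the AS returned an updated ticket.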
Here's how spec references might look, in the form of the emerging OIDC RP test suite: <https://rp.certification.openid.net:8080/test_list> (The OP tests didn't have this, and it led to endless queries.) August/September will be the active test suite development period for the WG.

Outstanding AIs

AI: Thomas: Review the charter for potential revisions in this annual cycle.
AI: Marcelo: Review the Wikipedia page(s) for potential revision (multiple languages). DONE. Eve can take the comments and revise the English, from which others can revise the other languages.
AI: Tim: Expound on the licensing idea in email.
AI: Sal: Investigate IP implications of formal liaison activities with other Kantara groups with the LC, and ultimately draft an LC Note as warranted.
AI: Gil: Edit the UIG to add Ishan's content and excerpt it for Eve to add to the FAQ, pointing everyone to the UIG.
AI: Sal: Fill out the IDESG form to have UMA adopted as a recommended standard for use in the IDESG framework.
AI: Mike: Rework the UIG section on organizations as ROs and RqPs.
AI: Eve: Update GitHub.
AI: Maciej: Write as many sections for the UIG as he can.
AI: Justin: Write a UIG section on default-deny and race conditions.

Attendees

As of 11 Jun 2015, quorum is 8 of 14.

Eve
Arlene Mordeno - attended CIS UMA preso - involved in CSA and IAM generally
Sal
Mark D
Domenico
Marcelo
Andi
Mike
Maciej
Thomas
François
Ishan

Non-voting participants:

Sarah
Justin
Roland
Abhi - now based in the US!
Katie
Eve Maler | cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl | Calendar: xmlgrrl@gmail.com <mailto:xmlgrrl@gmail.com>
_______________________________________________
WG-UMA mailing list
WG-UMA@kantarainitiative.org
http://kantarainitiative.org/mailman/listinfo/wg-uma