Tim and I were just having an ad hoc UMA Legal call today, and I was catching him up on this discussion. Some further thoughts:

First, it occurs to me that the evil federated authorization attack doesn't have to be applied indiscriminately; the evil parties could also collude to detect a favored client and give access only to it, vs. applying the special flow all the time. However, it still seems like a high-cost and high-friction mechanism to use, vs. applying the short-circuit attack (which evil insiders with access to the RS's OAuth or UMA code could manage on their own without going through the expense of checking access tokens).

Second, apropos of the point that "Mitigation would be in the realm of access auditing, implementation auditing, deployment auditing, trust establishment with the AS and AS/RO, and RS accreditation. Note that adding an endpoint at the AS to enable the RS to report access given would be a self-reporting function." ... our joint call with the CIS WG (and hoped-for follow-on work) is relevant, as is our business model/legal framework effort. We have already begun mapping UMA technical artifacts to legal devices, and identifying junctures where audit trails/receipts are possible. I think there's actually a way to mitigate this (unlikely?) trust attack somewhat, using a receipts/technology/UX approach:

We had suggested in our call on Dec 14 that the PAT might play a role in mitigating the evil fedauthz attack. And the Legal calls have identified request/response calls between (legitimate) technical entities as some of the artifacts that can be audited -- think of them as subject to being "receipt-able". The wanting-to-be-evil RS did have to get a PAT from the real AS at some point, authorized by the RO. We believe the only way to get an evil AS involved is for the RS to point the client at it through a bad as_uri; our rotating permission ticket protects later client-AS interactions.
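
To make the "receipt-able" artifact concrete, here is a minimal sketch of the RS obtaining that PAT from the real AS (the endpoint, client credentials, and authorization code below are all hypothetical). In UMA 2.0 federated authorization the PAT is just an OAuth access token with the uma_protection scope, issued only because the RO authorized the pairing -- which is what makes it auditable evidence of the RS-AS relationship.

    # Minimal sketch (hypothetical URLs and credentials): the RS exchanges an
    # authorization code, granted by the RO, for a PAT at the *real* AS.
    import requests

    AS_TOKEN_ENDPOINT = "https://as.example.com/token"  # assumed endpoint

    resp = requests.post(
        AS_TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": "SplxlOBeZQQYbYS6WxSbIA",            # code issued after RO consent
            "redirect_uri": "https://rs.example.com/callback",
            "scope": "uma_protection",
        },
        auth=("rs-client-id", "rs-client-secret"),        # RS's client credentials at the AS
    )
    pat = resp.json()["access_token"]  # the PAT: an RO-authorized artifact tying this RS to this AS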

What if both the legitimate AS and the wishes-it-were-evil RS have to send real-time notifications to the RO (or their "shoebox"/chosen monitoring service) when the RS uses the protection API due to client resource requests (getting initial permission tickets issued and possibly introspecting tokens)? If they're both acting legitimately, the notifications should be identical/complementary. You can't prove a negative (such as if the RS never uses the real AS and just always gives access through the evil one), but this is another layer of checking that could be audited for, have implementation and deployment accreditation done for, etc.
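
As an illustration only -- the notification channel and receipt format below are hypothetical, not anything the specs define -- the RO's shoebox/monitoring service could match the two parties' reports by permission ticket and flag anything reported by only one side:

    # Hypothetical receipt-matching logic at the RO's chosen monitoring service:
    # both the AS and the RS report protection API activity per permission ticket,
    # and tickets reported by only one side get flagged for audit.
    from collections import defaultdict

    class ReceiptMatcher:
        def __init__(self):
            self.seen = defaultdict(set)  # permission ticket -> {"AS", "RS"}

        def report(self, ticket: str, reporter: str) -> None:
            """Record that `reporter` ("AS" or "RS") saw protection API activity for `ticket`."""
            self.seen[ticket].add(reporter)

        def unmatched(self) -> list[str]:
            """Tickets reported by only one party -- a hint that someone isn't telling the whole story."""
            return [t for t, who in self.seen.items() if who != {"AS", "RS"}]

    matcher = ReceiptMatcher()
    matcher.report("ticket-123", "AS")
    matcher.report("ticket-123", "RS")   # matched: both sides reported
    matcher.report("ticket-456", "RS")   # unmatched: only the RS reported
    print(matcher.unmatched())           # -> ['ticket-456']

As noted, this can't prove the negative case, but mismatches at least become visible in near real time rather than only at audit time.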


Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl


On Wed, Dec 20, 2017 at 12:59 PM, Eve Maler <eve@xmlgrrl.com> wrote:
In last week's WG call, we discussed the potential for a "mix-up attack" on UMA; please check out my new analysis swimlane, where the conclusion so far is that this is not a viable attack.

We also started discussing a different sort of attack where a bad RS might be colluding with a bad AS-type service behind the back of the real AS. After doing a bunch of thinking and analysis of this situation and another similar one, I'm starting to think of them -- not so much as security attacks -- but as "trust attacks". Let me describe them in turn, newest one first.

OAuth/UMA short-circuit attack
  • What needs protection? RO's AS-protected resources at RS
  • Entities to protect them from? Colluding RS and client (and possibly RqP using the client), against a naive AS and RO
  • Likelihood of attack? Higher for bespoke implementations, lower for off-the-rack/third-party implementations; for OAuth, requires an RS "insider attack" against a naive AS, while for UMA (or other "distributed OAuth") it's potentially likelier in the case of an RS in a distinct domain from an AS (FedAuthz loose coupling)
  • Worth the effort to mitigate? Technical means through OAuth/UMA don't seem possible, so all methods are relatively "soft"
  • Seriousness of consequences if you fail? Very high since the RS can give specific recipients access to specific resources
Here, during a client's resource request, the RS has received an access token whose scopes (or, in the case of UMA, permissions) direct it to give access only to extent X, but -- based on its out-of-band recognition of the client (and/or, in UMA, the requesting party) -- it decides to go ahead and illegitimately give broader access to extent Y.
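
A minimal sketch of that decision point (all identifiers below are hypothetical): the introspection result tells the RS exactly what the AS authorized, but nothing in the token exchange prevents a colluding RS, as the enforcement point, from serving more than that to a client it recognizes out of band.

    # Sketch of the short-circuit: the RPT's introspected permissions authorize
    # only extent X, but the RS grants extent Y anyway to a favored client.
    FAVORED_CLIENTS = {"colluding-client-id"}  # out-of-band collusion list

    def granted_scopes(introspection_result: dict) -> set[str]:
        """Scopes the AS actually authorized, per the RPT's permissions."""
        return {
            scope
            for perm in introspection_result.get("permissions", [])
            for scope in perm.get("resource_scopes", [])
        }

    def decide(introspection_result: dict, client_id: str, requested: set[str]) -> str:
        if requested <= granted_scopes(introspection_result):
            return "grant (extent X, as authorized)"
        if client_id in FAVORED_CLIENTS:
            return "grant (extent Y, illegitimate and invisible to the AS)"
        return "deny"

    rpt_info = {"active": True,
                "permissions": [{"resource_id": "r1", "resource_scopes": ["read"]}]}
    print(decide(rpt_info, "colluding-client-id", {"read", "write"}))  # extent Y granted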

Mitigation would be in the realm of access auditing, implementation auditing, deployment auditing, trust establishment with the AS and AS/RO, and RS accreditation. Note that adding an endpoint at the AS to enable the RS to report access given would be a self-reporting function.

Most known UMA ecosystem types to date are relatively "classic" on the server side and involve statically established "identity federation" style trust between the AS and RS (if the two aren't already colocated); this means the RS has to get OAuth client credentials from the AS and probably agree to an associated contract. The RS then gets a PAT from the AS by authorization of the RO. 
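
For reference, here is a sketch of what that PAT then gets used for (the permission endpoint URL and resource details are hypothetical): every protection API call the RS makes, such as requesting a permission ticket when a client arrives without a token, carries the PAT as a bearer token, so each call is inherently tied to the RO-authorized RS-AS relationship.

    # Sketch: the RS asks the AS for a permission ticket on behalf of a
    # tokenless client, authorizing the call with its PAT.
    import requests

    PERMISSION_ENDPOINT = "https://as.example.com/perm"   # hypothetical AS endpoint
    pat = "pat-value-obtained-earlier"                    # placeholder PAT

    resp = requests.post(
        PERMISSION_ENDPOINT,
        headers={"Authorization": f"Bearer {pat}"},
        json=[{"resource_id": "photo-album-r1", "resource_scopes": ["view"]}],
    )
    ticket = resp.json()["ticket"]  # returned to the client in the 401's WWW-Authenticate header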

(I haven't seen this one discussed before, e.g. in existing threat docs. Has anyone else?)

This attack could be really effective, if Armageddon-y; and if the RS applied it very selectively, it could likely get away with it for a long time. A perhaps analogous example just came to light through auditing: what may have been "lower-level employees" (insiders) at multiple related companies got away with price-fixing on a single product type for 14 years.

I don't see anything worth adding to the specs that we haven't already said in our profiling and trust establishment sections. We could add this analysis to a new threat model document if we think it even counts as a true security threat.

Evil federated authorization attack
  • What needs protection? RO’s AS-protected resources at RS? RqP's claims containing personal data?
  • Entities to protect them from? RS(bad) and AS(bad) colluding, against all the other entities, which are uninvolved?
  • Likelihood of attack? See discussion
  • Worth the effort to mitigate? See discussion
  • Seriousness of consequences if you fail? See discussion
Here, the evil RS has a properly formed trust relationship with the true AS, but it has a companion server-side attacker with which it colludes to achieve...I'm not sure what ends yet.

I can see two mechanisms of attack, as outlined below. I haven't yet made swimlanes for this, but perhaps I can do so before the call tomorrow.

Mechanism 1: Bad actors replace the discovery endpoints wholesale
  • The bad actors construct and host a replacement discovery document with false endpoints.
  • RS(bad) provides a false as_uri value when clients make tokenless resource requests.
  • Thereafter, RS(bad) and AS(bad) are the only server-side entities interacting with Client(uninvolved) and RqP(uninvolved) throughout every authorization process.
  • If RqP(uninvolved) Bob is redirected to AS(bad) for interactive claims gathering, can he tell that it's not AS(good)? Humans generally can't be relied on to detect this, so let's assume not. AS(bad) could then steal Bob's personal data.
Discussion: The uninvolved requesting-side entities will presumably get any access they request, and RO(good) and AS(good) are none the wiser. Also, AS(bad) got RqP(uninvolved)'s claims. But RS(bad) has presumably violated its terms of service, which may eventually be discovered through auditing etc., and this seems like an awful lot of trouble to go to vs. the OAuth/UMA short-circuit attack above, since "random" recipients are being given access to "random" resources rather than in a targeted fashion. In other words, the RS is partnering with another service at great cost and expense to be not just a bad guy but an overall jerk, vs. seeking out specific ways to get the resources out to desired parties on the sly.
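
For concreteness, here is where Mechanism 1 pivots (the URLs and ticket values below are hypothetical): the WWW-Authenticate header on the RS's response to a tokenless resource request is the client's only source for the as_uri, so RS(bad) simply names AS(bad) there, and everything downstream -- permission ticket, claims gathering, RPT issuance -- happens against the wrong AS.

    # Sketch of the RS's 401 response to a tokenless resource request. The
    # client learns which AS to visit solely from this header.
    def unauthorized_response(as_uri: str, ticket: str) -> dict:
        return {
            "status": 401,
            "headers": {
                "WWW-Authenticate": f'UMA realm="example", as_uri="{as_uri}", ticket="{ticket}"'
            },
        }

    # Legitimate RS:
    print(unauthorized_response("https://as.good.example.com", "016f84e8-f9b9"))
    # RS(bad) colluding with AS(bad) -- same shape, different as_uri:
    print(unauthorized_response("https://as.bad.example.net", "deadbeef-0000"))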

Mechanism 2: AS(bad) functions as a MITM proxying requests to AS(good)

Discussion: Again, it all starts with the RS's conveyance of the as_uri value. If AS(bad) has to proxy requests to and responses from AS(good), at what point does it actually get to inject something or steal something? If it provides a false rotated permission ticket, that will fail at the client's next request. It could provide a fake RPT, but it wouldn't work. It could provide a need_info error that asks the client to push more claim information than needed. But since this type of information is most likely to be pre-negotiated a la SSO attributes, it may not be understood (and we already have discussions of that and security and privacy considerations around it in the spec). Looking at the AS endpoint responses, I can't think of anything else damaging it could do.
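
To illustrate why the false-rotated-ticket path dead-ends (the ticket store below is a hypothetical sketch, not spec-defined implementation detail): AS(good) only honors permission tickets it issued itself, and each is invalidated on use, so when AS(bad) forwards the client's next token request carrying a substituted ticket value, AS(good) rejects it.

    # Hypothetical one-time-use permission ticket store at AS(good).
    import secrets

    class TicketStore:
        def __init__(self):
            self._issued: set[str] = set()

        def issue(self) -> str:
            ticket = secrets.token_urlsafe(24)
            self._issued.add(ticket)
            return ticket

        def redeem(self, ticket: str) -> bool:
            """One-time use: the ticket rotates by being invalidated when redeemed."""
            if ticket in self._issued:
                self._issued.discard(ticket)
                return True
            return False  # unknown or already-used ticket -> request fails

    store = TicketStore()
    real = store.issue()
    forged = "anything-as-bad-made-up"
    print(store.redeem(forged))  # False: fails at the client's next request
    print(store.redeem(real))    # True, and the ticket is now rotated out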

Regarding this whole second "trust attack", even though it's not fully analyzed here, I can't help feeling so far that it's not something that warrants editing the specs further. I would very much welcome input from others.

Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl