Hmm. It occurs to me that my response to your question no. 1, which came first, was incomplete:

====
(1) How does the client obtain the issuer string to bootstrap? [attack surface 1] – there are real examples of this kind of attack being successful.

This is about the evil fedauthz attack; please see below for more.
====

Justin's response in a previous thread could be prepended here to explain how we concluded that this concern comes down to the evil fedauthz attack:

====
This WG already has a middle ground: each stage of the process carries (or at least, can carry) a pointer to the next stage. Over TLS, this is pretty solid and in line with what you’d previously proposed OAuth do.

The RS tells the client where the AS is — this points to a discovery document, but it’s not externally configured, at least, it comes from the RS.

The token endpoint at the AS can tell the client where the claims endpoint is. This closes the mix-up attack that you mentioned previously. This is currently an option (which I argued for) but I would say it’s even best practice given the mix up attack. 

From there, the client isn’t really susceptible to other substitution attacks like in the mix-up attack, unless I’m missing an attack surface.

The security considerations could mention the mix-up attack in a discussion of returning the claims endpoint from the token endpoint response, but I don’t think the protocol needs to change to address it.
====

(Our swimlane tries to demonstrate that Justin isn't, in fact, "missing an attack surface".)
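
For anyone who wants Justin's pointer chain spelled out, here's a minimal sketch of the bootstrap from the client's side. The hostnames are made up, and the header/metadata details (WWW-Authenticate carrying as_uri and ticket, the uma2-configuration document, the claims_interaction_endpoint field) are from memory of the Grant and FedAuthz specs, so please check them against the text:

    import requests

    # 1. Tokenless resource request; the RS answers 401 and points at the AS.
    resp = requests.get("https://rs.example.com/album/photo1")
    assert resp.status_code == 401
    # e.g. WWW-Authenticate: UMA realm="example",
    #      as_uri="https://as.example.com", ticket="016f84e8..."
    challenge = resp.headers["WWW-Authenticate"].split(" ", 1)[1]
    params = dict(p.strip().split("=", 1) for p in challenge.split(","))
    as_uri = params["as_uri"].strip('"')
    ticket = params["ticket"].strip('"')

    # 2. The as_uri is the only externally supplied pointer; every other
    #    endpoint comes from the AS's own discovery document, fetched over TLS.
    config = requests.get(as_uri + "/.well-known/uma2-configuration").json()
    token_endpoint = config["token_endpoint"]
    claims_endpoint = config["claims_interaction_endpoint"]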

So if we're to stipulate (by Justin's mention of TLS and the logic in the rest of my note) that the channel is trustworthy, then for the provisioning of the as_uri value not to be trustworthy, you have to assume a colluding RS with an evil AS-like service. (BTW, I don't think a good RS dealing with a singular AS could suddenly be fooled by a "code phishing" attack because it's provisioned with all the AS's endpoints through (extended) OAuth Meta as well -- as long as FedAuthz is being used.)


Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl


On Fri, Dec 29, 2017 at 10:27 AM, Eve Maler <eve@xmlgrrl.com> wrote:
Nat, thanks for the continued dialogue in email! (All WG folks, fyi, we're continuing to seek a special meeting time, but we may not be able to find one until ~Jan 9-11, or possibly Jan 5 after our next WG telecon. In the meantime, perhaps we can find full consensus in email prior to then.)

Re: short-circuit:

Yep, the group concurred with your conclusion about the mitigation options. (And Kantara is pretty much already in this business, so I think they'd stand ready to help here!) The Grant spec already discusses trust establishment and profiling explicitly for enhancing security (and privacy), and the FedAuthz spec both inherits all the Grant spec's security and privacy considerations and also reiterates the importance of "establishing agreements" about the respective considerations in the context of its looser coupling.

Regarding having a non-participant look for all attack surfaces, that sounds like a full security analysis, which is not currently on our docket to do (but may be performed eventually). However, one of the motivations of aligning much more closely with OAuth itself in the UMA2 work was to get the benefits of existing OAuth security. As far as we've seen in all of our analyses to date, it has had a positive impact in terms of (UMA) client security, the OAuth-secured protection API, the basics of the get-token/use-token flow, and so on.

The main UMA innovation, beyond the protection API, is the prefix to that flow, the authorization process with permission ticket passing, which in UMA1 had a potential session fixation attack à la OAuth 1.0. UMA2 rotates the permission ticket, averting the attack.
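
For reference, a rough sketch of one pass of the client's authorization process, showing where the rotation bites (the grant type URN is from the Grant spec; the other names and the error-handling shape are from memory, so verify against the text):

    import requests

    UMA_GRANT = "urn:ietf:params:oauth:grant-type:uma-ticket"

    def attempt(token_endpoint, ticket, client_auth):
        """One pass at the token endpoint with the currently valid ticket."""
        resp = requests.post(token_endpoint,
                             data={"grant_type": UMA_GRANT, "ticket": ticket},
                             auth=client_auth)
        body = resp.json()
        if body.get("error") == "need_info":
            # Rotation: the ticket just presented is now retired; only the
            # fresh one in this response can be used on the next attempt, so
            # a captured or fixated ticket can't be replayed into a later
            # exchange.
            return None, body["ticket"]
        return body.get("access_token"), None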

In our telecon of Dec 21, we discussed and answered the previous questions you posed:

(1) How does the client obtain the issuer string to bootstrap? [attack surface 1] – there are real examples of this kind of attack being successful.

This is about the evil fedauthz attack; please see below for more.

(2) Has DNS spoofing against the client for the issuer, combined with a falsely issued certificate, been considered? [attack surface 2] – there has been a real attack like this.

UMA already adopts OAuth's security considerations and TLS usage and is defined as an OAuth grant (and FedAuthz explicitly requires TLS usage). There's no need for us to add a specific security consideration around this because there's nothing in UMA that's unique relative to OAuth (or even other TLS-using protocols).

(3) What kind of agent is a typical RqP using? If it is a browser, then browser infection to rewrite the request/response is possible. [attack surface 3] – this is a rather common attack.

UMA works with clients of both public and confidential types (Grant Sec 3.3). Different security levels can thus be achieved if you're using the UMA grant, similarly to choosing different grants in base OAuth. The browser environment obviously raises the possibility of various attacks, which is why the OAuth security considerations put so much effort into that environment. The same security considerations apply.

(As I think I already noted, some UMA profiles have already been developed that include security considerations, including client types. This question is always a big tension, of course.)

Re: evil fedauthz:

The RS has an opportunity for attack as soon as it hands out the as_uri. (Our analysis of the UMA multi-endpoint mix-up attack shows that the claims_redirect_uri is not a way in.) This means the RS is standing up, or working with, an AS-like service that will respond at all the fake endpoints in the fake discovery document.

You pointed to your interesting blog post where you analyzed a code phishing attack. This does look to be similar to evil fedauthz because the provisioning of OAuth's "second endpoint" can happen at some time prior to the client's usage of the first endpoint. Everybody, I encourage you to read!

Here's my take: the UMA case differs in that the RS is the official, in-band provisioner (indirectly) of the discovery document (where the two endpoints of interest could potentially be used in either order, only one endpoint could be provisioned/used, etc., as outlined in the analysis swimlane above). If it's evil, it can point to a doctored as_uri, but then all the endpoints have to be "wrong", or the evil service won't have a chance to interrupt the client's communications with the proper AS because the permission ticket gets rotated every time. (If the client brings an access token to the RS and the RS subsequently ignores it, we're back to the short-circuit attack.) You have mentioned DNS poisoning and lack of TLS protection as making this attack easier. As noted, TLS is already required and all OAuth security protections are inherited, so the group's feeling is that saying more is not really in scope because these are not UMA-unique considerations.

By the way, as I already noted somewhere along the way, the emerging "distributed OAuth" I-D would also potentially be open to a similar trust attack (but tightly coupled OAuth could also be open to it -- it would just be invisible). I'm starting to think that having a formal federated authorization spec, with an OAuth-protected API for communication in an RO context, gives the best or at least most visible/auditable mitigation path.

====

Your earlier emailed questions concluded with:

It really depends on the answer, but unless there are specific considerations to block the surface, the answers to the questions “Can attacker inject false claims interaction endpoint” and “Does attacker receive claims redirection URI” seem to be both YES to me.

I understand what you mean; note that the arrows' locations were meant to be significant in answering your original submitted comment ("To cope with Mix-up attack etc., the claims_redirect_uri MUST be unique per AS and the authorization request MUST include it."). We developed the analysis of evil fedauthz in response to it being the only vector for a bad claims interaction endpoint.

The group has made a case that the claims_redirect_uri can't be the vector for a mix-up, because by then the client is already working with either an entire set of good endpoints or an entire set of bad endpoints. The contention at this point is that the short-circuit and evil fedauthz attacks don't need further mitigation in the specs (it's okay to assume TLS and no DNS poisoning, and the current considerations around trust establishment and profiling are sufficient). This is the key to moving forward right now.

...

Looking at the happy accident of OAuth Meta for solving the code phishing attack, on the one hand, I don't think the mitigation would apply to UMA2 since we don't use the authorization endpoint as such, but on the other hand, I don't think the attack applies to UMA2 either. The topic of OAuth Meta/discovery documents does make me wonder if future improvements to packaging of discovery data could somehow cleverly mitigate the as_uri problem.

Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl


On Thu, Dec 28, 2017 at 9:52 PM, n-sakimura <n-sakimura@nri.co.jp> wrote:

Re: OAuth/UMA short-circuit attack

If I understand correctly, this is an attack where the RS is bad. At that point, we cannot do much, and as is rightly pointed out, it is more of a trust issue. We could utilize third-party auditing, reputation services, etc. to establish better trust. (Hint: a new Kantara Initiative business?)

 

From the security point of view, it may be more interesting to think of a network attacker that is trying to insert itself between the client, user-agent, RS, and AS which are all good. (I am not saying this would be possible.)

 

Re: Evil federated authorization attack

 

I am still trying to understand the attack, but Mechanism 1 seems to be pretty similar to what I described as “code phishing attack”[1] in my blog, though what is being stolen is different.

 

[1] https://nat.sakimura.org/2016/01/22/code-phishing-attack-on-oauth-2-0-rfc6749/

 

Mechanism 2 seems to be more involved. One possible method is for the attacker to DNS-poison the client. How easy that is depends on where the client resides. If the client resides on a mobile device, then setting up a malicious Wi-Fi hotspot would go a long way. If it is server-based, then one has to crack the DNS server that the client is using, which is more challenging but not impossible. Typically, one can DoS the real server to shut it down and set up a new one with the same IP address. (Of course, you have to get into the same segment as the DNS server to begin with.) Now, this could work even if the RS is a good one. The attacker can create a Bad RS that disguises itself as the Good RS. Of course, all of this is pretty hard if all the endpoints are TLS-protected. The attacker would have to obtain a certificate for it (but we all know that can be done). If the RS is not TLS-protected, it will be much easier.

 

 

Nat Sakimura

 


 

 

 

From: WG-UMA [mailto:wg-uma-bounces@kantarainitiative.org] On Behalf Of Eve Maler
Sent: Friday, December 29, 2017 6:01 AM
To: wg-uma@kantarainitiative.org WG <wg-uma@kantarainitiative.org>
Subject: Re: [WG-UMA] Trying to analyze "trust attacks"

 

Tim and I were just having an ad hoc UMA Legal call today, and I was catching him up on this discussion. Some further thoughts:

 

First, it occurs to me that the evil federated authorization attack doesn't have to be applied indiscriminately; the evil parties could also collude to detect a favored client and give access only to it, vs. applying the special flow all the time. However, it still seems like a high-cost and high-friction mechanism to use, vs. applying the short-circuit attack (which evil insiders with access to the RS's OAuth or UMA code could manage on their own without going through the expense of checking access tokens).

 

Second, apropos of the point that "Mitigation would be in the realm of access auditing, implementation auditing, deployment auditing, trust establishment with the AS and AS/RO, and RS accreditation. Note that adding an endpoint at the AS to enable the RS to report access given would be a self-reporting function." ... our joint call with the CIS WG (and hoped-for follow-on work) is relevant, as is our business model/legal framework effort. We have already begun mapping UMA technical artifacts to legal devices, and identifying junctures where audit trails/receipts are possible. I think there's actually a way to mitigate this (unlikely?) trust attack somewhat, using a receipts/technology/UX approach:

 

We had suggested in our call on Dec 14 that the PAT might be involved in a mitigation of the evil fedauthz attack. And the Legal calls have identified request/response calls between (legitimate) technical entities as some of the artifacts that can be audited -- think of them as subject to being "receipt-able". The wanting-to-be-evil RS did have to get a PAT with the real AS at some point, authorized by the RO. We believe the only way to get an evil AS involved is for the RS to start with it through a bad as_uri; our rotating permission ticket protects later client-AS interactions.

 

What if both the legitimate AS and the wishes-it-were-evil RS have to send real-time notifications to the RO (or their "shoebox"/chosen monitoring service) when the RS uses the protection API due to client resource requests (getting initial permission tickets issued and possibly introspecting tokens)? If they're both acting legitimately, the notifications should be identical/complementary. You can't prove a negative (such as if the RS never uses the real AS and just always gives access through the evil one), but this is another layer of checking that could be audited for, have implementation and deployment accreditation done for, etc.
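
Purely to illustrate the reconciliation step (nothing here is specified anywhere; every field and name is hypothetical):

    from collections import Counter

    def reconcile(rs_receipts, as_receipts):
        """Diff the RS's self-reported protection API use against the AS's view.

        Each receipt is a hypothetical dict such as
        {"event": "permission_request", "resource_id": "...", "ticket": "..."}.
        Matching pairs are the expected case; anything unmatched is worth
        auditing.
        """
        def key(r):
            return (r["event"], r["resource_id"], r["ticket"])
        rs_side = Counter(key(r) for r in rs_receipts)
        as_side = Counter(key(r) for r in as_receipts)
        only_rs = list((rs_side - as_side).elements())  # RS reported, AS never saw
        only_as = list((as_side - rs_side).elements())  # AS saw, RS never reported
        return only_rs, only_as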


 

Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl

 

 

On Wed, Dec 20, 2017 at 12:59 PM, Eve Maler <eve@xmlgrrl.com> wrote:

In last week's WG call, we discussed the potential for a "mix-up attack" on UMA; please check out my new analysis swimlane, where the conclusion so far is that this is not a viable attack.

 

We also started discussing a different sort of attack where a bad RS might be colluding with a bad AS-type service behind the back of the real AS. After doing a bunch of thinking and analysis of this situation and another similar one, I'm starting to think of them -- not so much as security attacks -- but as "trust attacks". Let me describe them in turn, newest one first.

 

OAuth/UMA short-circuit attack

  • What needs protection? RO's AS-protected resources at RS
  • Entities to protect them from? Colluding RS and client (and possibly RqP using the client), against a naive AS and RO
  • Likelihood of attack? Higher for bespoke implementations, lower for off-the-rack/third-party implementations; for OAuth, requires an RS "insider attack" against a naive AS, while for UMA (or other "distributed OAuth") it's potentially likelier in the case of an RS in a distinct domain from an AS (FedAuthz loose coupling)
  • Worth the effort to mitigate? Technical means through OAuth/UMA don't seem possible, so all methods are relatively "soft"
  • Seriousness of consequences if you fail? Very high since the RS can give specific recipients access to specific resources

Here, the RS has received, during a client's resource request, an access token whose scopes (or, in the case of UMA, permissions) direct it to give access only to extent X, but -- based on its out-of-band recognition of the client (and/or the requesting party in the case of UMA) -- it decides to go ahead and illegitimately give further access to extent Y.
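
Schematically (the permissions structure is the FedAuthz token introspection shape as I recall it; everything else is illustrative), the difference between the honest check and the short-circuit is just this:

    def permitted_scopes(introspection, resource_id):
        """Extent X: the scopes the AS actually authorized for this resource."""
        if not introspection.get("active"):
            return set()
        for perm in introspection.get("permissions", []):
            if perm["resource_id"] == resource_id:
                return set(perm["resource_scopes"])
        return set()

    def give_access(introspection, resource_id, requested_scope,
                    client_id, favored_clients):
        honest = requested_scope in permitted_scopes(introspection, resource_id)
        # The short-circuit: out-of-band recognition of a favored client (or
        # requesting party) quietly widens access to extent Y, and the AS and
        # RO never see it happen.
        return honest or client_id in favored_clients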

 

Mitigation would be in the realm of access auditing, implementation auditing, deployment auditing, trust establishment with the AS and AS/RO, and RS accreditation. Note that adding an endpoint at the AS to enable the RS to report access given would be a self-reporting function.

 

Most known UMA ecosystem types to date are relatively "classic" on the server side and involve statically established "identity federation" style trust between the AS and RS (if the two aren't already colocated); this means the RS has to get OAuth client credentials from the AS and probably agree to an associated contract. The RS then gets a PAT from the AS by authorization of the RO. 
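
(For concreteness, the PAT is what authorizes calls like this one to the protection API -- a rough sketch, with the permission endpoint shape from memory of FedAuthz:)

    import requests

    def request_permission_ticket(permission_endpoint, pat, resource_id, scopes):
        """RS asks the AS for a permission ticket on behalf of a client's request."""
        resp = requests.post(
            permission_endpoint,
            headers={"Authorization": "Bearer " + pat},  # the RO-authorized PAT
            json={"resource_id": resource_id, "resource_scopes": scopes},
        )
        resp.raise_for_status()
        return resp.json()["ticket"]  # handed back to the client in the 401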

 

(I haven't seen this one discussed before, e.g. in existing threat docs. Has anyone else?)

 

This attack could be really effective, if Armageddon-y, and if the RS applied it in a very selective way, it's likely it could get away with it for a longer time. A perhaps analogous example of what may have been "lower-level employees" (insiders) of multiple related companies getting away with price-fixing on a single product type for 14 years just came to light due to auditing.

 

I don't see anything worth adding to the specs that we haven't already said in our profiling and trust establishment sections. We could add this analysis to a new threat model document if we think it even counts as a true security threat.

 

Evil federated authorization attack

  • What needs protection? RO’s AS-protected resources at RS? RqP's claims containing personal data?
  • Entities to protect it from? RS(bad) and AS(bad) colluding, against all the other entities, which are uninvolved?
  • Likelihood of attack? See discussion
  • Worth the effort to mitigate? See discussion
  • Seriousness of consequences if you fail? See discussion

Here, the evil RS has a properly formed trust relationship with the true AS, but it has a companion server-side attacker with which it colludes to achieve...I'm not sure what ends yet.

 

I can see two mechanisms of attack, as outlined below. I haven't yet made swimlanes for this, but perhaps I can do so before the call tomorrow.

 

Mechanism 1: Bad actors replace the discovery endpoints wholesale

  • The bad actors construct and host a replacement discovery document with false endpoints (see the sketch just after this list).
  • RS(bad) provides a false as_uri value when clients make tokenless resource requests.
  • Thereafter, RS(bad) and AS(bad) are the only server-side entities interacting with Client(uninvolved) and RqP(uninvolved) throughout every authorization process.
  • If RqP(uninvolved) Bob is redirected to AS(bad) for interactive claims gathering, can he tell that it's not AS(good)? Generally it’s not considered reliable for humans to detect this, so let's assume not. AS(bad) could steal Bob's personal data.
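
Here's the sketch promised above of what "wholesale" replacement has to mean: every endpoint in the doctored document must point at AS(bad), or the client's next step lands back at AS(good) and the colluders lose control of the flow. The hostnames are fake and the metadata field names are from memory, so check them against the specs.

    GOOD = "https://as.example.com"
    BAD = "https://as-bad.example.net"

    good_doc = {
        "issuer": GOOD,
        "token_endpoint": GOOD + "/token",
        "claims_interaction_endpoint": GOOD + "/claims",
        "permission_endpoint": GOOD + "/perm",
        "introspection_endpoint": GOOD + "/introspect",
    }

    # The replacement discovery document the bad actors have to host and serve
    # from the false as_uri: every value rewritten, not just one.
    fake_doc = {k: v.replace(GOOD, BAD) for k, v in good_doc.items()}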

Discussion: The uninvolved requesting-side entities will presumably get any access they request, and RO(good) and AS(good) are none the wiser. Also, AS(bad) got RqP(uninvolved)'s claims. But RS(bad) has presumably violated its terms of service, which may eventually be discovered through auditing etc., and this seems like an awful lot of trouble to go to vs. the OAuth/UMA short-circuit attack above, since "random" recipients are being given access to "random" resources rather than in a targeted fashion. In other words, the RS is partnering with another service at great cost and expense to be not just a bad guy but an overall jerk, vs. seeking out specific ways to get the resources out to desired parties on the sly.

 

Mechanism 2: AS(bad) functions as a MITM proxying requests to AS(good)

 

Discussion: Again, it all starts with the RS's conveyance of the as_uri value. If AS(bad) has to proxy requests to and responses from AS(good), at what point does it actually get to inject something or steal something? If it provides a false rotated permission ticket, that will fail at the client's next request. It could provide a fake RPT, but it wouldn't work. It could provide a need_info error that asks the client to push more claim information than needed. But since this type of information is most likely to be pre-negotiated a la SSO attributes, it may not be understood (and we already have discussions of that and security and privacy considerations around it in the spec). Looking at the AS endpoint responses, I can't think of anything else damaging it could do.
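
Schematically, the proxy only gets to touch what flows through it, and each injection point named above seems to be a dead end (purely illustrative; the endpoint URL is made up):

    import requests

    GOOD_TOKEN_ENDPOINT = "https://as.example.com/token"  # illustrative

    def proxy_token_request(client_form):
        """AS(bad) relaying the client's token request to AS(good)."""
        upstream = requests.post(GOOD_TOKEN_ENDPOINT, data=client_form).json()
        if upstream.get("error") == "need_info":
            # Injection point 1: swap in a false rotated ticket -- it fails at
            # the client's next request.
            # Injection point 2: pad required_claims to ask for more than
            # needed -- un-negotiated claims are likely not to be understood.
            return upstream
        # Injection point 3: hand back a fake RPT -- per the discussion above,
        # it simply wouldn't work.
        return upstream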

 

Regarding this whole second "trust attack", even though it's not fully analyzed here, I can't help feeling so far that it's not something that warrants editing the specs further. I would very much welcome input from others.

 

Eve Maler
Cell +1 425.345.6756 | Skype: xmlgrrl | Twitter: @xmlgrrl

 

 

