Many people may still remember the debates a few years ago about the HMQV protocol, a modification of MQV whose primary aim was provable security. Various attacks on the original HMQV were later discovered. In the subsequent submission to the IEEE P1363 standards, the HMQV protocol was revised to address the reported weaknesses.
However, the revised HMQV protocol is still vulnerable. In a paper that I presented at Financial Cryptography ’10, I described two new attacks. The first presents a counterexample that invalidates the basic authentication feature of the protocol. The second is generally applicable to other key exchange protocols, even though many of them carry formal security proofs.
The first attack is particularly concerning, since the formal security proofs failed to detect this basic flaw. The HMQV protocol explicitly specifies that the Certificate Authority (CA) does not need to validate the public key beyond checking that it is not zero. (This is one reason why HMQV claims to be more efficient than MQV.) So the protocol allows the CA to certify a small-subgroup element as the user’s “public key”. Then anyone who knows this “public key” can successfully pass authentication using HMQV (see the paper for details). Note that in this case a private key doesn’t even exist, yet the authentication succeeds. What is the “authentication” in HMQV based on?
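To make the counterexample concrete, here is a toy numerical sketch (the tiny parameters are mine, chosen purely for illustration and not taken from the paper): a small-subgroup element passes the “not zero” check that HMQV asks of the CA, yet no corresponding private key exists.

```python
# Toy illustration of the invalid-public-key flaw described above.
# Parameters are tiny and purely illustrative; real deployments use
# ~2048-bit primes. The setting is Z_p* with a prime-order-q subgroup.

p, q, g = 23, 11, 2        # p = 2q + 1; g generates the subgroup of order q

# A small-subgroup element: p - 1 has order 2 in Z_p*.
y = p - 1

# The only check HMQV requires of the CA: the key is not zero.
assert y != 0              # passes, so the CA would certify y

# Full subgroup-membership validation (which HMQV omits) rejects y:
assert pow(y, q, p) != 1   # y is NOT in the order-q subgroup

# Consequently no private key x with g^x = y (mod p) exists at all:
assert all(pow(g, x, p) != y for x in range(q))
```

Since y lies outside the subgroup generated by g, the “private key” for it is not merely unknown to everyone: it does not exist, which is the distinction the attack turns on.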
The HMQV author acknowledges this attack, but states that it has no bad effects. Although I disagree, I will leave this for the reader to decide.
Updates:
Thank you for pointing us to this paper. However, I must confess I do not understand why you call this an attack. Can you explain what security goal you believe has been violated? At first glance, it looks to me like the HMQV authors have a valid point.
In your “attack”, Bob can create an insecure public key that allows anyone in the world to pretend to be Bob. I do not immediately see why this is a security violation. How is this different from a situation where Bob creates an ordinary private-public key pair, and then publishes his private key? As far as I can see, it’s not different in any important way. Bob only harms himself by such actions.
It seems to me the key here is to recognize that signatures should be used to assign blame, not to assign original authorship. A reasonable use of a signature is to have a protocol where, if Alice receives a signed message from Bob (i.e., one that verifies as valid using Bob’s public key), then Alice is entitled to act in reliance upon that message as though it was authorized by Bob, and Bob is obligated by that message. If Bob shares his private key with Carol, and Carol sends a message to Alice signed using Bob’s key, then Alice is entitled to act as though it was authorized by Bob (because, after all, it was authorized by Bob: by giving his private key to Carol, he was authorizing Carol to speak on his behalf), and Bob must abide by the terms of that message.
An unreasonable protocol is one that specifies that if Alice receives a message signed by Bob, she should conclude that Bob was the original author of the message. That’s not something signatures can guarantee. Bob can always share his private key with Carol, and Carol might have been the original author. Or maybe Dave was the original author, and Bob stripped off Dave’s signature and applied his own. For a more sophisticated analysis that explains further nuances, see Martin Abadi’s paper “Security Protocols and their Properties”.
In your paper you call this property of HMQV “unsettling”, but I wonder if this reflects a misconception about the security properties provided by digital signatures rather than any real flaw in HMQV. There are accepted definitions of what it means for a digital signature scheme to be secure (namely, security against existential forgery attacks), and that definition in turn specifies the security properties that a secure digital signature scheme provides. As a protocol designer, it is reasonable to expect and assume that digital signatures provide every property implied by that formal definition, but you should not expect or assume anything more than that.
As for non-repudiation, non-repudiation is an infamously tricky subject to think about, which makes me wonder whether your example scenario may be more indicative of counter-intuitive facets of non-repudiation than of any flaw in HMQV. For instance, I would question whether any digital signature algorithm provides non-repudiation, in the sense that an ordinary person on the street would understand the word. Bob can always leak his private key, sign something he wants to later repudiate, then go before the judge, claim that his private key was compromised, and present evidence that his private key has appeared on a file-sharing network (or whatever). I’m not sure what would make the “attack” on HMQV in your paper any more valid than my “attack”, and my “attack” applies to every digital signature algorithm under the sun. So it seems unfair to me to criticize HMQV for a “weakness” that, as far as I can tell, is shared by every digital signature scheme in existence.
Do you think that I have overlooked an important point? This is my first exposure to this issue, so I’m trying to reason from first principles. If I missed something, I would welcome an explanation on this blog. Can you explain to me your response to these criticisms?
Actually, I’m a bit surprised that the paper does not anticipate and address these criticisms head-on. I admire your statement that you will leave it up to the reader to decide, but it seems to me the paper places the average reader who is trying to form an informed opinion at a disadvantage, by failing to mention or grapple with what seem to me to be some relevant counter-arguments.
I’m writing anonymously so that I feel comfortable speaking freely, but I have no association with any author of HMQV or of this paper. (Nor am I Martin Abadi.)
AnonCorrespondent,
Wow, you wrote twice the length of my article! Let me address your points.
> Can you explain what security goal do you believe has been violated?
The basic definition of authentication in this kind of authenticated key exchange protocol is based on whether the subject knows the private key. You can formalize this in whatever theoretical model you like, but the essence of the definition doesn’t change.
Given that authentication succeeds in HMQV without the private key even existing, can you tell me how you would define “authentication” in HMQV?
> How is this different from a situation where Bob creates an ordinary private-public key pair, and then publishes his private key?
Good question. This is what most people ask as a first response. I had a paragraph explaining this but had to remove it due to the page limit. If Bob registers a valid public key and later discloses the private key, this should be seen as an implementation blunder. But it won’t contradict any of the security properties of the protocol, since the authentication is based on knowing the private key.
For the HMQV case, there is an important difference: the private key doesn’t exist in the first place, yet the authentication is successful. This shows a protocol design error.
Now, some people may point the finger at the CA as it shouldn’t certify an invalid key, but the CA only does what is specified by the HMQV protocol.
> It seems to me the key here is to recognize that signatures should be used to assign blame, not to assign original authorship.
I’m afraid you misunderstood the point. This has nothing to do with signatures or the misuse of a private key. And the rest of your comments on the non-repudiation of digital signatures are completely irrelevant. The real issue is whether the HMQV protocol provides assurance of “authentication”.
@ Feng Hao,
“For the HMQV case, there is an importance difference: the private key doesn’t exist in the first place, but the authentication is successful. This shows a protocol design error.”
Yes it does, and quite a serious one.
It could be argued that this cannot “currently” be exploited, but that does not allow for the inevitable “feature creep”.
As a general rule of thumb any protocol that does not correctly identify all states is not going to be a “secure protocol”.
This is for a number of reasons, the logic of which tends to be,
A, Long,
B, Assumptive proofs.
(By assumptive proofs I mean that the proof has assumptions as axioms that may only be valid in a limited context.)
Thus it is fairly likely to be error prone.
However you can with a little thought reason that,
Primarily any protocol with an unknown, unrecognised or indeterminate state is wide open to leaking information via side channels.
The issue then becomes how you stop the side channels being of use to an attacker, which is an area not much covered in academic circles currently (after all, who talks about “efficiency -v- security”?).
As I keep saying “side channels are currently the only realistic attack on real systems, where algorithm choice has been reasonable”.
(I’m sure others say similar in different ways 😉
And without a doubt the hardest problem to spot, and the most expensive to fix, is one involving fundamental “protocol errors”, as they are the “ground” on which the “foundations” of any system sit.
@feng
It seems to me that this is actually a natural property of the standard security definitions for multiparty protocols. In these protocols, we consider a number of honest players, who behave as prescribed by the protocol, and an adversary. The adversary is responsible for delivering all of the messages: honest players only react to stimuli provided by the adversary (e.g., delivery of a message, or creation of a new session). Additionally, the adversary is permitted to corrupt players (either statically, at the beginning of the game, or dynamically, at any point during the game). A corrupted player simply hands all of his state to the adversary and retires from the game: because all communication is performed via the adversary, he can pretend to be the corrupted player should he so wish, or deviate arbitrarily from the protocol.
In this setting, then, a player who deviates from the protocol, e.g., by registering an invalid public key, must be corrupt. That is, he is being impersonated by the adversary. It is simply meaningless to require that communication from a corrupted player be `authentic’: honest players do not create forgeries by definition, and the adversary is the true originator of the corrupted player’s messages anyway, so they must be authentic according to any reasonable definition of the term.
The counterintuitive behaviour you describe is surely undesirable. But I don’t think its existence is due to any weakness in the multiparty communication model, or to an overly weak model of the adversary; if anything, it’s due to our model of the adversary being too strong. That is, because we view any deviation from the protocol as being part of a massive adversarial conspiracy, it becomes nonsensical to require the protocol to defend this shadowy conspiracy’s security properties: hence, any deviation, however minor, is punished by immediate forfeiture of all security guarantees.
(This sort of thing happens elsewhere in modern `reductionist’ cryptography. It’s very similar to the problem that a block cipher which behaves pathologically when fed its key as an input is still considered secure, since an adversary interacting with it is assumed not to know the key and therefore can’t uncover the pathological behaviour.)
Let’s put aside the formal cryptography for a bit, though. Consider the case of Alice. She’d like to communicate with Bob for some reason, and to know that the communication is secret and authentic. Unfortunately, Bob is either malicious or at least incompetent. What, therefore, can we reasonably expect of Alice’s communications with him? Not, I suspect, a great deal. If he’s malicious, then all bets are off. If he’s merely incompetent, then we may anticipate that he’s not good at choosing or protecting his keys, in which case an `authentication failure’ is a definite possibility. (Quotes, because this isn’t the kind of failure a protocol can defend against.)
As a further point, nonrepudiation is a red herring here. HMQV, like (almost?) all authenticated key-exchange protocols, doesn’t even try to provide nonrepudiation. Alice and Bob engage in the protocol: a shared symmetric key drops out and they use that for securing a communication channel. Since Alice knows this key as well as Bob, she can make up a plausible transcript of the session which is no more or less convincing than the true transcript.
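To illustrate this last point with a minimal sketch (the key and message bytes below are made-up placeholders): once the key exchange completes, both parties hold the same symmetric key, so either one can produce exactly the tag the other would have produced.

```python
# Sketch: a symmetric session key cannot provide nonrepudiation.
# The key and message below are hypothetical placeholders.
import hashlib
import hmac

k = b"shared session key output by the key exchange"

def tag(key: bytes, msg: bytes) -> bytes:
    """MAC a channel message under the shared session key."""
    return hmac.new(key, msg, hashlib.sha256).digest()

made_by_bob   = tag(k, b"Bob: I agree to the contract")  # computed by Bob
made_by_alice = tag(k, b"Bob: I agree to the contract")  # computed by Alice

# The tags are identical: a judge cannot tell which party authored it.
assert made_by_bob == made_by_alice
```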
Mark,
> In this setting, then, a player who deviates from the protocol, e.g., by registering an invalid public key, must be corrupt.
A robust cryptographic protocol is all about explicitness. If an invalid public key is not allowed, the protocol specification must clearly state so (which MQV does). HMQV is very ambiguous on this. Of course, in my opinion, the reason that HMQV doesn’t want to be explicit on this is that it will otherwise lose all the claimed efficiency advantages to MQV.
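For comparison, here is a minimal sketch of the explicit public-key validation that MQV-style specifications mandate (the toy parameters and function name are mine, for illustration only): the exponentiation on the last line is the per-key cost that HMQV avoids by checking only that the key is nonzero.

```python
# Sketch of explicit public-key validation (parameters illustrative).
def validate_public_key(y: int, p: int, q: int) -> bool:
    """Accept y only if it lies in the order-q subgroup of Z_p*."""
    if not 1 < y < p:            # range check excludes 0 and the identity
        return False
    return pow(y, q, p) == 1     # subgroup membership: one full modexp

p, q = 23, 11                    # toy group with p = 2q + 1
assert validate_public_key(8, p, q)          # 8 = 2^3 lies in the subgroup
assert not validate_public_key(p - 1, p, q)  # small-subgroup element rejected
assert not validate_public_key(0, p, q)      # the zero "key" is also rejected
```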
> As a further point, nonrepudiation is a red herring here. HMQV, like (almost?) all authenticated key-exchange protocols, doesn’t even try to provide nonrepudiation.
As I said before, it’s nothing to do with “non-repudiation” (in the usual meaning). The real issue is about “authentication”. The fundamental goal of an authenticated key exchange protocol is to provide “authentication” for the key exchange. But it’s worth noting that despite the over 20 pages of formal security definitions and proofs in the HMQV paper, the paper doesn’t give a clear definition of what “authentication” is. What the attack challenges is exactly the lack of such a definition in the HMQV paper. I hope this makes sense.
@feng hao
I thought the paper was quite clear (section 2):
That paragraph is quite technical (even though it doesn’t sound it) and it’s worth taking it very slowly. It tells us that `each party owns a long-term pair of private and public keys’. If Bob doesn’t have a private key, then he can’t be `a party’. The only entity active in the model other than the parties is the adversary. If he doesn’t have a private key, he must be (an aspect of) the adversary.
It tells us that `a corrupted party can choose … to register any public key of its choice’. Below, we read that `from the moment a party is corrupted all its actions may be controlled by the attacker.’ Again, registering a bogus key is an adversarial act which isn’t performed by honest parties.
And now we get to the important bit. A party cannot meaningfully expect any security properties whatever when communicating with an adversary. Security properties apply only when honest parties are communicating among themselves. This isn’t spelt out anywhere, but it’s readily apparent when you think about the model properly. If you doubt this, try to describe what it means to receive an authentic message from the adversary.
If you’re going to argue that this is all a bit counterintuitive, or even somewhat unsettling, then I’ll agree with you — but only up to a point, because despite the sharp edges you encounter if you’re not careful, this approach to cryptography works very well when you do use it properly. Unfortunately that takes time and effort — but so does the traditional kind of crypto, and that gave us all kinds of excitingly broken protocols.
> And now we get to the important bit. A party cannot meaningfully expect any security properties whatever when communicating with an adversary.
This is not fair to the end user, who relies on the cryptographic protocol to deliver the promised security properties, e.g., authentication. A secure protocol should cover all states and detect any misbehaviour as early as possible.
It’s not fair, but that’s just the way life is. The promised security properties obtain for communication in the presence of an adversary; but it’s nonsensical to expect them to hold for communication directly with the adversary. Indeed, you’ve neglected to explain what that even means; I suggested you might try to define this because I don’t think it’s a coherent concept.
Can you tell me whether the CA is at least partly liable? It knows the party is “corrupted” with an invalid key, but still certifies it. The CA is supposed to be trusted by all!
You may also like to read the NAXOS paper, which is “provably secure” in the extended CK model. The NAXOS protocol does the same as HMQV: it only requires the CA to check that the submitted public key is not zero. But here is the trick: it mandates that the end user must validate the certified public key before any key exchange. And here is another trick: the cost of that operation is not counted as part of the NAXOS protocol (nor as a CA operation)…
I came across a paper on the IACR eprint by Qiang Tang, titled “A Reflection on the Security of Two-Party Key Establishment Protocols” (see IACR link here). The paper claims to present some “attacks” on YAK and HMQV (and in fact on all other authenticated key exchange protocols), and proposes a formal model to address all the problems. I have the following comments:
1. The first attack claims to invalidate the “forward secrecy” of YAK. Clearly, the author misunderstood what this term means. The attack he describes trivially breaks any key exchange protocol. I have made it very clear in the paper that “forward secrecy” only makes sense when the past session was *uncorrupted* (more details can be found in the full paper, see p. 7).
2. The second attack claims to be an “unknown key sharing attack” on YAK, HMQV and almost all other key exchange protocols. The commonly understood and accepted meaning of “unknown key sharing attack” is the one defined by Diffie et al. nearly 20 years ago – the attack is essentially a confusion of entity identities. In this case, the author changed this definition and mixed it up with application-level identities used within the entity. What an authenticated key exchange protocol can do (and should do) is provide a secure channel between the two entities. How the entity manages multiple channels falls under the realm of session key management. As with any technology, there is a technical boundary to what a protocol can achieve.
3. The third attack claims to be a “key compromise impersonation attack” on the HMQV protocol. I don’t know what the HMQV author would say about this, but the attack looks obviously invalid to me. An attacker as powerful as the one the author describes can trivially break any protocol. The author doesn’t seem to understand the “extreme adversary principle”.
4. On p. 7, the author claims that the “invalid public key attack” is applicable to all protocols. This is wrong. The attack only applies to HMQV, since HMQV explicitly specifies that the CA doesn’t need to validate the public key.
There are other mistakes in the paper.
Wouldn’t it be nice if the author had asked people in the relevant field for a courtesy review of his manuscript before publishing it on eprint?
ps. I have also sent an email to the author of the paper. He may post comments, if any, here.