Sharing tickets between multiple instances of stud
Hi!
My previous pull request was about sharing sessions between multiple instances of stud using memcached. In that case, TLS tickets were disabled because they cannot be shared between multiple instances (keys are randomly generated when the SSL context is initialized).
With this patch, the keys used to protect and encrypt TLS tickets are generated by signing a user-provided seed with the private key. It does not really depend on the previous commits, but it is a bit useless standalone (because many clients do not support tickets). Moreover, it alters the memcached patch to allow tickets when this feature is enabled.
Since the memcached patch is a bit rusty, there is no hurry to merge this pull request. I am just making it available to gather comments on this kind of feature. I did not find such a feature in any web server. It is an exclusive for stud! ;-)
Interesting stuff... I'll have to read up on all this.
There is more information about tickets here: http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
OK, I have rebased this branch on top of Emeric's branch (pull request 50). You should only consider the last 3 commits (mine). The first commit uses the shared secret (20 bytes) to build the keys used to protect tickets (48 bytes, but only 32 should be considered secret since the first 16 are the key name). I consider building the secret this way a bit weak.
The second commit replaces the use of SHA1 by SHA384. We get a 48-byte secret that we can use to protect tickets. However, this may hinder the performance of session sharing between multiple hosts (but I think this does not really matter: SHA384 is a lot faster than public key operations).
The third commit is more invasive: it computes the shared secret with SHA384 but only uses the first 20 bytes, with SHA1, to protect session sharing between nodes. This should remove the performance hit introduced by the previous commit.
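For illustration, here is a minimal sketch (not the actual patch) of the derivation described in these commits: the user-provided seed is signed with the private key and the signature is hashed with SHA384, giving 48 bytes split into a 16-byte key name and a 32-byte secret. It assumes a deterministic signature scheme such as RSA PKCS#1 (so every instance derives the same bytes) and a recent OpenSSL EVP API; derive_ticket_keys and the buffer sizes are hypothetical.

#include <openssl/evp.h>
#include <string.h>

/* Hypothetical sketch: derive ticket key material from a user-provided
 * seed and the certificate's private key. The seed is signed, the
 * signature is hashed with SHA384, and the resulting 48 bytes are split
 * into a 16-byte key name and a 32-byte secret. */
static int derive_ticket_keys(EVP_PKEY *pkey,
                              const unsigned char *seed, size_t seed_len,
                              unsigned char name[16], unsigned char secret[32])
{
    unsigned char sig[1024], material[48];  /* fits RSA keys up to 8192 bits */
    size_t sig_len = sizeof(sig);
    unsigned int md_len = 0;
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    int ok = 0;

    if (mctx != NULL
        && EVP_DigestSignInit(mctx, NULL, EVP_sha384(), NULL, pkey) == 1
        && EVP_DigestSignUpdate(mctx, seed, seed_len) == 1
        && EVP_DigestSignFinal(mctx, sig, &sig_len) == 1
        && EVP_Digest(sig, sig_len, material, &md_len, EVP_sha384(), NULL) == 1
        && md_len == 48) {
        memcpy(name, material, 16);        /* public key name */
        memcpy(secret, material + 16, 32); /* secret part     */
        ok = 1;
    }
    EVP_MD_CTX_free(mctx);
    return ok;
}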
Good for me!
Rebased on top of emeric/UpdateSHP (cc @emericBr)
OK, the new method is worse than the old one. The problem with sharing tickets this way is that we break forward secrecy. If the user enables a DHE or ECDHE cipher suite thinking it provides forward secrecy, we break it by encrypting tickets with something directly derived from the private key. When I was using a user-provided seed, this was better: the private key alone does not allow decrypting tickets, but the seed may be just as insecure (it appears on the command line, may be stolen with the key, may be too short). Therefore, this is not the right method either.
Since Emeric introduced a protocol to exchange sessions, I think we can extend it a bit to share keys. Each node will generate, at regular intervals, a new key that it will broadcast to the other nodes. A key contains a key name to ensure that we know which key was used to protect a ticket. Moreover, a node should wait 1-5 seconds before using a newly generated key (to ensure other nodes know it). Each node will use its own key to protect a ticket but will know the keys from other nodes. A key will expire after some timer. Therefore, the protocol can be very simple: we just need to broadcast keys.
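To make the idea concrete, here is a hypothetical record for such a broadcast key (field names and sizes are mine, matching the 16-byte key name plus 32-byte secret used elsewhere in this thread):

#include <time.h>

/* One broadcast ticket key: the name identifies which key protected a
 * ticket, the secret halves drive encryption and authentication, and the
 * timestamps implement the "wait 1-5 s before use" and expiration rules. */
struct ticket_key {
    unsigned char name[16];      /* key name carried in every ticket */
    unsigned char aes_key[16];   /* encrypts the ticket body         */
    unsigned char hmac_key[16];  /* authenticates the ticket         */
    time_t        not_before;    /* start using it only after this   */
    time_t        expire_at;     /* forget the key after this        */
};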
But when a process receives a new request, how does it choose the right key? In addition, the OpenSSL ctx of each sub-process is not in shared memory, so if you receive updated keys or recreate a key, you need to store the keys in shared memory, and every sub-process will need to lock the shared memory on each request to check whether a new key is available (there is a cost to do that). I don't see any callback in OpenSSL to call user space when a new connection using a ticket appears.
The first 16 bytes are the name of the key. The next 16 are an IV, the remainder is the ticket, while the last 32 bytes are a MAC construct. This is not well documented, but we can define a custom callback to handle tickets:
/* Callback to support customisation of ticket key setting */
int (*tlsext_ticket_key_cb)(SSL *ssl,
                            unsigned char *name, unsigned char *iv,
                            EVP_CIPHER_CTX *ectx,
                            HMAC_CTX *hctx, int enc);
enc is 1 when we need to create the ticket (name, iv, ectx and hctx are output variables) and 0 when we need to check it (name, iv, ectx and hctx are input variables).
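Something along these lines could work, reusing the hypothetical ticket_key record sketched above; current_key() and lookup_key() stand for lookups in the shared key list, and AES-128-CBC with HMAC-SHA256 matches the 16-byte IV and 32-byte MAC described earlier. This is only a sketch of how the callback could be used, not the final implementation.

#include <openssl/ssl.h>
#include <openssl/rand.h>
#include <openssl/hmac.h>
#include <string.h>

/* struct ticket_key is the record sketched earlier in this thread;
 * these helpers over the shared key list are hypothetical. */
extern struct ticket_key *current_key(void);
extern struct ticket_key *lookup_key(const unsigned char name[16]);

static int ticket_key_cb(SSL *ssl, unsigned char *name, unsigned char *iv,
                         EVP_CIPHER_CTX *ectx, HMAC_CTX *hctx, int enc)
{
    struct ticket_key *k;

    if (enc) {                                /* building a new ticket   */
        k = current_key();                    /* this node's usable key  */
        if (k == NULL || RAND_bytes(iv, 16) <= 0)
            return -1;
        memcpy(name, k->name, 16);
        EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, k->aes_key, iv);
        HMAC_Init_ex(hctx, k->hmac_key, 16, EVP_sha256(), NULL);
        return 1;
    } else {                                  /* resuming from a ticket  */
        k = lookup_key(name);                 /* local key or a peer's   */
        if (k == NULL)
            return 0;                         /* unknown: full handshake */
        HMAC_Init_ex(hctx, k->hmac_key, 16, EVP_sha256(), NULL);
        EVP_DecryptInit_ex(ectx, EVP_aes_128_cbc(), NULL, k->aes_key, iv);
        return 1;
    }
}

/* Registered once on the context:
 * SSL_CTX_set_tlsext_ticket_key_cb(ctx, ticket_key_cb); */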
About shared memory, I hadn't thought of it. Writes will be pretty rare. We could replicate the ticket keys in all processes, or use some RCU mechanism.
That is what we need. I think the best way is to generate a new key every N calls of the ticket creation callback.
Do you know if tickets have a limited validity time?
Per RFC 5077, there is a lifetime for each ticket (enforced on the server side). It is encoded in the first 4 octets. In OpenSSL, this is stored in tlsext_tick_lifetime_hint in the session. I haven't found any code that checks its value.
I think it would be easier for the user if we allowed configuring how long a key is valid. For example, a user could ask to generate a new key every hour. We keep 5 keys. This means that with n peers, we need to keep 5*n keys in memory and a ticket is valid for 5 hours. Memory-bound and time-bound.
If we go for a new key every N calls of the creation callback, we need to ensure that a ticket stays valid for 5 hours (for example), and therefore we could have to store a lot of keys. There is no universal value: a busy server could handle 3 million handshakes per hour and a quiet one only a thousand. That seems more difficult to configure.
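A rough sketch of the time-bound variant, with hypothetical names (new_random_key() would also trigger the broadcast to peers, and ticket_key is the record sketched earlier):

#include <string.h>
#include <time.h>

#define KEYS_KEPT 5   /* a ticket stays usable for KEYS_KEPT * interval */

/* Hypothetical generator for a fresh named key. */
extern struct ticket_key new_random_key(void);

struct key_ring {
    struct ticket_key keys[KEYS_KEPT];  /* newest key first */
    time_t            last_rotation;
};

static void maybe_rotate(struct key_ring *ring, time_t rotation_interval)
{
    time_t now = time(NULL);

    if (now - ring->last_rotation < rotation_interval)
        return;
    /* drop the oldest key, insert a fresh one, then broadcast it */
    memmove(&ring->keys[1], &ring->keys[0],
            (KEYS_KEPT - 1) * sizeof(ring->keys[0]));
    ring->keys[0] = new_random_key();
    ring->last_rotation = now;
}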
Vincent and Emeric, I'm not pulling this in until it seems like you guys have consensus on the right approach. (but I am monitoring this thread)
We have a consensus. The problem is that we need to implement it. ;-)
What's the status of this PR?
The SSL_CTX_set_tlsext_ticket_key_cb callback is pretty well documented now, after they took my document out of their patch graveyard system: https://github.com/openssl/openssl/blob/master/doc/ssl/SSL_CTX_set_tlsext_ticket_key_cb.pod