Message-ID: <125709.1607100601@warthog.procyon.org.uk>
Date: Fri, 04 Dec 2020 16:50:01 +0000
From: David Howells <dhowells@...hat.com>
To: Bruce Fields <bfields@...ldses.org>
Cc: dhowells@...hat.com, Chuck Lever <chuck.lever@...cle.com>,
CIFS <linux-cifs@...r.kernel.org>,
Linux NFS Mailing List <linux-nfs@...r.kernel.org>,
Herbert Xu <herbert@...dor.apana.org.au>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Trond Myklebust <trond.myklebust@...merspace.com>,
linux-crypto@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-afs@...ts.infradead.org
Subject: Re: Why the auxiliary cipher in gss_krb5_crypto.c?
Bruce Fields <bfields@...ldses.org> wrote:
> OK, I guess I don't understand the question. I haven't thought about
> this code in at least a decade. What's an auxiliary cipher? Is this a
> question about why we're implementing something, or how we're
> implementing it?
That's what the Linux sunrpc implementation calls them:
	struct crypto_sync_skcipher *acceptor_enc;
	struct crypto_sync_skcipher *initiator_enc;
	struct crypto_sync_skcipher *acceptor_enc_aux;
	struct crypto_sync_skcipher *initiator_enc_aux;
Auxiliary ciphers aren't mentioned in rfc396{1,2} so it appears to be
something peculiar to that implementation.
So acceptor_enc and acceptor_enc_aux, for instance, are both based on the same
key, and the implementation seems to pass the IV from one to the other. The
only difference is that the 'aux' cipher lacks the CTS wrapping - which only
makes a difference for the final two blocks[*] of the encryption (or
decryption) - and only if the data doesn't fully fill out the last block
(ie. it needs padding in some way so that the encryption algorithm can handle
it).
[*] Encryption cipher blocks, that is.
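To make the "final two blocks" point concrete: in the RFC 3962 style of CTS,
everything up to the last two blocks is ordinary CBC - which is exactly what
the non-CTS aux cipher computes - and only the tail gets the encrypt/swap/
truncate treatment.  A runnable sketch of the encrypt side, using a
hypothetical XOR "block cipher" in place of AES purely to keep it
self-contained (and an 8-byte block instead of AES's 16):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BS 8	/* toy block size; the real AES enctypes use 16 */

/* Hypothetical toy "block cipher": XOR with the key.  A stand-in for AES
 * so the sketch runs anywhere; the block structure is what matters here. */
static void toy_encrypt(const uint8_t *key, const uint8_t *in, uint8_t *out)
{
	for (int i = 0; i < BS; i++)
		out[i] = in[i] ^ key[i];
}

/* Plain CBC over full blocks - all the "aux" cipher does.  The IV is
 * updated in place so the caller can chain further calls. */
static void cbc_encrypt(const uint8_t *key, uint8_t *iv,
			const uint8_t *in, uint8_t *out, size_t nblocks)
{
	uint8_t tmp[BS];

	for (size_t b = 0; b < nblocks; b++) {
		for (int i = 0; i < BS; i++)
			tmp[i] = in[b * BS + i] ^ iv[i];
		toy_encrypt(key, tmp, out + b * BS);
		memcpy(iv, out + b * BS, BS);
	}
}

/* CBC-CTS in the RFC 3962 style: plain CBC up to the last two blocks,
 * then encrypt, swap and truncate for the tail.  len must exceed BS. */
static void cts_encrypt(const uint8_t *key, uint8_t *iv,
			const uint8_t *in, uint8_t *out, size_t len)
{
	size_t tail = len % BS ? len % BS : BS;	/* bytes in final block */
	size_t lead = (len - tail) / BS - 1;	/* blocks before the last two */
	uint8_t enl[BS], last[BS] = {0}, tmp[BS];

	cbc_encrypt(key, iv, in, out, lead);	/* identical to the aux path */

	/* Second-to-last block: a normal CBC step. */
	for (int i = 0; i < BS; i++)
		tmp[i] = in[lead * BS + i] ^ iv[i];
	toy_encrypt(key, tmp, enl);

	/* Zero-pad the final partial block, chain it to enl, then emit the
	 * two results swapped, truncating the second to the tail length. */
	memcpy(last, in + lead * BS + BS, tail);
	for (int i = 0; i < BS; i++)
		tmp[i] = last[i] ^ enl[i];
	toy_encrypt(key, tmp, out + lead * BS);
	memcpy(out + (lead + 1) * BS, enl, tail);
}
```

Note how the leading blocks of the CTS output are byte-for-byte what plain
CBC would produce with the same key and IV; only the last two blocks diverge.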
So I think its purpose is twofold:
(1) It's a way to be a bit more efficient, cutting out the CTS layer's
indirection and additional buffering.
(2) crypto_skcipher_encrypt() assumes that it's doing the entire crypto
operation in one go and will always impose the final CTS bit, so you
can't call it repeatedly to progress through a buffer (as
xdr_process_buf() would like to do) as that would corrupt the data being
encrypted - unless you made sure that the data was always block-size
aligned (in which case, there's no point using CTS).
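The aux cipher sidesteps that: a plain CBC pass composes across calls as long
as the IV is carried forward, so full-block chunks can be fed through one at
a time and only the tail needs the CTS treatment.  A minimal demonstration of
that chaining property, again with a toy XOR cipher standing in for AES (an
assumption made purely to keep the sketch self-contained and runnable):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BS 8	/* toy block size; the real AES enctypes use 16 */

/* One CBC pass with a toy XOR "cipher" in place of AES:
 * C[b] = P[b] ^ C[b-1] ^ K.  The IV is updated in place, so
 * successive calls over consecutive chunks chain together. */
static void cbc_pass(const uint8_t *key, uint8_t *iv,
		     const uint8_t *in, uint8_t *out, size_t nblocks)
{
	for (size_t b = 0; b < nblocks; b++) {
		for (int i = 0; i < BS; i++)
			out[b * BS + i] = in[b * BS + i] ^ iv[i] ^ key[i];
		memcpy(iv, out + b * BS, BS);
	}
}
```

Encrypting four blocks in one call and in a 1-block-then-3-block split (with
the IV carried across) yields identical ciphertext - whereas a CTS cipher
would fold its final-block swap into every call, corrupting the middle of
the buffer.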
I wonder how much going through three layers of crypto modules costs. Looking
at how AES can be implemented using, say, Intel AES instructions, it looks like
AES+CBC should be easy to do in a single module. I wonder if we could have
optimised kerberos crypto that does the AES and the SHA together in a single
loop.
David