Message-ID: <20200915033024.GB25789@gondor.apana.org.au>
Date: Tue, 15 Sep 2020 13:30:24 +1000
From: Herbert Xu <herbert@...dor.apana.org.au>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ard Biesheuvel <ardb@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Crypto Mailing List <linux-crypto@...r.kernel.org>
Subject: [PATCH] crypto: lib/chacha20poly1305 - Set SG_MITER_ATOMIC unconditionally
I trimmed the cc as the mailing lists appear to be blocking this
email because of it.
On Mon, Sep 14, 2020 at 03:37:49PM -0700, Linus Torvalds wrote:
>
> So it _looks_ like this code started using kmap() - probably back when
> kmap_atomic() was so cumbersome to use - and was then converted
> (conditionally) to kmap_atomic() rather than just changed whole-sale.
> Is there actually something that wants to use those sg_miter functions
> and sleep?
I dug up the old zinc patch submissions and this wasn't present at
all in the original. The original zinc code used blkcipher_walk
which unconditionally does kmap_atomic.
So it's only the SG miter conversion that introduced this change,
which appears to be a simple oversight (I think Ard was working on
latency issues at that time; perhaps he was worried about keeping
preemption off unnecessarily).
---8<---
There is no reason for the chacha20poly1305 SG miter code to use
kmap instead of kmap_atomic as the critical section doesn't sleep
anyway. So we can simply get rid of the preemptible check and
set SG_MITER_ATOMIC unconditionally.
Even if we need to re-enable preemption to lower latency, we should
be doing that by interrupting the SG miter walk rather than by using
kmap (sketched after the diff below).
Reported-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
diff --git a/lib/crypto/chacha20poly1305.c b/lib/crypto/chacha20poly1305.c
index 431e04280332..5850f3b87359 100644
--- a/lib/crypto/chacha20poly1305.c
+++ b/lib/crypto/chacha20poly1305.c
@@ -251,9 +251,7 @@ bool chacha20poly1305_crypt_sg_inplace(struct scatterlist *src,
 			poly1305_update(&poly1305_state, pad0, 0x10 - (ad_len & 0xf));
 	}
 
-	flags = SG_MITER_TO_SG;
-	if (!preemptible())
-		flags |= SG_MITER_ATOMIC;
+	flags = SG_MITER_TO_SG | SG_MITER_ATOMIC;
 
 	sg_miter_start(&miter, src, sg_nents(src), flags);
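Not part of the patch, but for the record, "interrupting the SG miter
walk" could look roughly like the sketch below.  The helper name and
the cond_resched() placement are illustrative assumptions, not
existing code:

	#include <linux/sched.h>
	#include <linux/scatterlist.h>

	/* Hypothetical illustration of pausing the walk so we can
	 * reschedule between chunks. */
	static void walk_sg_with_resched(struct scatterlist *src)
	{
		struct sg_mapping_iter miter;

		sg_miter_start(&miter, src, sg_nents(src),
			       SG_MITER_TO_SG | SG_MITER_ATOMIC);
		while (sg_miter_next(&miter)) {
			/* process miter.addr / miter.length under the
			 * atomic mapping */

			/*
			 * Drop the kmap_atomic() mapping before
			 * rescheduling; the next sg_miter_next() call
			 * resumes where we left off.
			 */
			sg_miter_stop(&miter);
			cond_resched();
		}
		sg_miter_stop(&miter);
	}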
--
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt