lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Thu, 22 Dec 2016 16:59:27 +0800
From:   Herbert Xu <herbert@...dor.apana.org.au>
To:     Binoy Jayan <binoy.jayan@...aro.org>
Cc:     Milan Broz <gmazyland@...il.com>, Oded <oded.golombek@....com>,
        Ofir <Ofir.Drang@....com>, Arnd Bergmann <arnd@...db.de>,
        Mark Brown <broonie@...nel.org>,
        Alasdair Kergon <agk@...hat.com>,
        "David S. Miller" <davem@...emloft.net>, private-kwg@...aro.org,
        dm-devel@...hat.com, linux-crypto@...r.kernel.org,
        Rajendra <rnayak@...eaurora.org>,
        Linux kernel mailing list <linux-kernel@...r.kernel.org>,
        linux-raid@...r.kernel.org, Shaohua Li <shli@...nel.org>,
        Mike Snitzer <snitzer@...hat.com>
Subject: Re: dm-crypt optimization

On Thu, Dec 22, 2016 at 01:55:59PM +0530, Binoy Jayan wrote:
>
> > Support of bigger block sizes would be unsafe without additional mechanism that provides
> > atomic writes of multiple sectors. Maybe it applies to 4k as well on some devices though...)
> 
> Did you mean write to the crypto output buffers or the actual disk write?
> I didn't quite understand how the block size for encryption affects atomic
> writes as it is the block layer which handles them. As far as dm-crypt is,
> concerned it just encrypts/decrypts a 'struct bio' instance and submits the IO
> operation to the block layer.

I think Milan's talking about increasing the real block size, which
would obviously require the hardware to be able to write that out
atomically, as otherwise it breaks the crypto.

But if we can instead do the IV generation within the crypto API,
then the block size won't be an issue at all.  Because you can
supply as many blocks as you want and they would be processed
block-by-block.

Now there is a disadvantage to this approach, and that is you
have to wait for the whole thing to be encrypted before you can 
start doing the IO.  I'm not sure how big a problem that is but
if it is bad enough to affect performance, we can look into adding
some form of partial completion to the crypto API.

Cheers,
-- 
Email: Herbert Xu <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ