Message-ID: <CAOtvUMfhethgZ8TarKGDumL1W5fgjwy1Hi7f88mmCCD6vJ_hLg@mail.gmail.com>
Date: Wed, 18 Jan 2017 17:21:47 +0200
From: Gilad Ben-Yossef <gilad@...yossef.com>
To: Binoy Jayan <binoy.jayan@...aro.org>
Cc: Oded <oded.golombek@....com>, Ofir <Ofir.Drang@....com>,
Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>,
linux-crypto@...r.kernel.org, Mark Brown <broonie@...nel.org>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org,
Alasdair Kergon <agk@...hat.com>,
Mike Snitzer <snitzer@...hat.com>, dm-devel@...hat.com,
Shaohua Li <shli@...nel.org>, linux-raid@...r.kernel.org,
Rajendra <rnayak@...eaurora.org>,
Milan Broz <gmazyland@...il.com>
Subject: Re: [RFC PATCH v3] crypto: Add IV generation algorithms
Hi Binoy,
On Wed, Jan 18, 2017 at 11:40 AM, Binoy Jayan <binoy.jayan@...aro.org> wrote:
> Currently, the IV generation algorithms are implemented in dm-crypt.c.
> The goal is to move these algorithms from the dm layer to the kernel
> crypto layer by implementing them as template ciphers, so they can be
> implemented in hardware for performance. As part of this patchset, the
> IV-generation code is moved from the dm layer to the crypto layer, and
> the dm layer is adapted to send a whole 'bio' (as defined in the block
> layer) at a time. Each bio contains an in-memory representation of
> physically contiguous disk blocks. The dm layer sets up a chained
> scatterlist of these blocks, split into physically contiguous segments
> in memory, so that DMA can be performed. The key management code is also
> moved from the dm layer to the crypto layer, since the key selection for
> encrypting neighboring sectors depends on the keycount.
>
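As an aside, for anyone who hasn't followed the earlier rounds: as far as
I understand, "template ciphers" here means the IV modes get registered
with the crypto API via crypto_register_template(), so that an instance
can later be instantiated on top of the underlying cipher and eventually
be backed by hardware. A bare-bones skeleton of what such a registration
looks like (purely illustrative - none of these names come from the patch):

#include <linux/module.h>
#include <crypto/algapi.h>

static int exampleiv_create(struct crypto_template *tmpl, struct rtattr **tb)
{
        /* A real implementation builds an skcipher instance that wraps
         * the underlying cipher and generates the IVs; stubbed out here. */
        return -ENOSYS;
}

static struct crypto_template exampleiv_tmpl = {
        .name   = "exampleiv",          /* hypothetical template name */
        .create = exampleiv_create,
        .module = THIS_MODULE,
};

static int __init exampleiv_init(void)
{
        return crypto_register_template(&exampleiv_tmpl);
}

static void __exit exampleiv_exit(void)
{
        crypto_unregister_template(&exampleiv_tmpl);
}

module_init(exampleiv_init);
module_exit(exampleiv_exit);
MODULE_LICENSE("GPL");

Binoy, please shout if I'm misrepresenting the approach.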
> Synchronous crypto requests to encrypt/decrypt a sector are processed
> sequentially. Asynchronous requests, if processed in parallel, are freed
> in the async callback. The dm layer allocates space for the IV; hardware
> implementations can choose to use this space to generate their IVs
> sequentially, or to allocate their own.
> Interface to the crypto layer: include/crypto/geniv.h
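Just to confirm my reading of the request life cycle described above: a
request that goes async is expected to be released from the completion
callback, roughly along these lines. This is purely illustrative code,
none of it is from the patch, and I'm assuming the request itself is
passed as the callback data:

#include <crypto/skcipher.h>

static void example_async_done(struct crypto_async_request *base, int err)
{
        struct skcipher_request *req = base->data;

        if (err == -EINPROGRESS)
                return;         /* backlogged; the real completion follows */

        /* record err / drop the per-bio pending count, then release */
        skcipher_request_free(req);
}

static int example_submit(struct skcipher_request *req)
{
        int err;

        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                      example_async_done, req);
        err = crypto_skcipher_encrypt(req);
        if (err == -EINPROGRESS || err == -EBUSY)
                return 0;       /* example_async_done() will free it */

        skcipher_request_free(req);     /* synchronous completion or error */
        return err;
}

If that's not the intended ownership model, please correct me.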
>
> Signed-off-by: Binoy Jayan <binoy.jayan@...aro.org>
> ---
I have some review comments and a bug report -
<snip>
> */
> -static int crypt_convert(struct crypt_config *cc,
> - struct convert_context *ctx)
> +
> +static int crypt_convert_bio(struct crypt_config *cc,
> + struct convert_context *ctx)
> {
> + unsigned int cryptlen, n1, n2, nents, i = 0, bytes = 0;
> + struct skcipher_request *req;
> + struct dm_crypt_request *dmreq;
> + struct geniv_req_info rinfo;
> + struct bio_vec bv_in, bv_out;
> int r;
> + u8 *iv;
>
> atomic_set(&ctx->cc_pending, 1);
> + crypt_alloc_req(cc, ctx);
> +
> + req = ctx->req;
> + dmreq = dmreq_of_req(cc, req);
> + iv = iv_of_dmreq(cc, dmreq);
>
> - while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
> + n1 = bio_segments(ctx->bio_in);
> + n2 = bio_segments(ctx->bio_in);
I'm pretty sure this needs to be:
n2 = bio_segments(ctx->bio_out);
> + nents = n1 > n2 ? n1 : n2;
> + nents = nents > MAX_SG_LIST ? MAX_SG_LIST : nents;
> + cryptlen = ctx->iter_in.bi_size;
>
> - crypt_alloc_req(cc, ctx);
> + DMDEBUG("dm-crypt:%s: segments:[in=%u, out=%u] bi_size=%u\n",
> + bio_data_dir(ctx->bio_in) == WRITE ? "write" : "read",
> + n1, n2, cryptlen);
>
<Snip>
>
> - /* There was an error while processing the request. */
> - default:
> - atomic_dec(&ctx->cc_pending);
> - return r;
> - }
> + sg_set_page(&dmreq->sg_in[i], bv_in.bv_page, bv_in.bv_len,
> + bv_in.bv_offset);
> + sg_set_page(&dmreq->sg_out[i], bv_out.bv_page, bv_out.bv_len,
> + bv_out.bv_offset);
> +
> + bio_advance_iter(ctx->bio_in, &ctx->iter_in, bv_in.bv_len);
> + bio_advance_iter(ctx->bio_out, &ctx->iter_out, bv_out.bv_len);
> +
> + bytes += bv_in.bv_len;
> + i++;
> }
>
> - return 0;
> + DMDEBUG("dm-crypt: Processed %u of %u bytes\n", bytes, cryptlen);
> +
> + rinfo.is_write = bio_data_dir(ctx->bio_in) == WRITE;
Please consider wrapping the boolean expression above in parentheses.
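i.e. something like:

        rinfo.is_write = (bio_data_dir(ctx->bio_in) == WRITE);

so the precedence is obvious at a glance.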
> + rinfo.iv_sector = ctx->cc_sector;
> + rinfo.nents = nents;
> + rinfo.iv = iv;
> +
> + skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
Also, where do the scatterlists src2 and dst2 that you populate with
sg_set_page() get sg_init_table() called on them?
I couldn't figure it out...
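If it is in fact missing, I would have expected something along the
lines of the below right before the mapping loop (the placement is my
guess, and I'm reusing dmreq->sg_in/sg_out from the hunk above):

        /* Guessed placement: initialize the tables (end markers etc.)
         * before sg_set_page() populates them. */
        sg_init_table(dmreq->sg_in, nents);
        sg_init_table(dmreq->sg_out, nents);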
Last but not least, when performing the following sequence on arm64
(on the latest QEMU virt platform) -
1. cryptsetup luksFormat fs3.img
2. cryptsetup open --type luks fs3.img croot
3. mke2fs /dev/mapper/croot
[ fs3.img is a 16MB file for loopback ]
the attached kernel panic occurs. It does not happen without the patch.
Let me know if you need any additional information to recreate it.
I've tried to debug it a little, but did not come up with anything
useful beyond the review notes above.
Thanks!
--
Gilad Ben-Yossef
Chief Coffee Drinker
"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
-- Jean-Baptiste Queru
[Attachment: panic.txt (text/plain, 8333 bytes)]