Message-ID: <1449681794.22260.58.camel@schen9-desk2.jf.intel.com>
Date: Wed, 09 Dec 2015 09:23:14 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: "H. Peter Anvin" <hpa@...or.com>,
"David S.Miller" <davem@...emloft.net>,
Stephan Mueller <smueller@...onox.de>,
Chandramouli Narayanan <mouli_7982@...oo.com>,
Vinodh Gopal <vinodh.gopal@...el.com>,
James Guilford <james.guilford@...el.com>,
Wajdi Feghali <wajdi.k.feghali@...el.com>,
Jussi Kivilinna <jussi.kivilinna@....fi>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 5/5] crypto: AES CBC multi-buffer glue code
On Wed, 2015-12-09 at 10:52 +0800, Herbert Xu wrote:
> On Wed, Dec 02, 2015 at 12:02:45PM -0800, Tim Chen wrote:
> >
> > +/*
> > + * CRYPTO_ALG_ASYNC flag is passed to indicate we have an ablk
> > + * scatter-gather walk.
> > + */
> > +
> > +static struct crypto_alg aes_cbc_mb_alg = {
> > +        .cra_name         = "__cbc-aes-aesni-mb",
> > +        .cra_driver_name  = "__driver-cbc-aes-aesni-mb",
> > +        .cra_priority     = 100,
> > +        .cra_flags        = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_ASYNC
> > +                            | CRYPTO_ALG_INTERNAL,
> > +        .cra_blocksize    = AES_BLOCK_SIZE,
> > +        .cra_ctxsize      = sizeof(struct crypto_aes_ctx) +
> > +                            AESNI_ALIGN - 1,
> > +        .cra_alignmask    = 0,
> > +        .cra_type         = &crypto_blkcipher_type,
> > +        .cra_module       = THIS_MODULE,
> > +        .cra_list         = LIST_HEAD_INIT(aes_cbc_mb_alg.cra_list),
> > +        .cra_u = {
> > +                .blkcipher = {
> > +                        .min_keysize = AES_MIN_KEY_SIZE,
> > +                        .max_keysize = AES_MAX_KEY_SIZE,
> > +                        .ivsize      = AES_BLOCK_SIZE,
> > +                        .setkey      = aes_set_key,
> > +                        .encrypt     = mb_aes_cbc_encrypt,
> > +                        .decrypt     = mb_aes_cbc_decrypt
> > +                },
> > +        },
> > +};
>
> So why do we still need this? Shouldn't a single ablkcipher cover
> all the cases?
>
> Thanks,

This is an internal algorithm (hence the CRYPTO_ALG_INTERNAL flag). We
are indeed casting the request to the outer ablkcipher request when we
do the async cipher walk, see:

static int mb_aes_cbc_decrypt(struct blkcipher_desc *desc,
                              struct scatterlist *dst, struct scatterlist *src,
                              unsigned int nbytes)
{
        struct crypto_aes_ctx *aesni_ctx;
        struct mcryptd_blkcipher_request_ctx *rctx =
                container_of(desc, struct mcryptd_blkcipher_request_ctx, desc);
        struct ablkcipher_request *req;
        bool is_mcryptd_req;
        unsigned long src_paddr;
        unsigned long dst_paddr;
        int err;

        /* note here whether it is mcryptd req */
        is_mcryptd_req = desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP;
        req = cast_mcryptd_ctx_to_req(rctx);
        aesni_ctx = aes_ctx(crypto_blkcipher_ctx(desc->tfm));

        ablkcipher_walk_init(&rctx->walk, dst, src, nbytes);
        err = ablkcipher_walk_phys(req, &rctx->walk);
        if (err || !rctx->walk.nbytes)
                goto done1;

        desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
        kernel_fpu_begin();
        while ((nbytes = rctx->walk.nbytes)) {
                src_paddr = (page_to_phys(rctx->walk.src.page) +
                             rctx->walk.src.offset);
                dst_paddr = (page_to_phys(rctx->walk.dst.page) +
                             rctx->walk.dst.offset);
                aesni_cbc_dec(aesni_ctx, phys_to_virt(dst_paddr),
                              phys_to_virt(src_paddr),
                              rctx->walk.nbytes & AES_BLOCK_MASK,
                              rctx->walk.iv);
                nbytes &= AES_BLOCK_SIZE - 1;
                err = ablkcipher_walk_done(req, &rctx->walk, nbytes);
                if (err)
                        goto done2;
        }
done2:
        kernel_fpu_end();
done1:
        ablkcipher_walk_complete(&rctx->walk);

        return err;
}
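
For reference, the "cast" above is just a matter of getting back to the
outer async request from the per-request context that mcryptd hands us:
the blkcipher_desc is embedded in the mcryptd request context, so
container_of() recovers that context, and the context in turn knows the
caller's ablkcipher_request. Roughly like this (a sketch with
illustrative field names, not the actual definitions in the patch):

struct mcryptd_blkcipher_request_ctx {
        struct blkcipher_desc desc;           /* what mb_aes_cbc_decrypt() receives */
        struct ablkcipher_walk walk;          /* scatter-gather walk state */
        struct ablkcipher_request *outer_req; /* caller's request (illustrative name) */
};

static inline struct ablkcipher_request *
cast_mcryptd_ctx_to_req(struct mcryptd_blkcipher_request_ctx *rctx)
{
        /* hand back the outer async request so ablkcipher_walk_phys() can use it */
        return rctx->outer_req;
}

That is why mb_aes_cbc_decrypt() can use the ablkcipher walk helpers
(ablkcipher_walk_phys() and friends) even though the inner algorithm is
registered with crypto_blkcipher_type.
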
Thanks.
Tim