Message-ID: <20200513180527.GE1243@sol.localdomain>
Date: Wed, 13 May 2020 11:05:27 -0700
From: Eric Biggers <ebiggers@...nel.org>
To: Satya Tangirala <satyat@...gle.com>
Cc: linux-block@...r.kernel.org, linux-scsi@...r.kernel.org,
linux-fscrypt@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-ext4@...r.kernel.org,
Barani Muthukumaran <bmuthuku@....qualcomm.com>,
Kuohong Wang <kuohong.wang@...iatek.com>,
Kim Boojin <boojin.kim@...sung.com>
Subject: Re: [PATCH v12 05/12] block: blk-crypto-fallback for Inline
Encryption
On Thu, Apr 30, 2020 at 11:59:52AM +0000, Satya Tangirala wrote:
> Blk-crypto delegates crypto operations to inline encryption hardware when
> available. The separately configurable blk-crypto-fallback contains a
> software fallback to the kernel crypto API - when enabled, blk-crypto
> will use this fallback for en/decryption when inline encryption hardware is
> not available. This lets upper layers not have to worry about whether or
> not the underlying device has support for inline encryption before
> deciding to specify an encryption context for a bio. It also allows for
> testing without actual inline encryption hardware - in particular, it
> makes it possible to test the inline encryption code in ext4 and f2fs
> simply by running xfstests with the inlinecrypt mount option, which in
> turn allows for things like the regular upstream regression testing of
> ext4 to cover the inline encryption code paths. For more details, refer
> to Documentation/block/inline-encryption.rst.
>
> Signed-off-by: Satya Tangirala <satyat@...gle.com>
Generally looks good, you can add:

Reviewed-by: Eric Biggers <ebiggers@...gle.com>

A few comments below for when you resend. Also, can you split the paragraph
above into multiple paragraphs? E.g.

        Blk-crypto delegates...

        This lets upper layers...

        For more details, refer to...
> +static int blk_crypto_keyslot_program(struct blk_keyslot_manager *ksm,
> +                                      const struct blk_crypto_key *key,
> +                                      unsigned int slot)
> +{
> +        struct blk_crypto_keyslot *slotp = &blk_crypto_keyslots[slot];
> +        const enum blk_crypto_mode_num crypto_mode =
> +                                        key->crypto_cfg.crypto_mode;
> +        int err;
> +
> +        if (crypto_mode != slotp->crypto_mode &&
> +            slotp->crypto_mode != BLK_ENCRYPTION_MODE_INVALID)
> +                blk_crypto_evict_keyslot(slot);
> +
> +        slotp->crypto_mode = crypto_mode;
> +        err = crypto_skcipher_setkey(slotp->tfms[crypto_mode], key->raw,
> +                                     key->size);
> +        if (err) {
> +                blk_crypto_evict_keyslot(slot);
> +                return -EIO;
> +        }
> +        return 0;
> +}
Shouldn't this just return 'err'? Is there a good reason for EIO?
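I.e., just propagate the error from crypto_skcipher_setkey(), e.g. (untested):

        if (err) {
                blk_crypto_evict_keyslot(slot);
                return err;
        }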
> +static bool blk_crypto_alloc_cipher_req(struct bio *src_bio,
> +                                        struct blk_ksm_keyslot *slot,
> +                                        struct skcipher_request **ciph_req_ret,
> +                                        struct crypto_wait *wait)
> +{
> +        struct skcipher_request *ciph_req;
> +        const struct blk_crypto_keyslot *slotp;
> +        int keyslot_idx = blk_ksm_get_slot_idx(slot);
> +
> +        slotp = &blk_crypto_keyslots[keyslot_idx];
> +        ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
> +                                          GFP_NOIO);
> +        if (!ciph_req) {
> +                src_bio->bi_status = BLK_STS_RESOURCE;
> +                return false;
> +        }
> +
> +        skcipher_request_set_callback(ciph_req,
> +                                      CRYPTO_TFM_REQ_MAY_BACKLOG |
> +                                      CRYPTO_TFM_REQ_MAY_SLEEP,
> +                                      crypto_req_done, wait);
> +        *ciph_req_ret = ciph_req;
> +
> +        return true;
> +}
I think it would be better to remove the 'src_bio' argument from here and make
the two callers set BLK_STS_RESOURCE instead. See e.g.
bio_crypt_check_alignment() which uses a similar convention.
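I.e., something like this (just a sketch, untested):

static bool blk_crypto_alloc_cipher_req(struct blk_ksm_keyslot *slot,
                                        struct skcipher_request **ciph_req_ret,
                                        struct crypto_wait *wait)
{
        ...
        ciph_req = skcipher_request_alloc(slotp->tfms[slotp->crypto_mode],
                                          GFP_NOIO);
        if (!ciph_req)
                return false;
        ...
}

... with the two callers doing something like (variable names may differ at
each call site):

        if (!blk_crypto_alloc_cipher_req(slot, &ciph_req, &wait)) {
                src_bio->bi_status = BLK_STS_RESOURCE;
                /* bail out via the existing error path */
                ...
        }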
> +/**
> + * blk_crypto_fallback_decrypt_endio - clean up bio w.r.t fallback decryption
> + *
> + * @bio: the bio to clean up.
> + *
> + * Restore bi_private and bi_end_io, and queue the bio for decryption into a
> + * workqueue, since this function will be called from an atomic context.
> + */
"clean up bio w.r.t fallback decryption" is misleading, since the main point of
this function is to queue the bio for decryption. How about:
/**
 * blk_crypto_fallback_decrypt_endio - queue bio for fallback decryption
 *
 * @bio: the bio to queue
 *
 * Restore bi_private and bi_end_io, and queue the bio for decryption into a
 * workqueue, since this function will be called from an atomic context.
 */
> +bool blk_crypto_fallback_bio_prep(struct bio **bio_ptr)
> +{
> +        struct bio *bio = *bio_ptr;
> +        struct bio_crypt_ctx *bc = bio->bi_crypt_context;
> +        struct bio_fallback_crypt_ctx *f_ctx;
> +
> +        if (!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode]) {
> +                bio->bi_status = BLK_STS_IOERR;
> +                return false;
> +        }
This can only happen if the user forgot to call blk_crypto_start_using_key().
And if someone does that, it might be hard for them to understand why they're
getting IOERR. A WARN_ON_ONCE() and a comment would help:
        if (WARN_ON_ONCE(!tfms_inited[bc->bc_key->crypto_cfg.crypto_mode])) {
                /* User didn't call blk_crypto_start_using_key() first */
                bio->bi_status = BLK_STS_IOERR;
                return false;
        }
This would be similar to how __blk_crypto_bio_prep() does
WARN_ON_ONCE(!bio_has_data(bio)) to catch another type of usage error.
> +/*
> + * Prepare blk-crypto-fallback for the specified crypto mode.
> + * Returns -ENOPKG if the needed crypto API support is missing.
> + */
> +int blk_crypto_fallback_start_using_mode(enum blk_crypto_mode_num mode_num)
> +{
> +        const char *cipher_str = blk_crypto_modes[mode_num].cipher_str;
> +        struct blk_crypto_keyslot *slotp;
> +        unsigned int i;
> +        int err = 0;
> +
> +        /*
> +         * Fast path
> +         * Ensure that updates to blk_crypto_keyslots[i].tfms[mode_num]
> +         * for each i are visible before we try to access them.
> +         */
> +        if (likely(smp_load_acquire(&tfms_inited[mode_num])))
> +                return 0;
> +
> +        mutex_lock(&tfms_init_lock);
> +        err = blk_crypto_fallback_init();
> +        if (err)
> +                goto out;
> +
> +        if (tfms_inited[mode_num])
> +                goto out;
It would make more sense to check tfms_inited[mode_num] immediately after
acquiring the mutex, given that it's checked before.
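I.e., something like (untested):

        mutex_lock(&tfms_init_lock);
        if (tfms_inited[mode_num])
                goto out;

        err = blk_crypto_fallback_init();
        if (err)
                goto out;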
- Eric