Message-ID: <20200515122540.GA143740@google.com>
Date: Fri, 15 May 2020 12:25:40 +0000
From: Satya Tangirala <satyat@...gle.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jens Axboe <axboe@...nel.dk>, Eric Biggers <ebiggers@...nel.org>,
linux-block@...r.kernel.org, linux-scsi@...r.kernel.org,
linux-fscrypt@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net, linux-ext4@...r.kernel.org,
Barani Muthukumaran <bmuthuku@....qualcomm.com>,
Kuohong Wang <kuohong.wang@...iatek.com>,
Kim Boojin <boojin.kim@...sung.com>
Subject: Re: [PATCH v13 00/12] Inline Encryption Support
On Fri, May 15, 2020 at 12:41:27AM -0700, Christoph Hellwig wrote:
> On Thu, May 14, 2020 at 09:48:40AM -0600, Jens Axboe wrote:
> > I have applied 1-5 for 5.8. Small tweak needed in patch 3 due to a header
> > inclusion, but clean apart from that.
>
> I looked at this a bit more as it clashed with my outstanding
> q_usage_counter optimization, and I think we should move the
> blk_crypto_bio_prep call into blk-mq, similar to what we do for
> the bio_integrity_prep call. Comments?
>
> ---
> From b7a78be7de0f39ef972d6a2f97a3982a422bf3ab Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@....de>
> Date: Fri, 15 May 2020 09:32:40 +0200
> Subject: block: move blk_crypto_bio_prep into blk_mq_make_request
>
> Currently blk_crypto_bio_prep is called for every block driver, including
> stacking drivers, which is probably not the right thing to do. Instead
> move it to blk_mq_make_request, similar to how we handle integrity data.
> If we ever grow a low-level make_request-based driver that wants
> encryption it will have to call blk_crypto_bio_prep manually, but I really
> hope we don't grow more non-stacking make_request drivers to start with.
One of the nice things about the current design is that regardless of which
request queue an FS sends an encrypted bio to, blk-crypto will be able to handle
the encryption (whether via hardware inline encryption or via the
blk-crypto-fallback). The FS itself never needs to care which request queue the
bio ends up on.
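
To make that concrete, here's a purely illustrative sketch of the decision
blk_crypto_bio_prep makes. The toy_* types and names below are invented for
this email and are not the actual blk-crypto code, which differs in detail:

	#include <stdbool.h>

	struct toy_queue {
		bool hw_inline_crypto;	/* queue's HW can do inline encryption */
	};

	struct toy_bio {
		struct toy_queue *q;
		bool has_crypt_ctx;	/* bio carries an encryption context */
	};

	/* Returns true if the bio may proceed down the stack. */
	bool toy_crypto_bio_prep(struct toy_bio **bio_ptr)
	{
		struct toy_bio *bio = *bio_ptr;

		if (!bio->has_crypt_ctx)
			return true;	/* nothing to encrypt */

		if (bio->q->hw_inline_crypto)
			return true;	/* the hardware will encrypt */

		/*
		 * blk-crypto-fallback case: the real code encrypts the
		 * data in software and swaps *bio_ptr for a bounce bio;
		 * that part is elided in this sketch.
		 */
		return true;
	}

Either way the caller just submits the bio and never looks at which branch was
taken - that's the property I'd like to keep.
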
But if we move blk_crypto_bio_prep into blk_mq_make_request, the FS loses that
ability to not care about the underlying request queue - it can no longer send a
bio with an encryption context to a queue whose make_request_fn is not
blk_mq_make_request. To restore that ability, we'd need to add calls to
blk_crypto_bio_prep to every possible make_request_fn (although yes, if we do
decide to add the call to blk_crypto_bio_prep in multiple places, I think it'll
be fine to only add it to the non-stacking make_request_fns).
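
For illustration, each non-stacking driver would then need something like the
sketch below (my_driver_make_request is a hypothetical driver hook made up for
this email; blk_crypto_bio_prep and BLK_QC_T_NONE are used as in the patch):

	static blk_qc_t my_driver_make_request(struct request_queue *q,
					       struct bio *bio)
	{
		/* every low-level make_request_fn would now need this */
		if (!blk_crypto_bio_prep(&bio))
			return BLK_QC_T_NONE;

		/* ... driver-specific bio submission ... */
		return BLK_QC_T_NONE;
	}
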
Also, I tried to look through the patch with the q_usage_counter optimization -
is it this one?
[PATCH 4/4] block: allow blk_mq_make_request to consume the q_usage_counter reference
>
> This also means we only need to do the crypto preparation after splitting
> and bouncing the bio, which means we don't bother allocating the fallback
> context for a bio that might only be a dummy and gets split or bounced
> later.
>
> Signed-off-by: Christoph Hellwig <hch@....de>
> ---
> block/blk-core.c | 13 +++++--------
> block/blk-mq.c | 2 ++
> 2 files changed, 7 insertions(+), 8 deletions(-)
>
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 1e97f99735232..ac59afaa26960 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -1131,12 +1131,10 @@ blk_qc_t generic_make_request(struct bio *bio)
> /* Create a fresh bio_list for all subordinate requests */
> bio_list_on_stack[1] = bio_list_on_stack[0];
> bio_list_init(&bio_list_on_stack[0]);
> - if (blk_crypto_bio_prep(&bio)) {
> - if (q->make_request_fn)
> - ret = q->make_request_fn(q, bio);
> - else
> - ret = blk_mq_make_request(q, bio);
> - }
> + if (q->make_request_fn)
> + ret = q->make_request_fn(q, bio);
> + else
> + ret = blk_mq_make_request(q, bio);
>
> blk_queue_exit(q);
>
> @@ -1185,8 +1183,7 @@ blk_qc_t direct_make_request(struct bio *bio)
> return BLK_QC_T_NONE;
> if (unlikely(bio_queue_enter(bio)))
> return BLK_QC_T_NONE;
> - if (blk_crypto_bio_prep(&bio))
> - ret = blk_mq_make_request(q, bio);
> + ret = blk_mq_make_request(q, bio);
> blk_queue_exit(q);
> return ret;
> }
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index d2962863e629f..0b5a0fa0d124b 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2033,6 +2033,8 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
> blk_queue_bounce(q, &bio);
> __blk_queue_split(q, &bio, &nr_segs);
>
> + if (!blk_crypto_bio_prep(&bio))
> + return BLK_QC_T_NONE;
> if (!bio_integrity_prep(bio))
> return BLK_QC_T_NONE;
>
> --
> 2.26.2
>