Date:   Fri, 15 May 2020 00:41:27 -0700
From:   Christoph Hellwig <>
To:     Jens Axboe <>
Cc:     Eric Biggers <>,
        Satya Tangirala <>,
        Barani Muthukumaran <>,
        Kuohong Wang <>,
        Kim Boojin <>
Subject: Re: [PATCH v13 00/12] Inline Encryption Support

On Thu, May 14, 2020 at 09:48:40AM -0600, Jens Axboe wrote:
> I have applied 1-5 for 5.8. Small tweak needed in patch 3 due to a header
> inclusion, but clean apart from that.

I looked at this a bit more as it clashed with my outstanding
q_usage_counter optimization, and I think we should move the
blk_crypto_bio_prep call into blk-mq, similar to what we do for
the bio_integrity_prep call.  Comments?

From b7a78be7de0f39ef972d6a2f97a3982a422bf3ab Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <>
Date: Fri, 15 May 2020 09:32:40 +0200
Subject: block: move blk_crypto_bio_prep into blk_mq_make_request

Currently blk_crypto_bio_prep is called for every block driver, including
stacking drivers, which is probably not the right thing to do.  Instead
move it to blk_mq_make_request, similar to how we handle integrity data.
If we ever grow a low-level make_request based driver that wants
encryption it will have to call blk_crypto_bio_prep manually, but I really
hope we don't grow more non-stacking make_request drivers to start with.

This also means we only do the crypto preparation after splitting and
bouncing the bio, so we don't needlessly allocate the fallback crypto
context for a bio that then gets split or bounced anyway.

Signed-off-by: Christoph Hellwig <>
 block/blk-core.c | 13 +++++--------
 block/blk-mq.c   |  2 ++
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1e97f99735232..ac59afaa26960 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1131,12 +1131,10 @@ blk_qc_t generic_make_request(struct bio *bio)
 			/* Create a fresh bio_list for all subordinate requests */
 			bio_list_on_stack[1] = bio_list_on_stack[0];
-			if (blk_crypto_bio_prep(&bio)) {
-				if (q->make_request_fn)
-					ret = q->make_request_fn(q, bio);
-				else
-					ret = blk_mq_make_request(q, bio);
-			}
+			if (q->make_request_fn)
+				ret = q->make_request_fn(q, bio);
+			else
+				ret = blk_mq_make_request(q, bio);
@@ -1185,8 +1183,7 @@ blk_qc_t direct_make_request(struct bio *bio)
 		return BLK_QC_T_NONE;
 	if (unlikely(bio_queue_enter(bio)))
 		return BLK_QC_T_NONE;
-	if (blk_crypto_bio_prep(&bio))
-		ret = blk_mq_make_request(q, bio);
+	ret = blk_mq_make_request(q, bio);
 	return ret;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d2962863e629f..0b5a0fa0d124b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2033,6 +2033,8 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 	blk_queue_bounce(q, &bio);
 	__blk_queue_split(q, &bio, &nr_segs);
+	if (!blk_crypto_bio_prep(&bio))
+		return BLK_QC_T_NONE;
 	if (!bio_integrity_prep(bio))
 		return BLK_QC_T_NONE;
