Date:   Fri, 23 Dec 2022 20:52:16 +0800
From:   Kemeng Shi <shikemeng@...weicloud.com>
To:     axboe@...nel.dk, dwagner@...e.de, hare@...e.de,
        ming.lei@...hat.com, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Cc:     hch@....de, john.garry@...wei.com, shikemeng@...weicloud.com
Subject: [PATCH 06/13] blk-mq: remove unnecessary error count and flush in blk_mq_plug_issue_direct

blk_mq_plug_issue_direct tries to send a list of requests which may belong
to different hctxs. Normally, we send a flush when the hctx changes, as
there may be no more requests for the same hctx. Besides, we send a flush
along with the last request in the list by setting the last parameter of
blk_mq_request_issue_directly.

An extra flush is needed in two cases:
1. We stop sending in the middle of the list, so the normal flush sent
after the last request of the current hctx is missed.
2. An error happens when sending the last request, so the normal flush may
be lost.

In blk_mq_plug_issue_direct, we only break out of the list walk if we get
a BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE error, and we already send an
extra flush for that case.
Currently we count errors and send an extra flush if the error count is
non-zero after sending all requests in the list. This covers case 2
described above, but there are two things to improve:
1. If the last request is sent successfully, an error on a request in the
middle of the list still triggers an unnecessary flush.
2. We only need the status of the last request rather than an error count,
and the status of the last request can simply be taken from ret.

Cover case 2 above by simply checking the ret of the last request, and
remove the unnecessary error count and flush to improve
blk_mq_plug_issue_direct.
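
For illustration, after this change the issue loop conceptually reduces to
the shape below. This is a simplified sketch of the resulting logic, not
the exact kernel code; the resource/error handling inside the loop is
abbreviated to comments and only calls visible in this patch are used:

	blk_status_t ret;

	while ((rq = rq_list_pop(&plug->mq_list))) {
		bool last = rq_list_empty(plug->mq_list);

		if (hctx != rq->mq_hctx) {
			if (hctx)	/* flush what was queued for the previous hctx */
				blk_mq_commit_rqs(hctx, &queued, from_schedule);
			hctx = rq->mq_hctx;
		}

		ret = blk_mq_request_issue_directly(rq, last);
		/* BLK_STS_RESOURCE/BLK_STS_DEV_RESOURCE: flush and return, as before */
		/* other errors: blk_mq_end_request(rq, ret) and keep walking the list */
	}

	/* only the status of the last issued request matters here */
	if (ret != BLK_STS_OK)
		blk_mq_commit_rqs(hctx, &queued, from_schedule);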

Signed-off-by: Kemeng Shi <shikemeng@...weicloud.com>
---
 block/blk-mq.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a447a7586032..01f48a73eacd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2686,11 +2686,10 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 	struct blk_mq_hw_ctx *hctx = NULL;
 	struct request *rq;
 	int queued = 0;
-	int errors = 0;
+	blk_status_t ret;
 
 	while ((rq = rq_list_pop(&plug->mq_list))) {
 		bool last = rq_list_empty(plug->mq_list);
-		blk_status_t ret;
 
 		if (hctx != rq->mq_hctx) {
 			if (hctx)
@@ -2710,7 +2709,6 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 			return;
 		default:
 			blk_mq_end_request(rq, ret);
-			errors++;
 			break;
 		}
 	}
@@ -2719,7 +2717,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug, bool from_schedule)
 	 * If we didn't flush the entire list, we could have told the driver
 	 * there was more coming, but that turned out to be a lie.
 	 */
-	if (errors)
+	if (ret != BLK_STS_OK)
 		blk_mq_commit_rqs(hctx, &queued, from_schedule);
 }
 
-- 
2.30.0
