Message-ID: <20220614071410.3571204-1-yukuai3@huawei.com>
Date:   Tue, 14 Jun 2022 15:14:10 +0800
From:   Yu Kuai <yukuai3@...wei.com>
To:     <axboe@...nel.dk>, <ming.lei@...hat.com>, <djeffery@...hat.com>,
        <bvanassche@....org>
CC:     <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <yukuai3@...wei.com>, <yi.zhang@...wei.com>
Subject: [PATCH -next] blk-mq: fix boot time regression for scsi drives with multiple hctx

We found that boot time increased by about 8s after upgrading the kernel
from v4.19 to v5.10 (megaraid-sas is used in the environment).

Following is where the extra time is spent:

scsi_probe_and_add_lun
 __scsi_remove_device
  blk_cleanup_queue
   blk_mq_exit_queue
    blk_mq_exit_hw_queues
     blk_mq_exit_hctx
      blk_mq_clear_flush_rq_mapping -> function latency is 0.1ms
       cmpxchg

There are three reasons:
1) megaraid-sas uses multiple hctxs in v5.10, thus blk_mq_exit_hctx()
is called many more times in v5.10 than in v4.19.
2) scsi scans each target, thus __scsi_remove_device() is called
many times.
3) blk_mq_clear_flush_rq_mapping() was introduced after v4.19; it calls
cmpxchg() for each request, and its function latency is about 0.1ms.

Since blk_mq_clear_flush_rq_mapping() is only called while the queue
is already frozen, which means there are no inflight requests, it's
safe to set 'tags->rqs[]' to NULL directly instead of using cmpxchg().
Tests show that with this change, the function latency of
blk_mq_clear_flush_rq_mapping() is about 1us, and boot time is no
longer increased.

Fixes: 364b61818f65 ("blk-mq: clearing flush request reference in tags->rqs[]")
Signed-off-by: Yu Kuai <yukuai3@...wei.com>
---
 block/blk-mq-tag.c | 2 +-
 block/blk-mq.c     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 2dcd738c6952..d002eefcdaf5 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -253,7 +253,7 @@ static struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
 	unsigned long flags;
 
 	spin_lock_irqsave(&tags->lock, flags);
-	rq = tags->rqs[bitnr];
+	rq = READ_ONCE(tags->rqs[bitnr]);
 	if (!rq || rq->tag != bitnr || !req_ref_inc_not_zero(rq))
 		rq = NULL;
 	spin_unlock_irqrestore(&tags->lock, flags);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e9bf950983c7..21ace698d3be 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3421,7 +3421,7 @@ static void blk_mq_clear_flush_rq_mapping(struct blk_mq_tags *tags,
 	WARN_ON_ONCE(req_ref_read(flush_rq) != 0);
 
 	for (i = 0; i < queue_depth; i++)
-		cmpxchg(&tags->rqs[i], flush_rq, NULL);
+		WRITE_ONCE(tags->rqs[i], NULL);
 
 	/*
 	 * Wait until all pending iteration is done.
-- 
2.31.1
