Message-Id: <71c31759a882e00f156a8434caed7064ec93d3da.1559208134.git.asml.silence@gmail.com>
Date:   Thu, 30 May 2019 12:27:08 +0300
From:   "Pavel Begunkov (Silence)" <asml.silence@...il.com>
To:     Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Cc:     osandov@...com, ming.lei@...hat.com, Hou Tao <houtao1@...wei.com>,
        Pavel Begunkov <asml.silence@...il.com>
Subject: [PATCH v2 1/1] blk-mq: Fix disabled hybrid polling

From: Pavel Begunkov <asml.silence@...il.com>

Commit 4bc6339a583cec650b05 ("block: move blk_stat_add() to
__blk_mq_end_request()") moved blk_stat_add() to reuse ktime_get_ns(),
so it is now called after blk_update_request(), which zeroes
rq->__data_len. Without the length, blk_stat_add() can't calculate the
stat bucket and returns an error, effectively disabling hybrid polling.

v2: Hybrid polling needs pure I/O time for precision, but according to
feedback from Omar Sandoval, other components require end-to-end time.
So the timestamp can't be reused and should be sampled twice instead.

Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
---
 block/blk-mq.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 32b8ad3d341b..907799282d57 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -537,11 +537,6 @@ inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
 	if (blk_mq_need_time_stamp(rq))
 		now = ktime_get_ns();
 
-	if (rq->rq_flags & RQF_STATS) {
-		blk_mq_poll_stats_start(rq->q);
-		blk_stat_add(rq, now);
-	}
-
 	if (rq->internal_tag != -1)
 		blk_mq_sched_completed_request(rq, now);
 
@@ -580,6 +575,11 @@ static void __blk_mq_complete_request(struct request *rq)
 	int cpu;
 
 	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+
+	if (rq->rq_flags & RQF_STATS) {
+		blk_mq_poll_stats_start(rq->q);
+		blk_stat_add(rq, ktime_get_ns());
+	}
 	/*
 	 * Most of single queue controllers, there is only one irq vector
 	 * for handling IO completion, and the only irq's affinity is set
-- 
2.21.0
