Message-Id: <20180823075003.380471083@linuxfoundation.org>
Date: Thu, 23 Aug 2018 09:53:34 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Omar Sandoval <osandov@...com>,
Ming Lei <ming.lei@...hat.com>, Jens Axboe <axboe@...nel.dk>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 4.17 148/324] blk-mq: don't queue more if we get a busy return
4.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Jens Axboe <axboe@...nel.dk>
[ Upstream commit 1f57f8d442f8017587eeebd8617913bfc3661d3d ]
Some devices have different queue limits depending on the type of IO. A
classic case is SATA NCQ, where some commands can queue, but others
cannot. If we have NCQ commands inflight and encounter a non-queueable
command, the driver returns busy. Currently we attempt to dispatch more
from the scheduler, if we were able to queue some commands. But for the
case where we ended up stopping due to BUSY, we should not attempt to
retrieve more from the scheduler. If we do, we can get into a situation
where we attempt to queue a non-queueable command, get BUSY, then
successfully retrieve more commands from the scheduler and queue those.
This can repeat forever, starving the non-queueable command indefinitely.

Fix this by NOT attempting to pull more commands from the scheduler if we
get a BUSY return. This should also be better in terms of letting requests
stay in the scheduler for as long as possible, if we get a BUSY due to the
regular out-of-tags condition.
Reviewed-by: Omar Sandoval <osandov@...com>
Reviewed-by: Ming Lei <ming.lei@...hat.com>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
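For context, the boolean returned by blk_mq_dispatch_rq_list() is what the
scheduler-driven dispatch path loops on. The sketch below is a hand-simplified
illustration of that caller loop, not the 4.17 source: budget and has_work
handling are omitted, and sched_next_request() is a hypothetical stand-in for
the elevator's dispatch hook.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Hypothetical helper: pull the next request from the attached I/O scheduler. */
struct request *sched_next_request(struct blk_mq_hw_ctx *hctx);

static void sched_dispatch_loop(struct blk_mq_hw_ctx *hctx)
{
        struct request_queue *q = hctx->queue;
        LIST_HEAD(rq_list);

        do {
                struct request *rq = sched_next_request(hctx);

                if (!rq)
                        break;
                list_add(&rq->queuelist, &rq_list);

                /*
                 * Before this patch, a BUSY return from the driver could
                 * still yield "true" here if some commands were queued, so
                 * the loop would pull yet another request and retry,
                 * starving the non-queueable command.  With the patch, BUSY
                 * makes blk_mq_dispatch_rq_list() return false, the loop
                 * ends, and the remaining requests stay in the scheduler.
                 */
        } while (blk_mq_dispatch_rq_list(q, &rq_list, true));
}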
block/blk-mq.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1174,6 +1174,9 @@ static bool blk_mq_mark_tag_wait(struct
 
 #define BLK_MQ_RESOURCE_DELAY	3	/* ms units */
 
+/*
+ * Returns true if we did some work AND can potentially do more.
+ */
 bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
 		bool got_budget)
 {
@@ -1304,8 +1307,17 @@ bool blk_mq_dispatch_rq_list(struct requ
 			blk_mq_run_hw_queue(hctx, true);
 		else if (needs_restart && (ret == BLK_STS_RESOURCE))
 			blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
+
+		return false;
 	}
 
+	/*
+	 * If the host/device is unable to accept more work, inform the
+	 * caller of that.
+	 */
+	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
+		return false;
+
 	return (queued + errors) != 0;
 }
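The BLK_STS_RESOURCE / BLK_STS_DEV_RESOURCE values checked above are what a
driver's .queue_rq() callback returns when the host or device cannot take more
work. Below is a minimal, hypothetical driver-side sketch; the device structure
and the non-queueable test are made up purely for illustration.

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Hypothetical per-device state; fields are illustrative only. */
struct mydev {
        unsigned int inflight_ncq;      /* queueable (NCQ-style) commands in flight */
};

/* Hypothetical helper: true if @rq cannot run alongside queueable commands. */
bool rq_is_non_queueable(struct request *rq);

static blk_status_t mydev_queue_rq(struct blk_mq_hw_ctx *hctx,
                                   const struct blk_mq_queue_data *bd)
{
        struct mydev *dev = hctx->queue->queuedata;
        struct request *rq = bd->rq;

        /*
         * A non-queueable command cannot be issued while queueable commands
         * are still in flight: report BUSY.  With the change above, blk-mq
         * then stops pulling further requests from the scheduler instead of
         * retrying around the stalled command.
         */
        if (rq_is_non_queueable(rq) && dev->inflight_ncq)
                return BLK_STS_DEV_RESOURCE;

        blk_mq_start_request(rq);
        /* ... issue the command to hardware ... */
        return BLK_STS_OK;
}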