Message-Id: <1448966899-3399-3-git-send-email-paolo.valente@unimore.it>
Date: Tue, 1 Dec 2015 11:48:18 +0100
From: Paolo Valente <paolo.valente@...more.it>
To: Jens Axboe <axboe@...com>,
Matias Bjørling <m@...rling.me>,
Arianna Avanzini <avanzini@...gle.com>
Cc: Paolo Valente <paolo.valente@...more.it>,
Akinobu Mita <akinobu.mita@...il.com>,
"Luis R. Rodriguez" <mcgrof@...e.com>,
Ming Lei <ming.lei@...onical.com>,
Mike Krinkin <krinkin.m.u@...il.com>,
linux-kernel@...r.kernel.org
Subject: [PATCH BUGFIX V2 2/3] null_blk: guarantee device restart in all irq modes
From: Arianna Avanzini <avanzini@...gle.com>
In single-queue (block layer) mode, the function null_rq_prep_fn stops
the device if alloc_cmd fails. Once stopped, the device must be
restarted on the next command completion, so that the request(s) for
which alloc_cmd failed can be requeued; otherwise the device hangs.
Unfortunately, device restart is currently performed only for delayed
completions, i.e., in irqmode==2. This causes hangs, for the above
reason, with the other irqmodes in combination with the single-queue
block layer.
This commit addresses the issue by making sure that, if stopped, the
device is properly restarted on completion for all irqmodes.
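The restart-on-completion logic described above can be illustrated with a
minimal user-space sketch. Here a pthread mutex stands in for q->queue_lock,
and the mock_queue type with its stopped/has_mq_ops fields is purely
hypothetical, standing in for blk_queue_stopped(q) and q->mq_ops; this is an
illustration of the check the patch moves into end_cmd(), not kernel API.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct request_queue. */
struct mock_queue {
	pthread_mutex_t lock;	/* stands in for q->queue_lock */
	bool stopped;		/* stands in for blk_queue_stopped(q) */
	bool has_mq_ops;	/* stands in for q->mq_ops != NULL */
};

/*
 * Mirrors the check this patch performs in end_cmd(): since a tag is
 * being freed, a stopped single-queue (!mq_ops) device is restarted.
 * The stopped flag is re-checked under the lock, as in the patch.
 */
static void mock_end_cmd(struct mock_queue *q)
{
	if (q && !q->has_mq_ops) {
		pthread_mutex_lock(&q->lock);
		if (q->stopped)
			q->stopped = false;	/* blk_start_queue(q) */
		pthread_mutex_unlock(&q->lock);
	}
	/* free_cmd(cmd) would run here in the driver */
}
```

A stopped single-queue device is restarted on completion, while a
multiqueue device (mq_ops set) is left alone, matching the !q->mq_ops
guard in the patch.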
Signed-off-by: Paolo Valente <paolo.valente@...more.it>
Signed-off-by: Arianna Avanzini <avanzini@...gle.com>
---
Changes V1->V2
- reinstated mq_ops check
drivers/block/null_blk.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index 08932f5..cf65619 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -217,6 +217,8 @@ static struct nullb_cmd *alloc_cmd(struct nullb_queue *nq, int can_wait)
static void end_cmd(struct nullb_cmd *cmd)
{
+ struct request_queue *q = NULL;
+
switch (queue_mode) {
case NULL_Q_MQ:
blk_mq_end_request(cmd->rq, 0);
@@ -227,27 +229,28 @@ static void end_cmd(struct nullb_cmd *cmd)
break;
case NULL_Q_BIO:
bio_endio(cmd->bio);
- break;
+ goto free_cmd;
}
- free_cmd(cmd);
-}
-
-static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
-{
- struct nullb_cmd *cmd = container_of(timer, struct nullb_cmd, timer);
- struct request_queue *q = NULL;
-
if (cmd->rq)
q = cmd->rq->q;
+ /* Restart queue if needed, as we are freeing a tag */
if (q && !q->mq_ops && blk_queue_stopped(q)) {
- spin_lock(q->queue_lock);
+ unsigned long flags;
+
+ spin_lock_irqsave(q->queue_lock, flags);
if (blk_queue_stopped(q))
blk_start_queue(q);
- spin_unlock(q->queue_lock);
+ spin_unlock_irqrestore(q->queue_lock, flags);
}
- end_cmd(cmd);
+free_cmd:
+ free_cmd(cmd);
+}
+
+static enum hrtimer_restart null_cmd_timer_expired(struct hrtimer *timer)
+{
+ end_cmd(container_of(timer, struct nullb_cmd, timer));
return HRTIMER_NORESTART;
}
--
1.9.1