Message-ID: <lsq.1489146382.839532905@decadent.org.uk>
Date: Fri, 10 Mar 2017 11:46:22 +0000
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org,
"Bart Van Assche" <bart.vanassche@...disk.com>,
"Mike Snitzer" <snitzer@...hat.com>
Subject: [PATCH 3.16 033/370] dm rq: fix a race condition in rq_completed()
3.16.42-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bart.vanassche@...disk.com>
commit d15bb3a6467e102e60d954aadda5fb19ce6fd8ec upstream.
The queue lock must be held when calling blk_run_queue_async() to
avoid triggering a race between blk_run_queue_async() and
blk_cleanup_queue().
Signed-off-by: Bart Van Assche <bart.vanassche@...disk.com>
Signed-off-by: Mike Snitzer <snitzer@...hat.com>
[bwh: Backported to 3.16: adjust filename]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
drivers/md/dm.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -868,6 +868,9 @@ static void end_clone_bio(struct bio *cl
  */
 static void rq_completed(struct mapped_device *md, int rw, int run_queue)
 {
+	struct request_queue *q = md->queue;
+	unsigned long flags;
+
 	atomic_dec(&md->pending[rw]);
 
 	/* nudge anyone waiting on suspend queue */
@@ -880,8 +883,11 @@ static void rq_completed(struct mapped_d
 	 * back into ->request_fn() could deadlock attempting to grab the
 	 * queue lock again.
 	 */
-	if (run_queue)
-		blk_run_queue_async(md->queue);
+	if (run_queue) {
+		spin_lock_irqsave(q->queue_lock, flags);
+		blk_run_queue_async(q);
+		spin_unlock_irqrestore(q->queue_lock, flags);
+	}
 
 	/*
 	 * dm_put() must be at the end of this function. See the comment above
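For readers not applying the patch directly, the following is a minimal
sketch of the locking pattern the change introduces, assuming a 3.16-era
legacy (non-blk-mq) request_queue whose ->queue_lock is the lock
blk_cleanup_queue() synchronizes against. The helper name
rq_completed_run_queue() is hypothetical and the snippet is not buildable
outside the kernel tree; it only illustrates the ordering.

	/*
	 * Hypothetical helper: run the queue only while holding the queue
	 * lock, so blk_cleanup_queue() cannot tear the queue down between
	 * the run_queue check and the asynchronous queue run.
	 */
	static void rq_completed_run_queue(struct request_queue *q, int run_queue)
	{
		unsigned long flags;

		if (run_queue) {
			spin_lock_irqsave(q->queue_lock, flags);
			blk_run_queue_async(q);
			spin_unlock_irqrestore(q->queue_lock, flags);
		}
	}

spin_lock_irqsave() is used rather than spin_lock() because rq_completed()
can be reached from completion context with interrupts disabled.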