Message-Id: <20180417155724.531278623@linuxfoundation.org>
Date: Tue, 17 Apr 2018 17:58:53 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Tejun Heo <tj@...nel.org>,
Bart Van Assche <bart.vanassche@....com>,
Hannes Reinecke <hare@...e.com>,
Ming Lei <ming.lei@...hat.com>, Christoph Hellwig <hch@....de>,
Johannes Thumshirn <jthumshirn@...e.de>,
Oleksandr Natalenko <oleksandr@...alenko.name>,
Martin Steigerwald <martin@...htvoll.de>,
Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 4.15 28/53] block: Change a rcu_read_{lock,unlock}_sched() pair into rcu_read_{lock,unlock}()

4.15-stable review patch. If anyone has any objections, please let me know.

------------------

From: Bart Van Assche <bart.vanassche@....com>

commit 818e0fa293ca836eba515615c64680ea916fd7cd upstream.

scsi_device_quiesce() uses synchronize_rcu() to guarantee that the
effect of blk_set_preempt_only() will be visible to percpu_ref_tryget()
calls that occur after the queue is unfrozen, using the approach
explained in https://lwn.net/Articles/573497/. The RCU read lock and
unlock calls in blk_queue_enter() form a pair with the synchronize_rcu()
call in scsi_device_quiesce(). Both scsi_device_quiesce() and
blk_queue_enter() must either use regular RCU or RCU-sched.
Since neither the RCU-protected code in blk_queue_enter() nor
blk_queue_usage_counter_release() sleeps, regular RCU protection
is sufficient. Note: scsi_device_quiesce() does not have to be
modified since it already uses synchronize_rcu().
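
For readers following the reasoning above, here is a minimal,
self-contained sketch of the reader/updater pairing the commit message
describes. It is illustrative only: the names example_queue,
EXAMPLE_PREEMPT_ONLY, example_enter() and example_set_preempt_only()
are made up and merely stand in for blk_queue_enter(),
blk_set_preempt_only() and the queue-flag handling in the real block
layer; the real scsi_device_quiesce() also freezes and unfreezes the
queue, which is omitted here.

#include <linux/bitops.h>
#include <linux/percpu-refcount.h>
#include <linux/rcupdate.h>

struct example_queue {
	struct percpu_ref	usage_counter;	/* stands in for q->q_usage_counter;
						 * percpu_ref_init() setup omitted */
	unsigned long		flags;		/* EXAMPLE_PREEMPT_ONLY lives here */
};

#define EXAMPLE_PREEMPT_ONLY	0

/* Reader side: the role played by blk_queue_enter() after this patch. */
static bool example_enter(struct example_queue *q, bool preempt)
{
	bool success = false;

	rcu_read_lock();
	if (percpu_ref_tryget_live(&q->usage_counter)) {
		/*
		 * The updater publishes the flag with set_bit() and then
		 * calls synchronize_rcu().  A read-side critical section
		 * that does not observe the flag must have started before
		 * that update, and the updater waits for it to finish.
		 */
		if (preempt || !test_bit(EXAMPLE_PREEMPT_ONLY, &q->flags))
			success = true;
		else
			percpu_ref_put(&q->usage_counter);
	}
	rcu_read_unlock();

	return success;
}

/* Updater side: the role played by blk_set_preempt_only() as used from
 * scsi_device_quiesce(). */
static void example_set_preempt_only(struct example_queue *q)
{
	set_bit(EXAMPLE_PREEMPT_ONLY, &q->flags);
	/*
	 * Regular synchronize_rcu() pairs with the regular
	 * rcu_read_lock()/rcu_read_unlock() above; once it returns, any
	 * reader that missed the flag has completed and every later
	 * example_enter() call observes EXAMPLE_PREEMPT_ONLY.
	 */
	synchronize_rcu();
}

The point of the patch is simply that both sides must use the same RCU
flavour: since neither the read-side code in blk_queue_enter() nor
blk_queue_usage_counter_release() sleeps, the plain
rcu_read_lock()/rcu_read_unlock() pair is enough to match the
synchronize_rcu() already used by scsi_device_quiesce().
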
Reported-by: Tejun Heo <tj@...nel.org>
Fixes: 3a0a529971ec ("block, scsi: Make SCSI quiesce and resume work reliably")
Signed-off-by: Bart Van Assche <bart.vanassche@....com>
Acked-by: Tejun Heo <tj@...nel.org>
Cc: Tejun Heo <tj@...nel.org>
Cc: Hannes Reinecke <hare@...e.com>
Cc: Ming Lei <ming.lei@...hat.com>
Cc: Christoph Hellwig <hch@....de>
Cc: Johannes Thumshirn <jthumshirn@...e.de>
Cc: Oleksandr Natalenko <oleksandr@...alenko.name>
Cc: Martin Steigerwald <martin@...htvoll.de>
Cc: stable@...r.kernel.org # v4.15
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
block/blk-core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -823,7 +823,7 @@ int blk_queue_enter(struct request_queue
 		bool success = false;
 		int ret;
 
-		rcu_read_lock_sched();
+		rcu_read_lock();
 		if (percpu_ref_tryget_live(&q->q_usage_counter)) {
 			/*
 			 * The code that sets the PREEMPT_ONLY flag is
@@ -836,7 +836,7 @@ int blk_queue_enter(struct request_queue
 				percpu_ref_put(&q->q_usage_counter);
 			}
 		}
-		rcu_read_unlock_sched();
+		rcu_read_unlock();
 
 		if (success)
 			return 0;