Message-ID: <84cf04edf8f3f3b87b78383a1837aff3@suse.de>
Date: Mon, 15 Apr 2019 11:46:04 +0200
From: Roman Penyaev <rpenyaev@...e.de>
To: Bart Van Assche <bvanassche@....org>
Cc: Bob Liu <bob.liu@...cle.com>, linux-block@...r.kernel.org,
shirley.ma@...cle.com, martin.petersen@...cle.com,
Roman Pen <roman.penyaev@...fitbricks.com>,
Akinobu Mita <akinobu.mita@...il.com>,
Tejun Heo <tj@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org,
linux-block-owner@...r.kernel.org
Subject: Re: [RESEND PATCH] blk-mq: fix hang caused by freeze/unfreeze sequence

On 2019-04-13 05:42, Bart Van Assche wrote:
> On 4/9/19 2:08 AM, Bob Liu wrote:
>>  void blk_freeze_queue_start(struct request_queue *q)
>>  {
>> -        int freeze_depth;
>> -
>> -        freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
>> -        if (freeze_depth == 1) {
>> +        mutex_lock(&q->mq_freeze_lock);
>> +        if (++q->mq_freeze_depth == 1) {
>>                  percpu_ref_kill(&q->q_usage_counter);
>> +                mutex_unlock(&q->mq_freeze_lock);
>>                  if (queue_is_mq(q))
>>                          blk_mq_run_hw_queues(q, false);
>> +        } else {
>> +                mutex_unlock(&q->mq_freeze_lock);
>>          }
>>  }
> Have you considered to move the mutex_unlock() call to the end of the
> function such that there is only one mutex_unlock() call instead of two?
> In case you would be worried about holding the mutex around the code that
> runs the queue, how about changing the blk_mq_run_hw_queues() call such
> that the queues are run async?

Hi Bart,

The only purpose of 'mq_freeze_lock' is to avoid a race between the
mq_freeze_depth variable and the subsequent use of the q_usage_counter
percpu ref. I admit that my original comment is quite unclear, but the
locked section should be as short as possible, so returning to your
question: it is better to have two unlock calls than to expand the locked
critical section.
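
Just to illustrate how I read your suggestion (a rough, untested sketch;
the async flag value is my assumption):

void blk_freeze_queue_start(struct request_queue *q)
{
        mutex_lock(&q->mq_freeze_lock);
        if (++q->mq_freeze_depth == 1) {
                percpu_ref_kill(&q->q_usage_counter);
                if (queue_is_mq(q))
                        /* async, so the queues are not run under the mutex */
                        blk_mq_run_hw_queues(q, true);
        }
        mutex_unlock(&q->mq_freeze_lock);
}

That indeed leaves a single unlock call, but then the mutex is held across
percpu_ref_kill() and the queue run, which is exactly the expansion of the
critical section I would like to avoid.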

Unfortunately I do not have the hardware to play with the issue again, but
I see a nice candidate for a quick reproduction: null_blk queues with
shared tags. With several queues sharing tags and a script that powers
different null devices on and off (I mean the 'power' entry in the
null_blk configfs) from different CPUs, it should be quite possible to
trigger the issue. Random short mdelay() calls in the right places can
help to increase the probability of hitting the issue quite fast.
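
For example, something along these lines in the unpatched
blk_freeze_queue_start() should widen the race window (just a debug hack;
the exact delay and placement are my guesses):

void blk_freeze_queue_start(struct request_queue *q)
{
        int freeze_depth;

        freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
        /* debug hack: let a concurrent unfreeze slip in between the
         * depth update and the percpu ref operation */
        mdelay(10);
        if (freeze_depth == 1) {
                percpu_ref_kill(&q->q_usage_counter);
                if (queue_is_mq(q))
                        blk_mq_run_hw_queues(q, false);
        }
}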

But Bob, what is the backtrace of the issue you hit? What is the device?
The conditions to reproduce the issue are quite specific, and frankly I
did not find any "naked" (without any locks) callers of the
blk_mq_freeze/unfreeze sequence; the only candidate I found seems to be
null_blk (not 100% sure, but worth a try).

--
Roman