Message-ID: <Y+H3eyrHFvv0tl50@kadam>
Date: Tue, 7 Feb 2023 10:02:19 +0300
From: Dan Carpenter <error27@...il.com>
To: Luis Chamberlain <mcgrof@...nel.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Jens Axboe <axboe@...nel.dk>,
Julia Lawall <julia.lawall@...ia.fr>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Hongchen Zhang <zhanghongchen@...ngson.cn>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>,
"Christian Brauner (Microsoft)" <brauner@...nel.org>,
David Howells <dhowells@...hat.com>,
Mauro Carvalho Chehab <mchehab@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
"Fabio M. De Francesco" <fmdefrancesco@...il.com>,
Christophe JAILLET <christophe.jaillet@...adoo.fr>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
maobibo <maobibo@...ngson.cn>,
Matthew Wilcox <willy@...radead.org>,
Sedat Dilek <sedat.dilek@...il.com>
Subject: Re: [PATCH v4] pipe: use __pipe_{lock,unlock} instead of spinlock
On Mon, Feb 06, 2023 at 09:54:47AM -0800, Luis Chamberlain wrote:
> > block/blk-mq.c:4083 blk_mq_destroy_queue() warn: sleeping in atomic context
>
> Let's see as an example.
>
> blk_mq_exit_hctx() can spin_lock() and so could disable preemption but I
> can't see why this is sleeping in atomic context.
>
I should have said, the lines are from linux-next.
block/blk-mq.c
4078 void blk_mq_destroy_queue(struct request_queue *q)
4079 {
4080 WARN_ON_ONCE(!queue_is_mq(q));
4081 WARN_ON_ONCE(blk_queue_registered(q));
4082
4083 might_sleep();
^^^^^^^^^^^^^^
This is a weird example because today's cross-function DB doesn't say
which function disables preemption.  The output from `smdb.py preempt
blk_mq_destroy_queue` says:
nvme_remove_admin_tag_set()
nvme_remove_io_tag_set()
-> blk_mq_destroy_queue()
I would have assumed that nothing is disabling preemption and the
information just hasn't propagated through the call tree yet.  However,
yesterday's DB has enough information to show why the warning is
generated.
nvme_fc_match_disconn_ls() takes spin_lock_irqsave(&rport->lock, flags);
-> nvme_fc_ctrl_put(ctrl);
-> kref_put(&ctrl->ref, nvme_fc_ctrl_free);
-> nvme_remove_admin_tag_set(&ctrl->ctrl);
-> blk_mq_destroy_queue(ctrl->admin_q);
-> blk_mq_destroy_queue() <-- sleeps
It's the link between kref_put() and nvme_fc_ctrl_free() where the data
gets lost in today's DB. kref_put() is tricky to handle. I'm just
puzzled why it worked yesterday.
regards,
dan carpenter