Message-ID: <1550246888.31902.97.camel@acm.org>
Date: Fri, 15 Feb 2019 08:08:08 -0800
From: Bart Van Assche <bvanassche@....org>
To: Ming Lei <ming.lei@...hat.com>
Cc: peterz@...radead.org, mingo@...hat.com, will.deacon@....com,
tj@...nel.org, longman@...hat.com, johannes.berg@...el.com,
linux-kernel@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
Theodore Ts'o <tytso@....edu>
Subject: Re: [PATCH v7 21/23] block: Avoid that flushing triggers a lockdep
complaint
On Fri, 2019-02-15 at 10:26 +0800, Ming Lei wrote:
> There might be lots of blk_flush_queue instances, since one is allocated
> for each hctx, so lots of class key slots may be wasted.
>
> So I suggest using a single nvme_loop_flush_lock_key for this particular
> issue, something like the following patch:
>
> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
> index 4aac1b4a8112..ec4248c12ed9 100644
> --- a/drivers/nvme/target/loop.c
> +++ b/drivers/nvme/target/loop.c
> @@ -524,7 +524,9 @@ static const struct nvme_ctrl_ops nvme_loop_ctrl_ops = {
>
> static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
> {
> - int ret;
> + static struct lock_class_key nvme_loop_flush_lock_key;
> + int ret, i;
> + struct blk_mq_hw_ctx *hctx;
>
> ret = nvme_loop_init_io_queues(ctrl);
> if (ret)
> @@ -553,6 +555,10 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
> goto out_free_tagset;
> }
>
> + queue_for_each_hw_ctx(ctrl->ctrl.connect_q, hctx, i)
> + lockdep_set_class(&hctx->fq->mq_flush_lock,
> + &nvme_loop_flush_lock_key);
> +
> ret = nvme_loop_connect_io_queues(ctrl);
> if (ret)
> goto out_cleanup_connect_q;
Hi Ming,

Thanks for your feedback.

Are you aware that sizeof(struct lock_class_key) is zero if lockdep is
disabled? Does this alleviate your concern?
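For reference, the definition is roughly the following (paraphrased from
include/linux/lockdep.h; the exact layout varies between kernel versions,
so treat this as a sketch rather than verbatim kernel source):

    #ifdef CONFIG_LOCKDEP
    struct lock_class_key {
            struct lockdep_subclass_key subkeys[MAX_LOCKDEP_SUBCLASSES];
    };
    #else
    /* Empty struct: sizeof(struct lock_class_key) == 0 in GNU C, so a
     * static key per driver (or per flush queue) costs no memory on
     * production kernels with lockdep disabled.
     */
    struct lock_class_key { };
    #endif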
I'm not enthusiastic about your patch. I don't think that block layer users
should touch the lock class key of the flush queue. That's a key that should
be set by the block layer core.
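To make that concrete, here is a rough, untested sketch of what setting the
key in the block layer core could look like, using the dynamic key API
(lockdep_register_key() / lockdep_unregister_key()) and assuming a new
"struct lock_class_key key;" member in struct blk_flush_queue:

    /* In blk_alloc_flush_queue(): give each flush queue its own
     * dynamically registered lock class, so lockdep no longer lumps
     * all mq_flush_lock instances into a single class.
     */
    spin_lock_init(&fq->mq_flush_lock);
    lockdep_register_key(&fq->key);
    lockdep_set_class(&fq->mq_flush_lock, &fq->key);

    /* ... with the matching teardown in blk_free_flush_queue(): */
    lockdep_unregister_key(&fq->key);

That way drivers like nvme-loop never have to know about the flush queue's
lock class at all.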
Anyway, let's drop patches 20/23 and 21/23 from this series and continue
this discussion on the linux-block mailing list once agreement has been
reached about the "dynamic lockdep key" approach.
Bart.