Message-ID: <56B749A1.8030504@dev.mellanox.co.il>
Date: Sun, 7 Feb 2016 15:41:53 +0200
From: Sagi Grimberg <sagig@....mellanox.co.il>
To: Wenbo Wang <wenbo.wang@...blaze.com>,
Keith Busch <keith.busch@...el.com>
Cc: Jens Axboe <axboe@...com>, "Wenwei.Tao" <wenwei.tao@...blaze.com>,
Wenbo Wang <mail_weber_wang@....com>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] NVMe: do not touch sq door bell if nvmeq has been
suspended
> Keith,
>
> Is the following solution OK?
> synchronize_rcu() guarantees that no queue_rq is running concurrently with the device disable code.
> Together with your other patch (adding blk_sync_queue), both the sync and async paths should be handled correctly.
I think this is acceptable.
> Do you think synchronize_rcu shall be added to blk_sync_queue?
Or we could add it to blk_mq_stop_hw_queues(), and then scsi
would benefit from it as well.
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 4c0622f..bfe9132 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -865,7 +865,9 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
>  	if (!async) {
>  		int cpu = get_cpu();
>  		if (cpumask_test_cpu(cpu, hctx->cpumask)) {
> +			rcu_read_lock();
>  			__blk_mq_run_hw_queue(hctx);
> +			rcu_read_unlock();
I think the rcu read-side critical section is better folded into
__blk_mq_run_hw_queue() so that it covers all the call sites.
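
To illustrate, the pairing under discussion would look roughly like the sketch below. This is kernel-style C for illustration only, not the actual patch and not buildable standalone; nvme_disable_queue_sketch() and the nvmeq field it touches are hypothetical names, while rcu_read_lock()/rcu_read_unlock()/synchronize_rcu() are the real RCU primitives.

```c
/*
 * Reader side: fold the RCU read lock into __blk_mq_run_hw_queue()
 * so that every caller (sync and async) dispatches ->queue_rq()
 * inside a read-side critical section.
 */
static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
{
	rcu_read_lock();
	/* ... dispatch requests, eventually invoking ->queue_rq() ... */
	rcu_read_unlock();
}

/*
 * Writer side (hypothetical sketch of the device-disable path):
 * stop the hw queues, then wait one RCU grace period so that any
 * queue_rq that was already running has returned before the queue
 * (and its SQ doorbell mapping) is torn down.
 */
static void nvme_disable_queue_sketch(struct nvme_queue *nvmeq)
{
	blk_mq_stop_hw_queues(nvmeq->q);	/* hypothetical field name */
	synchronize_rcu();			/* drain in-flight queue_rq callers */
	/* now safe to suspend the queue; no one can ring the doorbell */
}
```

The point of folding the read lock into __blk_mq_run_hw_queue() rather than sprinkling it at each call site is exactly what is argued above: one choke point covers every dispatch path, so synchronize_rcu() in the disable path is a complete barrier against concurrent doorbell writes.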