Message-ID: <20190326235726.GC4328@localhost.localdomain>
Date: Tue, 26 Mar 2019 17:57:27 -0600
From: Keith Busch <kbusch@...nel.org>
To: "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc: Ming Lei <tom.leiming@...il.com>, Jens Axboe <axboe@...nel.dk>,
"Busch, Keith" <keith.busch@...el.com>,
James Smart <jsmart2021@...il.com>,
Bart Van Assche <bvanassche@....org>,
Josef Bacik <josef@...icpanda.com>,
linux-nvme <linux-nvme@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-block <linux-block@...r.kernel.org>,
Hannes Reinecke <hare@...e.de>,
Johannes Thumshirn <jthumshirn@...e.de>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH V2 7/8] nvme: use blk_mq_queue_tag_inflight_iter
On Mon, Mar 25, 2019 at 08:05:53PM -0700, jianchao.wang wrote:
> What if there used to be an io scheduler that left some stale requests in the sched tags?
> Or nr_hw_queues was decreased, leaving the hctx->fq->flush_rq?

Requests internally queued in the scheduler or block layer are not
eligible for the nvme driver's iterator callback. We only use it to
reclaim dispatched requests that the target can't return, and those
necessarily hold a valid rq->tag value from hctx->tags.
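
To make that concrete, the callback handed to the iterator has roughly
the following shape. This is a simplified sketch, not the exact in-tree
nvme cancel path; nvme_reclaim_rq() is an illustrative name, it assumes
the usual driver headers (linux/blk-mq.h, the nvme host header), and the
started check is only belt-and-braces since the iterator already skips
requests that never got a driver tag:

/*
 * Sketch of a reclaim callback for the tagset iterator.  Anything we
 * see here already holds a driver tag (rq->tag from hctx->tags);
 * requests still sitting in the scheduler never get a driver tag, so
 * they are never passed in.
 */
static bool nvme_reclaim_rq(struct request *req, void *data, bool reserved)
{
	/* Ignore anything that was never dispatched to the device. */
	if (!blk_mq_request_started(req))
		return true;

	/* Fake an abort status and complete it on the target's behalf. */
	nvme_req(req)->status = NVME_SC_ABORT_REQ;
	blk_mq_complete_request(req);
	return true;	/* keep iterating */
}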
> The stale request could be something freed and reused by others, and
> the state field could happen to be overwritten to non-zero...

I am not sure I follow what this means. At least for nvme, every queue
sharing the same tagset is quiesced and frozen, so there should be no
request state in flux at the time we iterate.
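
For context, the ordering we rely on looks roughly like the sketch
below. nvme_reclaim_all() is a made-up name for illustration; the real
flow lives in the driver's teardown path (e.g. nvme_dev_disable()), and
it reuses the nvme_reclaim_rq() sketch above:

/*
 * Sketch of the ordering before the iteration; the helper names are
 * illustrative, not in-tree functions.
 */
static void nvme_reclaim_all(struct nvme_ctrl *ctrl)
{
	nvme_stop_queues(ctrl);		/* quiesce every namespace queue */
	nvme_start_freeze(ctrl);	/* no new submissions can enter */

	/* Nothing can change request state underneath the iteration now. */
	blk_mq_tagset_busy_iter(ctrl->tagset, nvme_reclaim_rq, ctrl);
}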