Message-ID: <20190327065156.GC7389@localhost.localdomain>
Date: Wed, 27 Mar 2019 00:51:56 -0600
From: Keith Busch <kbusch@...nel.org>
To: "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc: Jens Axboe <axboe@...nel.dk>,
linux-block <linux-block@...r.kernel.org>,
James Smart <jsmart2021@...il.com>,
Bart Van Assche <bvanassche@....org>,
Ming Lei <tom.leiming@...il.com>,
Josef Bacik <josef@...icpanda.com>,
linux-nvme <linux-nvme@...ts.infradead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"Busch, Keith" <keith.busch@...el.com>,
Hannes Reinecke <hare@...e.de>,
Johannes Thumshirn <jthumshirn@...e.de>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH V2 7/8] nvme: use blk_mq_queue_tag_inflight_iter
On Wed, Mar 27, 2019 at 10:45:33AM +0800, jianchao.wang wrote:
> 1. a hctx->fq.flush_rq of a dead request_queue that shares the same tagset
> The whole request_queue is cleaned up and freed, so the hctx->fq.flush_rq is freed back to a slab.
>
> 2. a removed io scheduler's sched request
> The io scheduler is detached and all of its structures are freed, including the pages where the
> sched requests reside.
>
> So the pointers in tags->rqs[] may point to memory that is no longer used as a blk layer request.
Oh, free as in kfree'd, not blk_mq_free_request. So it's a read-after-
free that you're concerned about, not that anyone explicitly changed a
request->state.
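
For reference, here's roughly where that stale read would bite -- a
paraphrase of bt_iter() from block/blk-mq-tag.c around this kernel
version (simplified sketch, not the exact source):

	static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
	{
		struct bt_iter_data *iter_data = data;
		struct blk_mq_hw_ctx *hctx = iter_data->hctx;
		struct blk_mq_tags *tags = hctx->tags;
		bool reserved = iter_data->reserved;
		struct request *rq;

		if (!reserved)
			bitnr += tags->nr_reserved_tags;

		/*
		 * The load below is the hazard: if tags->rqs[bitnr] still
		 * holds a pointer to a kfree'd flush_rq or sched request,
		 * dereferencing rq->q reads freed memory.
		 */
		rq = tags->rqs[bitnr];
		if (rq && rq->q == hctx->queue)
			return iter_data->fn(hctx, rq, iter_data->data, reserved);
		return true;
	}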
We at least can't free the flush_queue until the queue is frozen, and
once the queue is frozen the special fq->flush_rq has completed. Its
end_io restores tags->rqs[tag] back to fq->orig_rq, the original request
from static_rqs, so nvme's iterator can't observe the fq->flush_rq
address once it's invalid.
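
To spell out that handoff, the restore is done by the flush request's
end_io -- roughly the following, paraphrased from block/blk-flush.c of
this era (simplified sketch; elevator and error paths omitted):

	static void flush_end_io(struct request *flush_rq, blk_status_t error)
	{
		struct request_queue *q = flush_rq->q;
		struct blk_flush_queue *fq = blk_get_flush_queue(q, flush_rq->mq_ctx);
		struct blk_mq_hw_ctx *hctx = flush_rq->mq_hctx;
		unsigned long flags;

		spin_lock_irqsave(&fq->mq_flush_lock, flags);
		if (!q->elevator) {
			/*
			 * Hand the rqs[] slot back to the request the flush
			 * was cloned from, so a tag iterator sees the
			 * static_rqs entry again instead of fq->flush_rq.
			 */
			blk_mq_tag_set_rq(hctx, flush_rq->tag, fq->orig_rq);
			flush_rq->tag = -1;
		}
		/* ... kick any pending flushes and complete waiters ... */
		spin_unlock_irqrestore(&fq->mq_flush_lock, flags);
	}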
The sched_tags concern, though, appears theoretically possible.