Message-ID: <c756aab1-b546-c345-4299-205205ca4cb4@oracle.com>
Date:   Wed, 27 Mar 2019 15:18:25 +0800
From:   "jianchao.wang" <jianchao.w.wang@...cle.com>
To:     Keith Busch <kbusch@...nel.org>
Cc:     Jens Axboe <axboe@...nel.dk>,
        linux-block <linux-block@...r.kernel.org>,
        James Smart <jsmart2021@...il.com>,
        Bart Van Assche <bvanassche@....org>,
        Ming Lei <tom.leiming@...il.com>,
        Josef Bacik <josef@...icpanda.com>,
        linux-nvme <linux-nvme@...ts.infradead.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        "Busch, Keith" <keith.busch@...el.com>,
        Hannes Reinecke <hare@...e.de>,
        Johannes Thumshirn <jthumshirn@...e.de>,
        Christoph Hellwig <hch@....de>,
        Sagi Grimberg <sagi@...mberg.me>
Subject: Re: [PATCH V2 7/8] nvme: use blk_mq_queue_tag_inflight_iter

Hi Keith

On 3/27/19 2:51 PM, Keith Busch wrote:
> On Wed, Mar 27, 2019 at 10:45:33AM +0800, jianchao.wang wrote:
>> 1. a hctx->fq.flush_rq of a dead request_queue that shares the same tagset
>>    The whole request_queue is cleaned up and freed, so the hctx->fq.flush_rq is freed back to the slab.
>>
>> 2. a removed io scheduler's sched requests
>>    The io scheduler is detached and all of its structures are freed, including the pages where the sched
>>    requests reside.
>>
>> So the pointers in tags->rqs[] may point to memory that is no longer used as a blk-layer request.
> 
> Oh, free as in kfree'd, not blk_mq_free_request. So it's a
> read-after-free that you're concerned about, not that anyone
> explicitly changed a request->state.

Yes ;)
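
To make the window concrete, here is a minimal userspace sketch of the
read-after-free (the names only loosely mirror the blk-mq structures;
this is a model of the race, not kernel code):

#include <stdio.h>
#include <stdlib.h>

struct request {                    /* stand-in for struct request */
        int state;                  /* stand-in for rq->state */
};

struct tags {                       /* stand-in for blk_mq_tags */
        struct request *rqs[4];     /* tags->rqs[]: last-known tag users */
};

int main(void)
{
        struct tags shared = { { NULL } };

        /* Queue A dispatches a request: its slot in the shared
         * tagset is set. */
        struct request *rq = malloc(sizeof(*rq));
        rq->state = 1;              /* think MQ_RQ_IN_FLIGHT */
        shared.rqs[0] = rq;

        /* Queue A (or its io scheduler) is torn down: the memory
         * behind the request is freed, but the shared tags->rqs[]
         * slot is never cleared. */
        free(rq);

        /* Another queue's iterator walks every slot of the shared
         * tagset and inspects the state -- this dereference is the
         * read-after-free (deliberately left in for illustration). */
        for (int i = 0; i < 4; i++)
                if (shared.rqs[i])
                        printf("tag %d state %d\n", i, shared.rqs[i]->state);

        return 0;
}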

> 
> We at least can't free the flush_queue until the queue is frozen. If the
> queue is frozen, we've completed the special fq->flush_rq, whose end_io
> restores tags->rqs[tag] to the fq->orig_rq from static_rqs, so nvme's
> iterator couldn't see the fq->flush_rq address if it were invalid.
> 

This is true for the non-io-scheduler case, in which the flush_rq steals the original request's driver
tag and the end_io later restores tags->rqs[tag] to that request. But in the io-scheduler case, the
flush_rq acquires a driver tag of its own, so no such restore ever happens.
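
For reference, this is where the two cases diverge; the snippet below is
paraphrased from blk_kick_flush() in blk-flush.c around this series'
base, not a verbatim quote:

        if (!q->elevator) {
                /* No scheduler: borrow the first pending request's
                 * driver tag; tags->rqs[tag] now points at the
                 * flush_rq, and flush_end_io() later points it back
                 * at fq->orig_rq. */
                fq->orig_rq = first_rq;
                flush_rq->tag = first_rq->tag;
                blk_mq_tag_set_rq(hctx, first_rq->tag, flush_rq);
        } else {
                /* Scheduler present: carry only the internal tag; the
                 * flush_rq allocates a driver tag of its own at
                 * dispatch, so tags->rqs[tag] points at fq->flush_rq
                 * and is never swapped back to a static request. */
                flush_rq->internal_tag = first_rq->internal_tag;
        }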


> The sched_tags concern, though, appears theoretically possible.
> 

Thanks
Jianchao
