Message-ID: <11006dd2-718f-b569-4ef3-c01232354d5f@deltatee.com>
Date:   Fri, 25 Oct 2019 15:12:28 -0600
From:   Logan Gunthorpe <logang@...tatee.com>
To:     Sagi Grimberg <sagi@...mberg.me>, linux-kernel@...r.kernel.org,
        linux-nvme@...ts.infradead.org
Cc:     Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>,
        Stephen Bates <sbates@...thlin.com>,
        Keith Busch <kbusch@...nel.org>,
        Max Gurtovoy <maxg@...lanox.com>,
        Christoph Hellwig <hch@....de>
Subject: Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()



On 2019-10-25 2:41 p.m., Sagi Grimberg wrote:
>> +#ifdef CONFIG_NVME_TARGET_PASSTHRU
>> +static void nvme_execute_passthru_rq_work(struct work_struct *w)
>> +{
>> +    struct nvme_request *req = container_of(w, struct nvme_request, work);
>> +    struct request *rq = blk_mq_rq_from_pdu(req);
>> +    rq_end_io_fn *done = rq->end_io;
>> +    void *end_io_data = rq->end_io_data;
> 
> Why is end_io_data stored to a local variable here? Where is it set?

blk_execute_rq() (which is called by nvme_execute_rq()) will overwrite
rq->end_io and rq->end_io_data. We store them in local variables here so
we can call the callback appropriately after the request completes. The
caller sets them the same way they would if they were calling
blk_execute_rq_nowait() directly.
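
For reference, the overwrite happens roughly like this (paraphrasing
block/blk-exec.c from memory, so details may differ by kernel version):

void blk_execute_rq(struct request_queue *q, struct gendisk *bd_disk,
		    struct request *rq, int at_head)
{
	DECLARE_COMPLETION_ONSTACK(wait);

	/* clobbers whatever the caller stored in end_io_data */
	rq->end_io_data = &wait;

	/* this in turn assigns rq->end_io = blk_end_sync_rq */
	blk_execute_rq_nowait(q, bd_disk, rq, at_head, blk_end_sync_rq);

	wait_for_completion_io(&wait);
}

So anything the caller put in those fields is gone by the time the
request completes, which is why we stash and restore them.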

>> +
>> +    nvme_execute_passthru_rq(rq);
>> +
>> +    if (done) {
>> +        rq->end_io_data = end_io_data;
>> +        done(rq, 0);
>> +    }
>> +}
>> +
>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
>> +{
>> +    struct nvme_command *cmd = nvme_req(rq)->cmd;
>> +    struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>> +    struct nvme_ns *ns = rq->q->queuedata;
>> +    struct gendisk *disk = ns ? ns->disk : NULL;
>> +    u32 effects;
>> +
>> +    /*
>> +     * This function may be called in interrupt context, so we cannot sleep
>> +     * but nvme_passthru_[start|end]() may sleep so we need to execute
>> +     * the command in a work queue.
>> +     */
>> +    effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>> +    if (effects) {
>> +        rq->end_io = done;
>> +        INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>> +        queue_work(nvme_wq, &nvme_req(rq)->work);
> 
> This work will need to be flushed in nvme_stop_ctrl. That is
> assuming that it will fail-fast and not hang (which it should, given
> that it's a passthru command allocated via nvme_alloc_request).

Hmm, that's going to be a bit tricky. Seeing as the work_struct
potentially belongs to a number of different requests, we can't just
flush the individual work items. I think we'd have to create a
workqueue per ctrl and flush that. Any objections to that?
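
Something like this (untested sketch; passthru_wq is a made-up field on
nvme_ctrl, just to illustrate):

	/* at controller initialization */
	ctrl->passthru_wq = alloc_workqueue("nvme-passthru-wq-%d",
					    WQ_UNBOUND, 0, ctrl->instance);
	if (!ctrl->passthru_wq)
		return -ENOMEM;

	/* in nvme_execute_passthru_rq_nowait(), instead of nvme_wq */
	queue_work(ctrl->passthru_wq, &nvme_req(rq)->work);

	/* in nvme_stop_ctrl(), wait for any in-flight passthru work */
	flush_workqueue(ctrl->passthru_wq);

with a matching destroy_workqueue() on controller teardown.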

Logan
