Message-ID: <7030395b-9f18-6866-10c0-906788243aa1@grimberg.me>
Date: Mon, 28 Oct 2019 14:04:48 -0700
From: Sagi Grimberg <sagi@...mberg.me>
To: Logan Gunthorpe <logang@...tatee.com>,
Christoph Hellwig <hch@....de>
Cc: linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
Keith Busch <kbusch@...nel.org>,
Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>,
Max Gurtovoy <maxg@...lanox.com>,
Stephen Bates <sbates@...thlin.com>
Subject: Re: [RFC PATCH 3/3] nvme: Introduce nvme_execute_passthru_rq_nowait()
>>> +void nvme_execute_passthru_rq_nowait(struct request *rq, rq_end_io_fn *done)
>>> +{
>>> + struct nvme_command *cmd = nvme_req(rq)->cmd;
>>> + struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
>>> + struct nvme_ns *ns = rq->q->queuedata;
>>> + struct gendisk *disk = ns ? ns->disk : NULL;
>>> + u32 effects;
>>> +
>>> + /*
>>> + * This function may be called in interrupt context, so we cannot sleep
>>> + * but nvme_passthru_[start|end]() may sleep so we need to execute
>>> + * the command in a work queue.
>>> + */
>>> + effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
>>> + if (effects) {
>>> + rq->end_io = done;
>>> + INIT_WORK(&nvme_req(rq)->work, nvme_execute_passthru_rq_work);
>>> + queue_work(nvme_wq, &nvme_req(rq)->work);
>>
>> But independent of the target code - I'd much rather leave this to the
>> caller. Just call nvme_command_effects in the target code, then if
>> there are no side effects use blk_execute_rq_nowait directly, else
>> schedule a workqueue in the target code and call
>> nvme_execute_passthru_rq from it.
>
> Ok, that seems sensible. Except it conflicts a bit with Sagi's feedback:
> presumably we need to cancel the work items during nvme_stop_ctrl() and
> that's going to be rather difficult to do from the caller. Are we saying
> this is unnecessary? It's not clear to me if passthru_start/end is going
> to be affected by nvme_stop_ctrl(), which I believe is the main concern.
Actually, thinking about it again, I don't think we need it... These are
just I/Os sent to the device. The reset sequence will simply iterate
over all the I/Os and fail the busy ones, and those submitted after it
will block on a frozen queue, just like any other I/O. So I don't think
we need to cancel them. And if this logic sits in the caller it's even
clearer that this is the case.
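
Something along the lines of the below on the caller side is roughly what
Christoph is describing (just a rough sketch: nvmet_passthru_submit and
the work handler nvmet_passthru_execute_cmd_work are placeholder names,
and nvme_wq stands in for whatever workqueue the target ends up using):

static void nvmet_passthru_submit(struct request *rq, rq_end_io_fn *done)
{
	struct nvme_command *cmd = nvme_req(rq)->cmd;
	struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
	struct nvme_ns *ns = rq->q->queuedata;
	u32 effects;

	effects = nvme_command_effects(ctrl, ns, cmd->common.opcode);
	if (!effects) {
		/*
		 * No side effects: the request can be fired off directly
		 * from this (possibly atomic) context.
		 */
		blk_execute_rq_nowait(rq->q, ns ? ns->disk : NULL, rq, 0, done);
		return;
	}

	/*
	 * The command has side effects: nvme_passthru_start/end() may
	 * sleep, so defer to process context and have the work handler
	 * call nvme_execute_passthru_rq().
	 */
	rq->end_io = done;
	INIT_WORK(&nvme_req(rq)->work, nvmet_passthru_execute_cmd_work);
	queue_work(nvme_wq, &nvme_req(rq)->work);
}

That way the work item is only involved for the (rare) commands that
actually have effects, and everything else stays on the normal path.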
However, it'd be good to test live controller resets to make sure
we are not missing anything...