Message-ID: <fe49daac-5990-464a-aeeb-c7c5f9d4d156@grimberg.me>
Date: Tue, 22 Oct 2024 16:23:29 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Ming Lei <ming.lei@...hat.com>
Cc: zhuxiaohui <zhuxiaohui400@...il.com>, axboe@...nel.dk, kbusch@...nel.org,
hch@....de, linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org, Zhu Xiaohui <zhuxiaohui.400@...edance.com>
Subject: Re: [PATCH v1] blk-mq: add one blk_mq_req_flags_t type to support mq
ctx fallback
>
>>> It is just luck for the connection request, because IO isn't started
>>> yet at that time, and the allocation always succeeds on the 1st try of
>>> __blk_mq_get_tag().
>> It's not lucky, we reserve a per-queue tag for exactly this flow
>> (connect) so we always have one available. And when the connect is
>> running, the driver should guarantee nothing else is running.
> What if there are multiple concurrent allocation (reserve) requests?
There can't be any.
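
To make that concrete, here is a minimal sketch of how I think of the
connect allocation (illustrative only, not lifted verbatim from the
fabrics drivers; 'ctrl', 'connect_q' and the qid-to-hctx convention are
just for the example):

#include <linux/blk-mq.h>
/* a real driver would also pull in its private nvme header for
 * struct nvme_ctrl */

static struct request *connect_rq_for_queue(struct nvme_ctrl *ctrl,
					    unsigned int qid)
{
	/*
	 * The tag set was created with set->reserved_tags > 0 (the
	 * fabrics drivers set aside a couple of tags, e.g. for connect
	 * and keep-alive), so BLK_MQ_REQ_RESERVED draws from a pool
	 * that regular IO can never exhaust.
	 *
	 * blk_mq_alloc_request_hctx() pins the allocation to hw queue
	 * qid - 1, i.e. the queue being connected, regardless of which
	 * CPU this code happens to run on.
	 */
	return blk_mq_alloc_request_hctx(ctrl->connect_q, REQ_OP_DRV_OUT,
					 BLK_MQ_REQ_RESERVED |
					 BLK_MQ_REQ_NOWAIT,
					 qid - 1);
}

Since only one reserved connect allocation is in flight per queue, the
first attempt in __blk_mq_get_tag() always succeeds.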
> You still may run into allocation from another hw queue. In reality,
> nvme may not use it in that way, but as an API it is still not good,
> or at least the behavior should be documented.
I agree. NVMe may have a unique need here: it needs a tag from a
specific hctx while the context requesting it does not map to that hctx
according to the hctx cpumap. It cannot use a tag from any other hctx.
The reason is that the connect for a queue must be done with a tag that
belongs to that queue, because nvme relies on the tag when it resolves
the completion back to the request.
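
Roughly, the completion side does something like the following (a
sketch, not copied from any particular transport; newer drivers go
through nvme_find_rq(), which also checks a generation counter encoded
in the command_id, while older ones called blk_mq_tag_to_rq()
directly):

static struct request *cqe_to_rq(struct blk_mq_tags *tags,
				 struct nvme_completion *cqe)
{
	/*
	 * 'tags' is the tag map of the hw queue whose CQ this CQE
	 * arrived on. command_id was seeded from rq->tag at submission
	 * time, so a request whose tag came from a different hctx could
	 * never be resolved here - which is why the connect must use a
	 * tag owned by the queue it is connecting.
	 */
	return nvme_find_rq(tags, cqe->command_id);
}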