Message-ID: <e1d773e0-8964-dded-fbc1-b9f0a39bab8c@linux.alibaba.com>
Date: Wed, 21 Sep 2022 11:40:43 +0800
From: Liu Song <liusong@...ux.alibaba.com>
To: Christoph Hellwig <hch@....de>, Jens Axboe <axboe@...nel.dk>
Cc: kbusch@...nel.org, sagi@...mberg.me,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] nvme: request remote is usually not involved for nvme
devices
On 2022/9/19 22:35, Christoph Hellwig wrote:
> On Mon, Sep 19, 2022 at 08:10:31AM -0600, Jens Axboe wrote:
>> I'm not disagreeing with any of that, my point is just that you're
>> hacking around this in the nvme driver. This is problematic whenever
>> core changes happen, because now we have to touch individual drivers.
>> While the expectation is that there are no remote IPI completions for
>> NVMe, queue starved devices do exist and those do see remote
>> completions.
>>
>> This optimization belongs in the blk-mq core, not in nvme. I do think it
>> makes sense, you just need to solve it in blk-mq rather than in the nvme
>> driver.
> I'd also really like to see solid numbers to justify it.
>
> And btw, having more than one core per queue is quite common in
> nvme. Even many enterprise SSDs only have 64 queues, and some of
> the low-end consumer ones have a very low number of queues that is
> not enough for the number of cores in even mid-end desktop CPUs.
Hi
Thank you for your suggestion. Here is what I think about it. NVMe
devices that can support one core per queue are usually high-performance
devices, so I prefer to focus the optimization on them, while for
ordinary consumer-class NVMe devices such a modification brings no
additional overhead.
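
For illustration only, a rough sketch of what a blk-mq level check might
look like (this is my own assumption, not the posted patch; it relies on
hctx->nr_ctx, the number of software queues mapped to a hardware queue):

#include <linux/blk-mq.h>

/*
 * Hypothetical helper, not from the RFC patch: when a hardware queue is
 * mapped to exactly one software (per-CPU) queue, requests are always
 * completed on the submitting CPU, so blk-mq could skip the remote-IPI
 * completion path for such queues.
 */
static inline bool blk_mq_hctx_is_per_cpu(struct blk_mq_hw_ctx *hctx)
{
	return hctx->nr_ctx == 1;
}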
Thanks