Message-ID: <a2eff7fd-5670-8c15-a72a-589fe7d99f31@grimberg.me>
Date: Thu, 27 Apr 2023 15:21:30 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Hannes Reinecke <hare@...e.de>, Li Feng <fengli@...rtx.com>
Cc: Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
Christoph Hellwig <hch@....de>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@...ts.infradead.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] nvme/tcp: Add support to set the tcp worker cpu affinity
>>> Not saying that this should be a solution though.
>>>
>>> How many queues does your controller support, that you happen to be
>>> using queue 0?
>> Our controller only supports one I/O queue currently.
>
> Ouch.
> Remember, NVMe gets most of its performance improvements from using
> several queues and being able to bind those queues to CPU sets.
> Exposing just one queue invalidates the assumptions we make,
> and trying to improve interrupt steering won't work anyway.
>
> I sincerely doubt we should try to 'optimize' for this rather peculiar
> setup.
I tend to agree. This is not a common setup, and I'm not particularly
interested in exporting something dedicated in the driver for fiddling
with it...
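
For context, a minimal sketch of the general pattern being discussed
(spreading per-queue I/O work across CPUs). This is not the nvme-tcp
driver's actual code; the demo_* names are hypothetical, and only the
kernel helpers (cpumask_local_spread(), queue_work_on()) are real:

#include <linux/cpumask.h>
#include <linux/workqueue.h>

struct demo_queue {
	struct work_struct io_work;
	int io_cpu;	/* CPU this queue's worker runs on */
	int qid;	/* queue index */
};

static struct workqueue_struct *demo_wq;

/*
 * With many queues, spreading them over online CPUs keeps each worker
 * close to the submitting CPU. With a single I/O queue, everything
 * funnels into one worker, so there is little left to steer -- which
 * is the point made above.
 */
static void demo_set_queue_io_cpu(struct demo_queue *queue, int node)
{
	if (num_online_cpus() > 1)
		queue->io_cpu = cpumask_local_spread(queue->qid, node);
	else
		queue->io_cpu = WORK_CPU_UNBOUND;
}

static void demo_queue_io(struct demo_queue *queue)
{
	/* Run this queue's I/O work on its chosen CPU. */
	queue_work_on(queue->io_cpu, demo_wq, &queue->io_work);
}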