Message-ID: <0f15a932-1a42-4c51-a267-3f765866edc4@suse.de>
Date: Fri, 19 Jul 2024 08:28:25 +0200
From: Hannes Reinecke <hare@...e.de>
To: Christoph Hellwig <hch@....de>, Ping Gan <jacky_gam_2001@....com>
Cc: sagi@...mberg.me, kch@...dia.com, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, ping.gan@...l.com
Subject: Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
On 7/19/24 07:31, Christoph Hellwig wrote:
> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>> When running nvmf on an SMP platform, the current NVMe target's RDMA
>> and TCP transports use a bound workqueue to handle I/O. When there is
>> another heavy workload on the system (e.g. kubernetes), the competition
>> between the bound kworkers and that workload is fierce. To reduce this
>> resource contention, this patchset enables an unbound workqueue for
>> nvmet-rdma and nvmet-tcp; beyond that, it also brings some performance
>> improvement. This patchset is based on the previous discussion in the
>> session below.
>
> So why aren't we using unbound workqueues by default? Who makes the
> policy decision, and how does anyone know which one to choose?
>
I'd be happy to switch to unbound workqueues by default.
It is probably a leftover from the various workqueue changes: at one
point 'unbound' meant that effectively only one CPU was used for the
workqueue, and you had to drop the 'unbound' parameter to have the
workqueue run on all CPUs. That has since changed, so I guess
switching to unbound by default is the better option here.
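
[For context, a minimal sketch of what such a switch could look like,
assuming the nvmet-tcp workqueue allocation; the surrounding error
handling is elided and the exact flag combination in the posted patch
may differ:

    /* Current allocation: a bound (per-CPU) high-priority workqueue.
     * Work items run on kworkers pinned to the submitting CPU, so they
     * compete directly with whatever else runs on that CPU. */
    nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
                                   WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);

    /* Unbound variant: adding WQ_UNBOUND lets the scheduler place the
     * worker threads on any CPU, easing contention with other busy
     * workloads on the system. */
    nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq",
                                   WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
                                   0);

With current workqueue semantics WQ_UNBOUND no longer restricts the
queue to a single CPU, which is why defaulting to it is plausible now.]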
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@...e.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich