Message-Id: <20240719080723.125046-1-jacky_gam_2001@163.com>
Date: Fri, 19 Jul 2024 16:07:22 +0800
From: Ping Gan <jacky_gam_2001@....com>
To: hare@...e.de,
hch@....de
Cc: ping.gan@...l.com,
sagi@...mberg.me,
kch@...dia.com,
linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/2] nvmet: support unbound_wq for RDMA and TCP
> On 7/19/24 07:31, Christoph Hellwig wrote:
>> On Wed, Jul 17, 2024 at 05:14:49PM +0800, Ping Gan wrote:
>>> When running nvmf on an SMP platform, the current nvme target's RDMA
>>> and TCP transports use a bound workqueue to handle IO, but when there
>>> is other heavy workload on the system (eg: kubernetes), the
>>> competition between the bound kworkers and that workload is fierce.
>>> To reduce this resource contention, this patchset enables an unbound
>>> workqueue for nvmet-rdma and nvmet-tcp; besides that, it can also
>>> yield some performance improvement. This patchset is based on the
>>> earlier discussion in the session below.
>>
>> So why aren't we using unbound workqueues by default? Who makes the
>> policy decision and how does anyone know which one to choose?
>>
> I'd be happy to switch to unbound workqueues per default.
> It actually might be a left over from the various workqueue changes;
> at one point 'unbound' meant that effectively only one CPU was used
> for the workqueue, and you had to remove the 'unbound' parameter to
> have the workqueue run on all CPUs. That has since changed, so I guess
> switching to unbound per default is the better option here.
I don't fully understand what you meant by 'per default'. Did you mean
we should simply drop the 'unbounded' parameter and always create the
workqueue with the WQ_UNBOUND flag, or that, in addition, we should add
another parameter to switch the workqueue back from 'unbound' to
'bound'?
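
To make the question concrete, a rough sketch of the two readings for
the nvmet-tcp side (the parameter name 'bounded_wq' is made up here,
and the flags are illustrative, not necessarily what the final patch
would use):

```c
#include <linux/module.h>
#include <linux/workqueue.h>

/*
 * Reading 2 only: keep a module parameter (hypothetical name) to let
 * the admin switch back to a CPU-bound workqueue. Reading 1 would have
 * no parameter at all and unconditionally set WQ_UNBOUND.
 */
static bool bounded_wq;
module_param(bounded_wq, bool, 0444);
MODULE_PARM_DESC(bounded_wq,
		 "Use a CPU-bound IO workqueue instead of an unbound one (default: N)");

static struct workqueue_struct *nvmet_tcp_wq;

static int __init nvmet_tcp_wq_init(void)
{
	unsigned int flags = WQ_MEM_RECLAIM;

	/* Unbound per default; bounded only when explicitly requested. */
	if (!bounded_wq)
		flags |= WQ_UNBOUND;

	nvmet_tcp_wq = alloc_workqueue("nvmet_tcp_wq", flags, 0);
	return nvmet_tcp_wq ? 0 : -ENOMEM;
}
```

If reading 1 is what you intended, the parameter and the conditional
above would simply go away.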
Thanks,
Ping