lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <b3592bca-015f-451e-b689-8db820cbc7a6@grimberg.me>
Date: Sat, 27 Jul 2024 22:36:13 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Ping Gan <jacky_gam_2001@....com>, hare@...e.de, hch@....de
Cc: ping.gan@...l.com, kch@...dia.com, linux-nvme@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 0/2] nvmet: use unbound_wq for RDMA and TCP by default

On 26/07/2024 5:34, Ping Gan wrote:
>> On 19/07/2024 12:19, Ping Gan wrote:
>>> When running nvmf on an SMP platform, the current nvme target's RDMA
>>> and TCP transports use a bound workqueue to handle IO, but when there
>>> is other heavy workload on the system (e.g. kubernetes), the
>>> competition between the bound kworkers and that workload is fierce.
>>> To reduce this resource contention, this patchset switches nvmet-rdma
>>> and nvmet-tcp to an unbound workqueue by default; besides that, it
>>> can also yield some performance improvement. This patchset is based
>>> on the earlier discussion in the thread below.
>>>
>>> https://lore.kernel.org/lkml/20240719084953.8050-1-jacky_gam_2001@163.com/
>> Hold your horses.
>>
>> This cannot simply be switched without thorough testing and actual
>> justification/proof of a benefit beyond the narrow use-case brought
>> initially by Ping Gan.
>>
>> If the ask is to universally use an unbound workqueue, please provide
>> detailed benchmarking convincing us that this makes sense.
> So you think we should not make a radical change for this narrow use
> case, but instead keep the module parameter from the previous patch
> version to enable it, right?

What I'm saying is that if you want to change the default, please provide
justification in the form of benchmarks that support the change. These
benchmarks should include throughput, IOPS, and latency measurements, and
should be run without the cpu-set constraints you presented originally.
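For what it's worth, the kind of benchmarking asked for here is commonly done
with fio against the nvmf-attached namespace. A minimal job-file sketch (the
device path, runtime, and queue depths below are assumptions to be adjusted
for the setup under test, not values from this thread):

```
; Run once with the bound-workqueue kernel and once with the unbound
; variant, on an otherwise loaded system, and compare the results.
[global]
ioengine=libaio
direct=1
runtime=60
time_based
group_reporting
filename=/dev/nvme1n1   ; assumed nvmf namespace; adjust to /dev/nvmeXnY

; 4k random reads: reports IOPS and completion-latency percentiles
[randread-iops-lat]
rw=randread
bs=4k
iodepth=32
numjobs=4

; 128k sequential reads: reports sustained throughput
[seqread-bw]
rw=read
bs=128k
iodepth=16
```

fio's default output already covers all three requested metrics (bandwidth,
IOPS, and clat percentiles), so no extra reporting options are strictly needed.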
