Date: Thu, 13 Jun 2024 13:38:18 -0700
From: "Nelson, Shannon" <shannon.nelson@....com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
 pabeni@...hat.com, brett.creeley@....com, drivers@...sando.io
Subject: Re: [PATCH net-next 3/8] ionic: add private workqueue per-device

On 6/12/2024 6:08 PM, Jakub Kicinski wrote:
> 
> On Mon, 10 Jun 2024 16:07:01 -0700 Shannon Nelson wrote:
>> Instead of using the system's default workqueue,
>> add a private workqueue for the device to use for
>> its little jobs.
> 
> Little jobs, little point of having your own wq, no?
> At this point in reading the series it's a bit unclear why
> the wq separation is needed.

Yes, when using only a single PF or two this doesn't look so bad left 
on the system workqueue.  But we have a couple of customers that want 
to scale out to lots of VFs with multiple queues per VF, which 
multiplies out to hundreds of queues generating work items.  We thought 
that instead of firebombing the system workqueue with a lot of little 
jobs, we would give the scheduler a chance to work with our stuff 
separately, and setting it up per device seemed like an easy enough way 
to partition the work.  Other options might be one ionic wq shared by 
all devices, or maybe a wq per PF family.  Do you have a preference, or 
do you still think that the system wq is enough?
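
For illustration only, here is a minimal sketch of the general pattern 
being discussed: allocate a private workqueue per device and queue the 
device's small jobs on it instead of on the system workqueue.  This is 
not the actual patch; the struct and function names (my_ionic_dev, 
my_ionic_probe, etc.) and the choice of WQ_* flags are assumptions made 
just for the example.

/* Sketch only: one private workqueue per device, used for the
 * device's small maintenance jobs instead of the system workqueue.
 */
#include <linux/workqueue.h>

struct my_ionic_dev {
	struct workqueue_struct *wq;
	struct work_struct doorbell_check;	/* example work item */
};

static int my_ionic_probe(struct my_ionic_dev *idev, const char *name)
{
	/* allocate the device's private workqueue at probe time */
	idev->wq = alloc_workqueue("%s-wq", WQ_UNBOUND | WQ_MEM_RECLAIM,
				   0, name);
	if (!idev->wq)
		return -ENOMEM;
	return 0;
}

static void my_ionic_kick_work(struct my_ionic_dev *idev)
{
	/* queue on the private wq rather than schedule_work() */
	queue_work(idev->wq, &idev->doorbell_check);
}

static void my_ionic_remove(struct my_ionic_dev *idev)
{
	/* flush and free the private workqueue on teardown */
	destroy_workqueue(idev->wq);
}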

Thanks,
sln
