Date:   Wed, 26 Sep 2018 15:19:21 -0700
From:   Alexander Duyck <alexander.h.duyck@...ux.intel.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     linux-nvdimm@...ts.01.org, gregkh@...uxfoundation.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        akpm@...ux-foundation.org, len.brown@...el.com,
        dave.jiang@...el.com, rafael@...nel.org, vishal.l.verma@...el.com,
        jiangshanlai@...il.com, pavel@....cz, zwisler@...nel.org,
        dan.j.williams@...el.com
Subject: Re: [RFC workqueue/driver-core PATCH 1/5] workqueue: Provide
 queue_work_near to queue work near a given NUMA node

On 9/26/2018 3:09 PM, Tejun Heo wrote:
> Hello,
> 
> On Wed, Sep 26, 2018 at 03:05:17PM -0700, Alexander Duyck wrote:
>> I am using unbound workqueues. However there isn't an interface that
>> exposes the NUMA bits of them directly. All I am doing with this
>> patch is adding "queue_work_near" which takes a NUMA node as an
>> argument and then copies the logic of "queue_work_on" with the
>> exception that I am doing a check to verify that there is an
>> intersection between wq_unbound_cpumask and the cpumask of the node,
>> and then passing a CPU from that intersection into "__queue_work".
> 
> Can it just take a cpu id and not feed that to __queue_work()?  That
> looks like a lot of extra logic.
> 
> Thanks.

I could probably just use queue_work_on, but is there any issue with 
passing CPU values that are not in wq_unbound_cpumask? That was my 
main concern. Also, for an unbound workqueue do I need to worry about 
the hotplug lock? I wasn't sure, since it is called out as something 
to be concerned about when using queue_work_on, but in __queue_work 
the CPU value is only used to determine which node to grab a work 
queue from.
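
For what it's worth, the logic I described above looks roughly like 
this -- a simplified sketch rather than the actual patch, assuming it 
lives in kernel/workqueue.c where __queue_work() and 
wq_unbound_cpumask are in scope:

/*
 * Simplified sketch -- not the actual patch. Queue @work on @wq,
 * preferring a CPU local to @node when one is allowed for unbound
 * workqueues.
 */
bool queue_work_near(int node, struct workqueue_struct *wq,
		     struct work_struct *work)
{
	unsigned long flags;
	bool ret = false;
	int cpu;

	local_irq_save(flags);

	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT,
			      work_data_bits(work))) {
		/*
		 * Any CPU in the intersection of the node's cpumask
		 * and wq_unbound_cpumask will do; if the sets don't
		 * intersect, fall back to letting the workqueue pick.
		 */
		cpu = cpumask_any_and(cpumask_of_node(node),
				      wq_unbound_cpumask);
		if (cpu >= nr_cpu_ids)
			cpu = WORK_CPU_UNBOUND;

		__queue_work(cpu, wq, work);
		ret = true;
	}

	local_irq_restore(flags);
	return ret;
}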

I forgot to address your question about the advantages. They are 
pretty significant. The test system I was working with was 
initializing 3TB of nvdimm memory per node. With the worker running 
on a CPU local to the node it takes something like 24 seconds, 
whereas a CPU on a remote node can take 36 seconds or more.
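
On the caller side the idea is something along these lines (again 
hypothetical, just to show the shape -- init_work[] is a made-up 
per-node work array):

	/* Start each node's init work near the memory it touches. */
	for_each_node_state(nid, N_MEMORY)
		queue_work_near(nid, system_unbound_wq, &init_work[nid]);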

Thanks.

- Alex
