Date:   Mon, 26 Nov 2018 17:10:18 -0800
From:   Dan Williams <dan.j.williams@...el.com>
To:     alexander.h.duyck@...ux.intel.com
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Greg KH <gregkh@...uxfoundation.org>,
        linux-nvdimm <linux-nvdimm@...ts.01.org>,
        Tejun Heo <tj@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux-pm mailing list <linux-pm@...r.kernel.org>,
        jiangshanlai@...il.com, "Rafael J. Wysocki" <rafael@...nel.org>,
        "Brown, Len" <len.brown@...el.com>, Pavel Machek <pavel@....cz>,
        zwisler@...nel.org, Dave Jiang <dave.jiang@...el.com>,
        bvanassche@....org
Subject: Re: [driver-core PATCH v6 2/9] async: Add support for queueing on
 specific NUMA node

On Thu, Nov 8, 2018 at 10:06 AM Alexander Duyck
<alexander.h.duyck@...ux.intel.com> wrote:
>
> Introduce four new variants of the async_schedule_ functions that allow
> scheduling on a specific NUMA node.
>
> The first two functions, async_schedule_near and
> async_schedule_near_domain, end up mapping to async_schedule and
> async_schedule_domain respectively, but provide NUMA node specific
> functionality. They replace the original functions, which were moved to
> inline function definitions that call the new functions while passing
> NUMA_NO_NODE.
>
> The second two functions are async_schedule_dev and
> async_schedule_dev_domain, which provide NUMA-specific functionality
> when the data member being passed is a device and that device has a
> NUMA node other than NUMA_NO_NODE.
>
> The main motivation behind this is the need to schedule device-specific
> init work on specific NUMA nodes in order to improve the performance of
> memory initialization.
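
For illustration, a minimal usage sketch based on the description above;
the function names and signatures here are inferred from this changelog
rather than quoted from the patch, so treat them as assumptions:

/*
 * Illustrative sketch only: signatures are taken from the patch
 * description above and may not match the final series exactly.
 */
#include <linux/async.h>
#include <linux/device.h>
#include <linux/numa.h>

static void example_dev_init(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	/* the expensive, per-device initialization work runs here */
	dev_info(dev, "async init done\n");
}

static void example_probe(struct device *dev)
{
	/*
	 * Queue the init work near the device's home NUMA node.  Per the
	 * changelog above this would be equivalent to:
	 *   async_schedule_near(example_dev_init, dev, dev_to_node(dev));
	 * while passing NUMA_NO_NODE instead preserves the old
	 * async_schedule() behavior.
	 */
	async_schedule_dev(example_dev_init, dev);
}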

What Andrew tends to do when an enhancement is spread over multiple
patches is to duplicate the motivation in each patch. So here you
could include the few sentences you wrote about the performance
benefits of this work:

"What I have seen on several systems is a pretty significant improvement
in initialization time for persistent memory. In the case of 3TB of
memory being initialized on a single node the improvement in the worst
case was from about 36s down to 26s for total initialization time."

...and conclude that the data shows a general benefit for affinitizing
async work to a specific numa node.

With that changelog clarification:

Reviewed-by: Dan Williams <dan.j.williams@...el.com>
