Date:   Wed, 28 Jul 2021 21:41:37 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Ben Widawsky <ben.widawsky@...el.com>,
        linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
        Andrea Arcangeli <aarcange@...hat.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Randy Dunlap <rdunlap@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Andi Kleen <ak@...ux.intel.com>,
        Dan Williams <dan.j.williams@...el.com>, ying.huang@...el.com
Subject: Re: [PATCH v6 5/6] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY

On Wed, Jul 28, 2021 at 02:47:23PM +0200, Michal Hocko wrote:
> On Mon 12-07-21 16:09:33, Feng Tang wrote:
> > From: Ben Widawsky <ben.widawsky@...el.com>
> > 
> > Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY.
> > 
> > MPOL_PREFERRED_MANY will be adequately documented in the internal
> > admin-guide with this patch. Eventually, the man pages for mbind(2),
> > get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text
> > about this mode. Those shall contain the canonical reference.
> > 
> > NUMA systems continue to become more prevalent. New technologies like
> > PMEM make finer grain control over memory access patterns increasingly
> > desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of
> > nodes that will be tried first when performing allocations. If those
> > allocations fail, all remaining nodes will be tried. It's a
> > straightforward API which solves many of the presumptive needs of system
> > administrators wanting to optimize workloads on such machines. The mode
> > will work either per VMA or per thread.
> > 
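As an aside and purely for illustration (not part of the patch), per-thread usage from userspace might look roughly like the sketch below. It assumes MPOL_PREFERRED_MANY is exported through the uapi headers and that nodes 0 and 1 exist on the machine; the fallback value of 5 is only an assumption based on its position in the mempolicy mode enum in this series.

/*
 * Hypothetical per-thread sketch: prefer allocations from nodes 0 and 1
 * for the calling thread, falling back to the remaining nodes when the
 * preferred ones are under memory pressure.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value, not yet in released headers */
#endif

int main(void)
{
	unsigned long nodemask = (1UL << 0) | (1UL << 1);	/* nodes 0 and 1 */

	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY,
		    &nodemask, 8 * sizeof(nodemask)) != 0) {
		perror("set_mempolicy");
		return 1;
	}
	/* Later allocations by this thread are tried on nodes 0-1 first. */
	return 0;
}
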
> > Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com
> > Signed-off-by: Ben Widawsky <ben.widawsky@...el.com>
> > Signed-off-by: Feng Tang <feng.tang@...el.com>
> > ---
> >  Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
> >  mm/mempolicy.c                                      |  7 +------
> >  2 files changed, 13 insertions(+), 10 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
> > index 067a90a1499c..cd653561e531 100644
> > --- a/Documentation/admin-guide/mm/numa_memory_policy.rst
> > +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
> > @@ -245,6 +245,14 @@ MPOL_INTERLEAVED
> >  	address range or file.  During system boot up, the temporary
> >  	interleaved system default policy works in this mode.
> >  
> > +MPOL_PREFERRED_MANY
> > +        This mode specifies that the allocation should be attempted from the
> > +        nodemask specified in the policy. If that allocation fails, the kernel
> > +        will search other nodes, in order of increasing distance from the first
> > +        set bit in the nodemask based on information provided by the platform
> > +        firmware. It is similar to MPOL_PREFERRED with the main exception that
> > +        it is an error to have an empty nodemask.
> 
> I believe the target audience of this document is users rather than
> kernel developers, and for them the wording might be rather cryptic. I
> would rephrase it like this:
> 	This mode specifies that the allocation should be preferably
> 	satisfied from the nodemask specified in the policy. If there is
> 	memory pressure on all nodes in the nodemask, the allocation
> 	can fall back to all existing NUMA nodes. This is effectively
> 	MPOL_PREFERRED allowed for a mask rather than a single node.
> 
> With that or similar wording, feel free to add
> Acked-by: Michal Hocko <mhocko@...e.com>

Thanks!

Will revise the text as suggested.
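
For the per-VMA case the same idea goes through mbind(); a minimal sketch, again assuming the uapi constant and a machine with nodes 0 and 1, could look like:

/*
 * Hypothetical per-VMA sketch: back one anonymous mapping with the
 * preferred-many policy while the rest of the process keeps its
 * default policy.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value, see above */
#endif

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB anonymous scratch area */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Pages faulted in for this range prefer nodes 0-1, then fall back. */
	if (syscall(SYS_mbind, buf, len, MPOL_PREFERRED_MANY,
		    &nodemask, 8 * sizeof(nodemask), 0) != 0) {
		perror("mbind");
		return 1;
	}
	return 0;
}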

- Feng

> -- 
> Michal Hocko
> SUSE Labs
