Message-ID: <YQ047Gcakj2scjNK@dhcp22.suse.cz>
Date: Fri, 6 Aug 2021 15:28:12 +0200
From: Michal Hocko <mhocko@...e.com>
To: Feng Tang <feng.tang@...el.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Dave Hansen <dave.hansen@...el.com>,
Ben Widawsky <ben.widawsky@...el.com>,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Andi Kleen <ak@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>, ying.huang@...el.com,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH v7 1/5] mm/mempolicy: Add MPOL_PREFERRED_MANY for
multiple preferred nodes
On Tue 03-08-21 13:59:18, Feng Tang wrote:
> From: Dave Hansen <dave.hansen@...ux.intel.com>
>
> The NUMA APIs currently allow passing in a "preferred node" as a
> single bit set in a nodemask. If more than one bit is set, bits
> after the first are ignored.
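> 
> For reference, a minimal userspace sketch of today's single-node
> behavior (using the set_mempolicy(2) wrapper from libnuma's
> <numaif.h>; node numbers are illustrative only):
> 
>     #include <numaif.h>
>     #include <stdio.h>
> 
>     int main(void)
>     {
>             /* Bits 0 and 2 are set, but only the first one matters. */
>             unsigned long nodemask = (1UL << 0) | (1UL << 2);
> 
>             /* With MPOL_PREFERRED, node 0 becomes the preferred node
>              * and bit 2 is silently ignored. */
>             if (set_mempolicy(MPOL_PREFERRED, &nodemask,
>                               sizeof(nodemask) * 8))
>                     perror("set_mempolicy");
>             return 0;
>     }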
>
> This single node is generally OK for location-based NUMA where
> memory being allocated will eventually be operated on by a single
> CPU. However, in systems with multiple memory types, folks want
> to target a *type* of memory instead of a location. For instance,
> someone might want some high-bandwidth memory but not care about
> the CPU next to which it is allocated. Or, they might want a cheap,
> high-capacity allocation and target all NUMA nodes which
> have persistent memory in volatile mode. In both of these cases,
> the application wants to target a *set* of nodes, but does not
> want strict MPOL_BIND behavior, as that could lead to the OOM killer
> being invoked or a SIGSEGV.
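> 
> A sketch of the intended MPOL_PREFERRED_MANY usage from userspace
> (the constant comes from this series and may not be in installed
> headers yet, so it is defined locally here; its value and the node
> numbers are illustrative assumptions):
> 
>     #include <numaif.h>
>     #include <stdio.h>
> 
>     #ifndef MPOL_PREFERRED_MANY
>     #define MPOL_PREFERRED_MANY 5   /* illustrative value only */
>     #endif
> 
>     int main(void)
>     {
>             /* Prefer nodes 2 and 3 (e.g. the PMEM nodes), but fall
>              * back to other nodes rather than fail the allocation
>              * when they are full. */
>             unsigned long nodemask = (1UL << 2) | (1UL << 3);
> 
>             if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
>                               sizeof(nodemask) * 8))
>                     perror("set_mempolicy");
>             return 0;
>     }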
>
> So add MPOL_PREFERRED_MANY policy to support the multiple preferred
> nodes requirement. This is not a pie-in-the-sky dream for an API.
> This was a response to a specific ask from more than one group at Intel.
> Specifically:
>
> 1. There are existing libraries that target memory types such as
> https://github.com/memkind/memkind. These are known to suffer
> from SIGSEGVs when memory is low on the targeted memory "kinds" that
> span more than one node. The MCDRAM on a Xeon Phi in "Cluster on
> Die" mode is an example of this.
> 2. Volatile-use persistent memory users want to have a memory policy
> which is targeted at either "cheap and slow" (PMEM) or "expensive and
> fast" (DRAM). However, they do not want to experience allocation
> failures when the targeted type is unavailable.
> 3. Allocate-then-run. Generally, we let the process scheduler decide
> on which physical CPU to run a task. That location provides a
> default allocation policy, and memory availability is not generally
> considered when placing tasks. For situations where memory is
> valuable and constrained, some users want to allocate memory first,
> *then* allocate close compute resources to the allocation. This is
> the reverse of the normal (CPU) model. Accelerators such as GPUs
> that operate on core-mm-managed memory are interested in this model.
>
> As suggested by Michal Hocko, a check is added in sanitize_mpol_flags()
> to disallow the 'prefer_many' policy for now; it will be removed in a
> later patch once the remaining 'prefer_many' implementation pieces are
> in place.
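> 
> Illustrative sketch of that temporary guard (the exact hunk inside
> sanitize_mpol_flags() may differ in detail):
> 
>     /* Reject the new mode until the allocator-side support added
>      * later in this series is in place. */
>     if (*mode == MPOL_PREFERRED_MANY)
>             return -EINVAL;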
>
> [Michal Hocko: suggest to refine policy_node/policy_nodemask handling]
> Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widawsky@intel.com
> Co-developed-by: Ben Widawsky <ben.widawsky@...el.com>
> Signed-off-by: Ben Widawsky <ben.widawsky@...el.com>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Signed-off-by: Feng Tang <feng.tang@...el.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Thanks!
--
Michal Hocko
SUSE Labs