Message-ID: <CAHbLzkqg++ENAEPdd+UY8Q5X0CuvbHC+JFAvYi2KLaS+2=q3_A@mail.gmail.com>
Date: Wed, 8 Jun 2022 09:42:39 -0700
From: Yang Shi <shy828301@...il.com>
To: Aneesh Kumar K V <aneesh.kumar@...ux.ibm.com>
Cc: Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Wei Xu <weixugc@...gle.com>, Huang Ying <ying.huang@...el.com>,
Greg Thelen <gthelen@...gle.com>,
Davidlohr Bueso <dave@...olabs.net>,
Tim C Chen <tim.c.chen@...el.com>,
Brice Goglin <brice.goglin@...il.com>,
Michal Hocko <mhocko@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Hesham Almatary <hesham.almatary@...wei.com>,
Dave Hansen <dave.hansen@...el.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Alistair Popple <apopple@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Feng Tang <feng.tang@...el.com>,
Jagdish Gediya <jvgediya@...ux.ibm.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH v5 1/9] mm/demotion: Add support for explicit memory tiers
On Tue, Jun 7, 2022 at 9:58 PM Aneesh Kumar K V
<aneesh.kumar@...ux.ibm.com> wrote:
>
> On 6/8/22 3:02 AM, Yang Shi wrote:
> > On Fri, Jun 3, 2022 at 6:43 AM Aneesh Kumar K.V
> > <aneesh.kumar@...ux.ibm.com> wrote:
> >>
> >> In the current kernel, memory tiers are defined implicitly via a
> >> demotion path relationship between NUMA nodes, which is created
> >> during kernel initialization and updated when a NUMA node is
> >> hot-added or hot-removed. The current implementation puts all
> >> nodes with CPUs into the top tier, and builds the tier hierarchy
> >> tier-by-tier by establishing the per-node demotion targets based
> >> on the distances between nodes.
> >>
> >> The current memory tier kernel interface needs to be improved for
> >> several important use cases:
> >>
> >> The current tier initialization code always initializes each
> >> memory-only NUMA node into a lower tier. But a memory-only NUMA
> >> node may have a high-performance memory device (e.g. a DRAM device
> >> attached via CXL.mem, or a DRAM-backed memory-only node on a
> >> virtual machine) and should be put into a higher tier.
> >>
> >> The current tier hierarchy always puts CPU nodes into the top
> >> tier. But on a system with HBM or GPU devices, the memory-only
> >> NUMA nodes mapping these devices should be in the top tier, and
> >> DRAM nodes with CPUs should instead be placed in the next lower
> >> tier.
> >>
> >> With the current kernel, a higher-tier node can only be demoted to
> >> selected nodes on the next lower tier as defined by the demotion
> >> path, not to any other node from any lower tier. This strict,
> >> hard-coded demotion order does not work in all use cases (e.g.
> >> some use cases may want to allow cross-socket demotion to another
> >> node in the same demotion tier as a fallback when the preferred
> >> demotion node is out of space). This demotion order is also
> >> inconsistent with the page allocation fallback order when all the
> >> nodes in a higher tier are out of space: the page allocation can
> >> fall back to any node from any lower tier, whereas the demotion
> >> order doesn't allow that.
> >>
> >> The current kernel also doesn't provide any interface for
> >> userspace to learn about the memory tier hierarchy in order to
> >> optimize its memory allocations.
> >>
> >> This patch series addresses the above by defining memory tiers explicitly.
> >>
> >> This patch introduces explicit memory tiers with ranks. The rank
> >> value of a memory tier is used to derive the demotion order between
> >> NUMA nodes. The memory tiers present in a system can be found at
> >>
> >> /sys/devices/system/memtier/memtierN/
> >>
> >> The nodes which are part of a specific memory tier can be listed
> >> via
> >> /sys/devices/system/memtier/memtierN/nodelist
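
Just to illustrate the interface (a hypothetical userspace sketch, not
part of the patch; the path and tier number assume the layout described
above, with memtier1 being the default DRAM tier):

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f = fopen("/sys/devices/system/memtier/memtier1/nodelist", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("memtier1 nodes: %s", buf);	/* e.g. "0-1" */
	fclose(f);
	return 0;
}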
> >>
> >> "Rank" is an opaque value. Its absolute value doesn't have any
> >> special meaning. But the rank values of different memtiers can be
> >> compared with each other to determine the memory tier order.
> >>
> >> For example, if we have 3 memtiers: memtier0, memtier1, memtier2, and
> >> their rank values are 300, 100, 200, then the memory tier order is:
> >> memtier0 -> memtier2 -> memtier1, where memtier0 is the highest tier
> >> and memtier1 is the lowest tier.
> >>
> >> The rank value of each memtier should be unique.
> >>
> >> A memory tier with a higher rank appears earlier in the demotion
> >> order than one with a lower rank, i.e. during reclaim we prefer to
> >> demote pages to a node in a higher-rank memory tier over a node in
> >> a lower-rank memory tier.
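
To make the rank ordering concrete, here is a minimal sketch
(illustrative only, not the patch's internal implementation): the
demotion order is simply the tiers sorted by descending rank.

#include <stdio.h>
#include <stdlib.h>

struct tier {
	int id;
	int rank;
};

/* sort by descending rank: a higher rank means a higher tier */
static int cmp_rank_desc(const void *a, const void *b)
{
	return ((const struct tier *)b)->rank - ((const struct tier *)a)->rank;
}

int main(void)
{
	/* same values as the example above */
	struct tier tiers[] = { { 0, 300 }, { 1, 100 }, { 2, 200 } };
	int i, n = sizeof(tiers) / sizeof(tiers[0]);

	qsort(tiers, n, sizeof(tiers[0]), cmp_rank_desc);
	for (i = 0; i < n; i++)
		printf("memtier%d%s", tiers[i].id, i < n - 1 ? " -> " : "\n");
	return 0;	/* prints: memtier0 -> memtier2 -> memtier1 */
}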
> >>
> >> For now we are not adding support for a dynamic number of memory
> >> tiers, but a future series supporting that is possible. Currently
> >> the number of tiers supported is limited to MAX_MEMORY_TIERS (3).
> >> During memory hotplug, if a NUMA node is not added to a memory
> >> tier, it gets added to DEFAULT_MEMORY_TIER (1).
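
Restated as a hypothetical helper (node_tier[] is a made-up stand-in
for the real per-node state, just to illustrate the fallback):

#define DEFAULT_MEMORY_TIER	1	/* mirrors MEMORY_TIER_DRAM below */

static int effective_tier(int node, const int *node_tier)
{
	/* a node with no explicit assignment lands in the DRAM tier */
	return node_tier[node] < 0 ? DEFAULT_MEMORY_TIER : node_tier[node];
}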
> >>
> >> This patch is based on the proposal sent by Wei Xu <weixugc@...gle.com> at [1].
> >>
> >> [1] https://lore.kernel.org/linux-mm/CAAPL-u9Wv+nH1VOZTj=9p9S70Y3Qz3+63EkqncRDdHfubsrjfw@mail.gmail.com
> >>
> >> Suggested-by: Wei Xu <weixugc@...gle.com>
> >> Signed-off-by: Jagdish Gediya <jvgediya@...ux.ibm.com>
> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
> >> ---
> >> include/linux/memory-tiers.h | 20 ++++
> >> mm/Kconfig | 11 ++
> >> mm/Makefile | 1 +
> >> mm/memory-tiers.c | 188 +++++++++++++++++++++++++++++++++++
> >> 4 files changed, 220 insertions(+)
> >> create mode 100644 include/linux/memory-tiers.h
> >> create mode 100644 mm/memory-tiers.c
> >>
> >> diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
> >> new file mode 100644
> >> index 000000000000..e17f6b4ee177
> >> --- /dev/null
> >> +++ b/include/linux/memory-tiers.h
> >> @@ -0,0 +1,20 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef _LINUX_MEMORY_TIERS_H
> >> +#define _LINUX_MEMORY_TIERS_H
> >> +
> >> +#ifdef CONFIG_TIERED_MEMORY
> >> +
> >> +#define MEMORY_TIER_HBM_GPU 0
> >> +#define MEMORY_TIER_DRAM 1
> >> +#define MEMORY_TIER_PMEM 2
> >> +
> >> +#define MEMORY_RANK_HBM_GPU 300
> >> +#define MEMORY_RANK_DRAM 200
> >> +#define MEMORY_RANK_PMEM 100
> >> +
> >> +#define DEFAULT_MEMORY_TIER MEMORY_TIER_DRAM
> >> +#define MAX_MEMORY_TIERS 3
> >> +
> >> +#endif /* CONFIG_TIERED_MEMORY */
> >> +
> >> +#endif
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 169e64192e48..08a3d330740b 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -614,6 +614,17 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION
> >> config ARCH_ENABLE_THP_MIGRATION
> >> bool
> >>
> >> +config TIERED_MEMORY
> >> + bool "Support for explicit memory tiers"
> >> + def_bool n
> >> + depends on MIGRATION && NUMA
> >> + help
> >> + Support to split nodes into memory tiers explicitly and
> >> + to demote pages on reclaim to lower tiers. This option
> >> + also exposes sysfs interface to read nodes available in
> >> + specific tier and to move specific node among different
> >> + possible tiers.
> >
> > IMHO we should not need a new kernel config. If tiering is not present,
> > then there is just one tier on the system. And tiering is a kind of
> > hardware configuration; the information could be shown regardless of
> > whether demotion/promotion is supported/enabled or not.
> >
>
> This was added so that we could avoid doing multiple
>
> #if defined(CONFIG_MIGRATION) && defined(CONFIG_NUMA)
>
> Initially I had that as def_bool y and depends on MIGRATION && NUMA. But
> it was later suggested that def_bool is not recommended for newer config
> options.
>
> How about
>
> config TIERED_MEMORY
> bool "Support for explicit memory tiers"
> - def_bool n
> - depends on MIGRATION && NUMA
> - help
> - Support to split nodes into memory tiers explicitly and
> - to demote pages on reclaim to lower tiers. This option
> - also exposes sysfs interface to read nodes available in
> - specific tier and to move specific node among different
> - possible tiers.
> + def_bool MIGRATION && NUMA
CONFIG_NUMA should be good enough. Memory tiering doesn't necessarily
mean demotion/promotion has to be supported, IMHO.
>
> config HUGETLB_PAGE_SIZE_VARIABLE
> def_bool n
>
> i.e., we just make it a Kconfig variable without exposing it to the user?
>
> -aneesh