Message-ID: <20220128041959.GA20345@swarm08>
Date: Fri, 28 Jan 2022 13:19:59 +0900
From: Jonghyeon Kim <tome01@...u.ac.kr>
To: David Hildenbrand <david@...hat.com>
Cc: dan.j.williams@...el.com, vishal.l.verma@...el.com,
dave.jiang@...el.com, akpm@...ux-foundation.org,
nvdimm@...ts.linux.dev, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 1/2] mm/memory_hotplug: Export shrink span functions for
zone and node
On Thu, Jan 27, 2022 at 10:54:23AM +0100, David Hildenbrand wrote:
> On 27.01.22 10:41, Jonghyeon Kim wrote:
> > On Wed, Jan 26, 2022 at 06:04:50PM +0100, David Hildenbrand wrote:
> >> On 26.01.22 18:00, Jonghyeon Kim wrote:
> >>> Export the shrink_zone_span() and update_pgdat_span() functions via the
> >>> header file. We need to update the real number of spanned pages for NUMA
> >>> nodes and zones when we add a memory device node such as device dax memory.
> >>>
> >>
> >> Can you elaborate a bit more what you intend to fix?
> >>
> >> Memory onlining/offlining is responsible for updating the node/zone span,
> >> and that's triggered when the dax/kmem memory gets onlined/offlined.
> >>
> > Sure, sorry for the lack of explanation of the intended fix.
> >
> > Before the nvdimm memory is onlined via dax (devdax or fsdax), it belongs to
> > the CPU NUMA nodes and extends the spanned pages of their node/zone as
> > ZONE_DEVICE. That is not a problem, because the node/zone merely cover this
> > additional memory that is not yet visible to the system.
> > But once we online the dax memory, the zone[ZONE_DEVICE] pages of the CPU NUMA
> > node are hot-plugged into a new (CPU-less) NUMA node. I think there is no need
> > for the original node to keep those zone[ZONE_DEVICE] pages in its span.
> >
> > Additionally, the spanned pages are also used to calculate the end pfn of a
> > node. Thus, the node/zone page statistics need to be kept accurate.
> >
> > My machine has two CPU sockets, each populated with DRAM and Intel DCPMM
> > (DC Persistent Memory Modules) in App Direct mode.
> >
> > Below are my test results.
> >
> > Before memory onlining:
> >
> > # ndctl create-namespace --mode=devdax
> > # ndctl create-namespace --mode=devdax
> > # cat /proc/zoneinfo | grep -E "Node|spanned" | paste - -
> > Node 0, zone DMA spanned 4095
> > Node 0, zone DMA32 spanned 1044480
> > Node 0, zone Normal spanned 7864320
> > Node 0, zone Movable spanned 0
> > Node 0, zone Device spanned 66060288
> > Node 1, zone DMA spanned 0
> > Node 1, zone DMA32 spanned 0
> > Node 1, zone Normal spanned 8388608
> > Node 1, zone Movable spanned 0
> > Node 1, zone Device spanned 66060288
> >
> > After memory onlining:
> >
> > # daxctl reconfigure-device --mode=system-ram --no-online dax0.0
> > # daxctl reconfigure-device --mode=system-ram --no-online dax1.0
> >
> > # cat /proc/zoneinfo | grep -E "Node|spanned" | paste - -
> > Node 0, zone DMA spanned 4095
> > Node 0, zone DMA32 spanned 1044480
> > Node 0, zone Normal spanned 7864320
> > Node 0, zone Movable spanned 0
> > Node 0, zone Device spanned 66060288
> > Node 1, zone DMA spanned 0
> > Node 1, zone DMA32 spanned 0
> > Node 1, zone Normal spanned 8388608
> > Node 1, zone Movable spanned 0
> > Node 1, zone Device spanned 66060288
> > Node 2, zone DMA spanned 0
> > Node 2, zone DMA32 spanned 0
> > Node 2, zone Normal spanned 65011712
> > Node 2, zone Movable spanned 0
> > Node 2, zone Device spanned 0
> > Node 3, zone DMA spanned 0
> > Node 3, zone DMA32 spanned 0
> > Node 3, zone Normal spanned 65011712
> > Node 3, zone Movable spanned 0
> > Node 3, zone Device spanned 0
> >
> > As we can see, Node 0 and Node 1 still span ZONE_DEVICE pages after the memory
> > has been onlined. As a result, Node 0 and Node 2 report the same end pfn, and
> > the same problem exists for Node 1 and Node 3.
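
(For context: the identical end pfns follow directly from how the end pfn is
derived from the spanned pages. Roughly, these are the helpers involved --
zone_end_pfn() in include/linux/mmzone.h and pgdat_end_pfn() in
include/linux/memory_hotplug.h; quoted from memory, so treat this as a sketch
rather than the exact code:

	static inline unsigned long zone_end_pfn(const struct zone *zone)
	{
		/* zone span: [zone_start_pfn, zone_start_pfn + spanned_pages) */
		return zone->zone_start_pfn + zone->spanned_pages;
	}

	static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
	{
		/* node span: [node_start_pfn, node_start_pfn + node_spanned_pages) */
		return pgdat->node_start_pfn + pgdat->node_spanned_pages;
	}

So as long as the stale ZONE_DEVICE span stays accounted to Node 0/1, their
computed end pfns keep covering the device range that now belongs to Node 2/3.)
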
>
> Thanks for the information, that makes it clearer.
>
> While this is unfortunate, the node/zone span is fairly unreliable/unusable
> for user space. Nodes and zones can easily overlap.
>
> What counts are present/managed pages in the node/zone.
>
> So at least I don't count this as something that "needs fixing";
> it's more something that would be nice to handle better if easily possible.
>
> See below.
>
> >
> >>> Signed-off-by: Jonghyeon Kim <tome01@...u.ac.kr>
> >>> ---
> >>> include/linux/memory_hotplug.h | 3 +++
> >>> mm/memory_hotplug.c | 6 ++++--
> >>> 2 files changed, 7 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> >>> index be48e003a518..25c7f60c317e 100644
> >>> --- a/include/linux/memory_hotplug.h
> >>> +++ b/include/linux/memory_hotplug.h
> >>> @@ -337,6 +337,9 @@ extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
> >>> extern void remove_pfn_range_from_zone(struct zone *zone,
> >>> unsigned long start_pfn,
> >>> unsigned long nr_pages);
> >>> +extern void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> >>> + unsigned long end_pfn);
> >>> +extern void update_pgdat_span(struct pglist_data *pgdat);
> >>> extern bool is_memblock_offlined(struct memory_block *mem);
> >>> extern int sparse_add_section(int nid, unsigned long pfn,
> >>> unsigned long nr_pages, struct vmem_altmap *altmap);
> >>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> >>> index 2a9627dc784c..38f46a9ef853 100644
> >>> --- a/mm/memory_hotplug.c
> >>> +++ b/mm/memory_hotplug.c
> >>> @@ -389,7 +389,7 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
> >>> return 0;
> >>> }
> >>>
> >>> -static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> >>> +void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> >>> unsigned long end_pfn)
> >>> {
> >>> unsigned long pfn;
> >>> @@ -428,8 +428,9 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
> >>> }
> >>> }
> >>> }
> >>> +EXPORT_SYMBOL_GPL(shrink_zone_span);
> >>
> >> Exporting both as symbols feels very wrong. This is memory
> >> onlining/offlining internal stuff.
> >
> > I agree with your comment. I will look for another approach that avoids
> > directly using the onlining/offlining internals while updating the node/zone span.
>
> IIRC, to handle what you intend to handle properly, you'll want to look into teaching
> remove_pfn_range_from_zone() to handle zone_is_zone_device().
>
> There is a big fat comment:
>
> /*
> * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
> * we will not try to shrink the zones - which is okay as
> * set_zone_contiguous() cannot deal with ZONE_DEVICE either way.
> */
> if (zone_is_zone_device(zone))
> return;
>
>
> Similarly, try_offline_node() spells this out:
>
> /*
> * If the node still spans pages (especially ZONE_DEVICE), don't
> * offline it. A node spans memory after move_pfn_range_to_zone(),
> * e.g., after the memory block was onlined.
> */
> if (pgdat->node_spanned_pages)
> return;
>
>
> So once you handle remove_pfn_range_from_zone() cleanly, you'll cleanly handle
> try_offline_node() implicitly.
>
> Trying to update the node span manually, without teaching the node/zone shrinking code how
> to handle ZONE_DEVICE properly, is just a hack that will only sometimes work. In particular,
> it won't work if the range of interest is still surrounded by other ranges.
>
Thanks for pointing those out; I missed those comments.
I will keep working on how to handle the node/zone span updating properly.
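
For reference, the tail of remove_pfn_range_from_zone() currently looks roughly
like this (quoting recent mainline from memory, so take it as a sketch rather
than the exact code). The early return below is what would have to learn about
ZONE_DEVICE, and as far as I understand the shrinking helpers underneath
(find_smallest_section_pfn()/find_biggest_section_pfn()) rely on
pfn_to_online_page(), which never sees device pages:

	/*
	 * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
	 * we will not try to shrink the zones - which is okay as
	 * set_zone_contiguous() cannot deal with ZONE_DEVICE either way.
	 */
	if (zone_is_zone_device(zone))
		return;

	clear_zone_contiguous(zone);

	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
	update_pgdat_span(pgdat);

	set_zone_contiguous(zone);

So a proper fix would likely mean teaching shrink_zone_span()/update_pgdat_span()
(or the pfn walkers they use) how to account for ZONE_DEVICE ranges, rather than
exporting them and calling them from the dax/kmem side.
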
> --
> Thanks,
>
> David / dhildenb
>