Message-Id: <20191122100913.264411908@linuxfoundation.org>
Date: Fri, 22 Nov 2019 11:26:13 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Oscar Salvador <osalvador@...e.de>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Dan Williams <dan.j.williams@...el.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 4.19 008/220] mm/memory_hotplug: fix updating the node span

From: David Hildenbrand <david@...hat.com>

commit 656d571193262a11c2daa4012e53e4d645bbce56 upstream.
We recently started updating the node span based on the zone span to
avoid touching uninitialized memmaps.

Currently, we will always detect the node span to start at 0, meaning a
node can easily span too many pages. pgdat_is_empty() will still work
correctly if all zones span no pages. We should skip over all zones
without spanned pages and properly handle the first detected zone that
spans pages.

Unfortunately, in contrast to the zone span (/proc/zoneinfo), the node
span cannot easily be inspected and tested. The node span gives no real
guarantees when an architecture supports memory hotplug, meaning it can
easily contain holes or span pages of different nodes.

The node span is not really used after init on architectures that
support memory hotplug.

E.g., we use it in mm/memory_hotplug.c:try_offline_node() and in
mm/kmemleak.c:kmemleak_scan(). These users seem to be fine.
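
For reference, here is a minimal sketch of what the fixed loop ends up
doing. It is reconstructed from the hunk below plus its surrounding
context; the lines outside the hunk (the loop/declaration boilerplate
and the final assignments to pgdat) are an assumption based on the
existing code and are not changed by this patch:

	static void update_pgdat_span(struct pglist_data *pgdat)
	{
		unsigned long node_start_pfn = 0, node_end_pfn = 0;
		struct zone *zone;

		for (zone = pgdat->node_zones;
		     zone < pgdat->node_zones + MAX_NR_ZONES; zone++) {
			unsigned long zone_end_pfn = zone->zone_start_pfn +
						     zone->spanned_pages;

			/* No need to lock the zones, they can't change. */
			if (!zone->spanned_pages)
				continue;	/* empty zone: ignore it entirely */
			if (!node_end_pfn) {
				/* first zone with pages seeds the node span */
				node_start_pfn = zone->zone_start_pfn;
				node_end_pfn = zone_end_pfn;
				continue;
			}

			/* later zones with pages can only widen the span */
			if (zone_end_pfn > node_end_pfn)
				node_end_pfn = zone_end_pfn;
			if (zone->zone_start_pfn < node_start_pfn)
				node_start_pfn = zone->zone_start_pfn;
		}

		pgdat->node_start_pfn = node_start_pfn;
		pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
	}

If all zones are empty, node_start_pfn and node_end_pfn both stay 0, so
pgdat_is_empty() keeps working as described above.
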
Link: http://lkml.kernel.org/r/20191027222714.5313-1-david@redhat.com
Fixes: 00d6c019b5bc ("mm/memory_hotplug: don't access uninitialized memmaps in shrink_pgdat_span()")
Signed-off-by: David Hildenbrand <david@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Pavel Tatashin <pasha.tatashin@...een.com>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 mm/memory_hotplug.c |    8 ++++++++
 1 file changed, 8 insertions(+)

--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -448,6 +448,14 @@ static void update_pgdat_span(struct pgl
 					     zone->spanned_pages;
 
 		/* No need to lock the zones, they can't change. */
+		if (!zone->spanned_pages)
+			continue;
+		if (!node_end_pfn) {
+			node_start_pfn = zone->zone_start_pfn;
+			node_end_pfn = zone_end_pfn;
+			continue;
+		}
+
 		if (zone_end_pfn > node_end_pfn)
 			node_end_pfn = zone_end_pfn;
 		if (zone->zone_start_pfn < node_start_pfn)