Message-ID: <20171201142327.GA16952@ubuntu-xps13>
Date: Fri, 1 Dec 2017 08:23:27 -0600
From: Seth Forshee <seth.forshee@...onical.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: Memory hotplug regression in 4.13

On Mon, Sep 25, 2017 at 02:58:25PM +0200, Michal Hocko wrote:
> On Thu 21-09-17 00:40:34, Seth Forshee wrote:
> > On Wed, Sep 20, 2017 at 11:29:31AM +0200, Michal Hocko wrote:
> > > Hi,
> > > I am currently at a conference, so I will most probably get to this
> > > next week, but I will try to look at it ASAP.
> > >
> > > On Tue 19-09-17 11:41:14, Seth Forshee wrote:
> > > > Hi Michal,
> > > >
> > > > I'm seeing oopses in various locations when hotplugging memory in an x86
> > > > vm while running a 32-bit kernel. The config I'm using is attached. To
> > > > reproduce I'm using kvm with the memory options "-m
> > > > size=512M,slots=3,maxmem=2G". Then in the qemu monitor I run:
> > > >
> > > > object_add memory-backend-ram,id=mem1,size=512M
> > > > device_add pc-dimm,id=dimm1,memdev=mem1
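> > > >
> > > > For reference, the full invocation is roughly the following; the
> > > > kernel, disk, and console arguments here are placeholders rather
> > > > than my exact setup:
> > > >
> > > >     qemu-system-x86_64 -enable-kvm \
> > > >         -m size=512M,slots=3,maxmem=2G \
> > > >         -kernel bzImage -append "root=/dev/vda console=ttyS0" \
> > > >         -drive file=rootfs.img,if=virtio \
> > > >         -monitor stdio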
> > > >
> > > > Not long after that I'll see an oops, not always in the same location
> > > > but most often in wp_page_copy, like this one:
> > >
> > > This is rather surprising. How do you online the memory?
> >
> > The kernel has CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y.
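> >
> > Without that option the new blocks would stay offline until onlined by
> > hand (or by a udev rule), along the lines of:
> >
> >     echo online > /sys/devices/system/memory/memory<N>/state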
>
> OK, so the memory gets onlined automagically at the time it is
> hot-added. Could you send the full dmesg?
>
> > > > [ 24.673623] BUG: unable to handle kernel paging request at dffff000
> > > > [ 24.675569] IP: wp_page_copy+0xa8/0x660
> > >
> > > could you resolve the IP into the source line?
> >
> > It seems I don't have that kernel anymore, but I've got a 4.14-rc1 build
> > and the problem still occurs there. It's pointing to the call to
> > __builtin_memcpy in memcpy (include/linux/string.h line 340), which we
> > get to via wp_page_copy -> cow_user_page -> copy_user_highpage.
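> >
> > (For reference, one way to do that lookup is the kernel's in-tree
> > faddr2line helper, something like
> >
> >     ./scripts/faddr2line vmlinux wp_page_copy+0xa8/0x660
> >
> > given a vmlinux built with debug info.)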
>
> Hmm, this is interesting. That would mean that we have successfully
> mapped the destination page but its memory is still not accessible.
>
> Right now I do not see how the patch you bisected to could make any
> difference, because it only decoupled onlining into a separate, later
> step; your config onlines automatically, so there shouldn't be any
> semantic change. Maybe there is some sort of off-by-one or something.
>
> I will try to investigate some more. Do you think it would be possible
> to configure kdump on your system and provide me with the vmcore in some
> way?

Sorry, I got busy with other stuff and this kind of fell off my radar.
It came to my attention again recently though.

I was looking through the hotplug rework changes, and I noticed that
32-bit x86 previously used ZONE_HIGHMEM as the default, but after the
rework it doesn't look like it's possible for memory to be associated
with ZONE_HIGHMEM when onlining. So I made the change below against 4.14
and am no longer seeing the oopses.

I'm sure this isn't the correct fix, but I think it does confirm that
the problem is that the memory should be associated with ZONE_HIGHMEM
but is not.

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index d4b5f29906b9..fddc134c5c3b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -833,6 +833,12 @@ void __ref move_pfn_range_to_zone(struct zone *zone,
 	set_zone_contiguous(zone);
 }
 
+#ifdef CONFIG_HIGHMEM
+static enum zone_type default_zone = ZONE_HIGHMEM;
+#else
+static enum zone_type default_zone = ZONE_NORMAL;
+#endif
+
 /*
  * Returns a default kernel memory zone for the given pfn range.
  * If no kernel zone covers this pfn range it will automatically go
@@ -844,14 +850,14 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	int zid;
 
-	for (zid = 0; zid <= ZONE_NORMAL; zid++) {
+	for (zid = 0; zid <= default_zone; zid++) {
 		struct zone *zone = &pgdat->node_zones[zid];
 
 		if (zone_intersects(zone, start_pfn, nr_pages))
 			return zone;
 	}
 
-	return &pgdat->node_zones[ZONE_NORMAL];
+	return &pgdat->node_zones[default_zone];
 }
 
 static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn,
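
One way to double-check where a hotplugged block ended up is its sysfs
entry (the block number below is just a placeholder):

    cat /sys/devices/system/memory/memory<N>/valid_zones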