Message-ID: <CAGM2reZuGAPmfO8x0TnHnqHci_Hsga3-CfM9+udJs=gUQCw-1g@mail.gmail.com>
Date: Mon, 10 Sep 2018 14:11:45 +0000
From: Pasha Tatashin <Pavel.Tatashin@...rosoft.com>
To: "mhocko@...nel.org" <mhocko@...nel.org>
CC: "zaslonko@...ux.ibm.com" <zaslonko@...ux.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
"osalvador@...e.de" <osalvador@...e.de>,
"gerald.schaefer@...ibm.com" <gerald.schaefer@...ibm.com>
Subject: Re: [PATCH] memory_hotplug: fix the panic when memory end is not on
the section boundary
Hi Michal,
It is tricky, but it can probably be done: change
memmap_init_zone() or its caller to also cover the starts and ends of
unaligned sections, so those pages are initialized and reserved.
The same thing would also need to be done in deferred_init_memmap() to
cover the deferred init case.
For hotplugged memory we do not need to worry, as it is always
section aligned.
Do you think this would be a better approach?
Thank you,
Pavel
On Mon, Sep 10, 2018 at 10:00 AM Michal Hocko <mhocko@...nel.org> wrote:
>
> On Mon 10-09-18 13:46:45, Pavel Tatashin wrote:
> >
> >
> > On 9/10/18 9:17 AM, Michal Hocko wrote:
> > > [Cc Pavel]
> > >
> > > On Mon 10-09-18 14:35:27, Mikhail Zaslonko wrote:
> > >> If the memory end is not aligned with the Linux memory section boundary,
> > >> such a section is only partly initialized. This may lead to a VM_BUG_ON
> > >> due to access to uninitialized struct pages from the
> > >> is_mem_section_removable() or test_pages_in_a_zone() function.
> > >>
> > >> Here is one of the panic examples:
> > >> CONFIG_DEBUG_VM_PGFLAGS=y
> > >> kernel parameter mem=3075M
> > >
> > > OK, so the last memory section is not full and we have a partial memory
> > > block, right?
> > >
> > >> page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
> > >
> > > OK, this means that the struct page is not fully initialized. Do you
> > > have a specific place which has triggered this assert?
> > >
> > >> ------------[ cut here ]------------
> > >> Call Trace:
> > >> ([<000000000039b8a4>] is_mem_section_removable+0xcc/0x1c0)
> > >> [<00000000009558ba>] show_mem_removable+0xda/0xe0
> > >> [<00000000009325fc>] dev_attr_show+0x3c/0x80
> > >> [<000000000047e7ea>] sysfs_kf_seq_show+0xda/0x160
> > >> [<00000000003fc4e0>] seq_read+0x208/0x4c8
> > >> [<00000000003cb80e>] __vfs_read+0x46/0x180
> > >> [<00000000003cb9ce>] vfs_read+0x86/0x148
> > >> [<00000000003cc06a>] ksys_read+0x62/0xc0
> > >> [<0000000000c001c0>] system_call+0xdc/0x2d8
> > >>
> > >> This fix checks whether the page lies within the zone boundaries before
> > >> accessing the struct page data. The check is added to both functions.
> > >> A similar check is already present in the
> > >> is_pageblock_removable_nolock() function, but only after the struct page
> > >> has been accessed.
> > >>
> > >
> > > Well, I am afraid this is not the proper solution. We rely on a
> > > full pageblock worth of initialized struct pages in many other places. We
> > > could do that in the past because we initialized the full
> > > section, but this has changed recently. Pavel, do you have any ideas
> > > how to deal with these partial mem sections now?
> >
> > We have:
> >
> > remove_memory()
> > BUG_ON(check_hotplug_memory_range(start, size))
> >
> > That is supposed to safely check for this condition: if [start, start +
> > size) is not block size aligned (and we know block size is section
> > aligned), hot remove is not allowed. The problem is that this check
> > happens late, only after an invalid range has already passed through the
> > previous checks.
> >
> > We could add check_hotplug_memory_range() to is_mem_section_removable():
> >
> > is_mem_section_removable(start_pfn, nr_pages)
> >         if (check_hotplug_memory_range(PFN_PHYS(start_pfn), PFN_PHYS(nr_pages)))
> >                 return false;
> >
> > I think it should work.
>
> I do not think we want to sprinkle these tests over all pfn walkers. Can
> we simply initialize those uninitialized holes as well and make them
> reserved, without handing them over to the page allocator? That would be
> a much more robust approach IMHO.
> --
> Michal Hocko
> SUSE Labs
>