Message-ID: <CAPcyv4iN4t2P_rQS23E7Bb-eLUAt389Y5t4X-yoRQrxvsN3DWQ@mail.gmail.com>
Date: Wed, 6 Jan 2021 12:02:49 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: David Hildenbrand <david@...hat.com>
Cc: Michal Hocko <mhocko@...e.com>, Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions
On Wed, Jan 6, 2021 at 3:23 AM David Hildenbrand <david@...hat.com> wrote:
>
> On 06.01.21 11:42, Michal Hocko wrote:
> > On Wed 06-01-21 10:56:19, David Hildenbrand wrote:
> > [...]
> >> Note that this is not sufficient in the general case. I already
> >> mentioned that we effectively override an already initialized memmap.
> >>
> >> ---
> >>
> >> [ SECTION ]
> >> Before:
> >> [ ZONE_NORMAL ][ Hole ]
> >>
> >> The hole has some node/zone (currently 0/0, discussions ongoing on how
> >> to optimize that to e.g., ZONE_NORMAL in this example) and is
> >> PG_reserved - looks like an ordinary memory hole.
> >>
> >> After memremap:
> >> [ ZONE_NORMAL ][ ZONE_DEVICE ]
> >>
> >> The already initialized memmap was converted to ZONE_DEVICE. Your
> >> slowpath will work.
> >>
> >> After memunmap (no poisoning):
> >> [ ZONE_NORMAL ][ ZONE_DEVICE ]
> >>
> >> The slow path is no longer working. pfn_to_online_page() might return
> >> something that is ZONE_DEVICE.
> >>
> >> After memunmap (poisoning):
> >> [ ZONE_NORMAL ][ POISONED ]
> >>
> >> The slow path is no longer working. pfn_to_online_page() might return
> >> something that will BUG_ON via page_to_nid() etc.
> >>
> >> ---
> >>
> >> The reason is that pfn_to_online_page() does not care about sub-sections. And
> >> for now, it didn't have to. If there was an online section, it either was
> >>
> >> a) Completely present. The whole memmap is initialized to sane values.
> >> b) Partially present. The whole memmap is initialized to sane values.
> >>
> >> memremap/memunmap messes with case b)
> >
> > I do not see we ever clear the newly added flag and my understanding is
> > that the subsection removed would lead to get_dev_pagemap returning a
> > NULL. Which would obviously need to be checked for pfn_to_online_page.
> > Or do I miss anything and the above is not the case and we could still
> > get false positives?
>
> See my example above ("After memunmap").
>
> We're still in the slow path. pfn_to_online_page() will return a struct
> page as get_dev_pagemap() is now NULL.
>
> Yet page_zone(page) will either
> - BUG_ON (memmap was poisoned)
> - return ZONE_DEVICE zone (memmap not poisoned when memunmapping)
>
> As I said, can be tackled by checking for pfn_section_valid() at least
> on the slow path. Ideally also on the fast path.
Good eye, I glossed over the fact that the existing pfn_section_valid()
check in pfn_valid() is obviated by early_section(). I'll respin with a
standalone pfn_section_valid() gate in pfn_to_online_page().
>
> >
> >> We'll have to further tweak pfn_to_online_page(). You'll have to also
> >> check pfn_section_valid() *at least* on the slow path. Less hacky would
> >> be checking it also in the "somewhat-faster" path - that would cover
> >> silently overriding a memmap that's visible via pfn_to_online_page().
> >> Might slow down things a bit.
> >>
> >>
> >> Not completely opposed to this, but I would certainly still prefer just
> >> avoiding this corner case completely instead of patching around it. Thanks!
> >
> > Well, I would love to have no surprises either. So far there was not
> > actual argument why the pmem reserved space cannot be fully initialized.
>
> Yes, I'm still hoping Dan can clarify that.
Complexity and effective utility (once pfn_to_online_page() is fixed)
are the roadblocks in my mind. The altmap is there to allow for PMEM
capacity to be used as memmap space, so there would need to be code to
break that circular dependency and allocate a memmap for the metadata
space from DRAM and the rest of the memmap space for the data capacity
from pmem itself. That memmap-for-pmem-metadata will still represent
offline pages. So once pfn_to_online_page() is fixed, what pfn-walker
is going to be doing pfn_to_page() on PMEM metadata? Secondly, there
is a PMEM namespace mode called "raw" that eschews DAX and 'struct
page' for pmem and just behaves like a memory-backed block device. The
end result is still that pfn walkers need to discover if a PMEM pfn
has a page, so I don't see what "sometimes there's a
memmap-for-pmem-metadata" buys us?
>
> > On the other hand making sure that pfn_to_online_page works correctly
> > sounds like the right thing to do. And having an explicit check for
> > zone device there in a slow path makes sense to me.
>
> As I said, I'd favor to simplify and just get rid of the special case,
> instead of coming up with increasingly complex ways to deal with it.
> pfn_to_online_page() used to be simple, essentially checking a single
> flag was sufficient in most setups.
I think the logic to throw away System RAM that might collide with
PMEM and soft-reserved memory within a section is on the order of the
same code complexity as the patch proposed here, no? Certainly the
throw-away concept itself is easier to grasp, but I don't think that
would be reflected in the code patch to achieve it... willing to be
proved wrong with a patch.