Message-ID: <20210111140049.GG832698@linux.ibm.com>
Date:   Mon, 11 Jan 2021 16:00:49 +0200
From:   Mike Rapoport <rppt@...ux.ibm.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     Anshuman Khandual <anshuman.khandual@....com>,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        catalin.marinas@....com, will@...nel.org, ardb@...nel.org,
        Mark Rutland <mark.rutland@....com>,
        James Morse <james.morse@....com>,
        Robin Murphy <robin.murphy@....com>,
        Jérôme Glisse <jglisse@...hat.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Michael Ellerman <michael@...erman.id.au>
Subject: Re: [RFC 1/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory

On Mon, Jan 11, 2021 at 11:31:02AM +0100, David Hildenbrand wrote:
> On 04.01.21 07:18, Anshuman Khandual wrote:

...

> >> Actually, I think we want to check for partial present sections.
> >>
> >> Maybe we can rather switch to generic pfn_valid() and tweak it to
> >> something like
> >>
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index fb3bf696c05e..7b1fcce5bd5a 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -1382,9 +1382,13 @@ static inline int pfn_valid(unsigned long pfn)
> >>                 return 0;
> >>         /*
> >>          * Traditionally early sections always returned pfn_valid() for
> >> -        * the entire section-sized span.
> >> +        * the entire section-sized span. Some archs might have holes in
> >> +        * early sections, so double check with memblock if configured.
> >>          */
> >> -       return early_section(ms) || pfn_section_valid(ms, pfn);
> >> +       if (early_section(ms))
> >> +               return IS_ENABLED(CONFIG_EARLY_SECTION_MEMMAP_HOLES) ?
> >> +                      memblock_is_map_memory(pfn << PAGE_SHIFT) : 1;
> >> +       return pfn_section_valid(ms, pfn);
> >>  }
> >>  #endif
> > 
> > Could not find CONFIG_EARLY_SECTION_MEMMAP_HOLES. Are you suggesting we
> > create this config to track platform scenarios, such as arm64, where early
> > sections might not have full memmap coverage?
> 
> Yes, a new config that states what's actually happening.
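
Presumably that would be something like the following (a hypothetical
Kconfig sketch; the option name is just the one suggested in this
thread):

config EARLY_SECTION_MEMMAP_HOLES
	bool
	help
	  Selected by architectures that free unused parts of the memory
	  map inside early sections, so that the generic pfn_valid() will
	  double check such pfns with memblock.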

Just to clarify, there are several architectures that may free parts of
the memmap with FLATMEM/SPARSEMEM, and this functionality is gated by
CONFIG_HAVE_ARCH_PFN_VALID.

So if arm64 is to use a generic version, this should also be taken into
account.
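
For reference, arm64's CONFIG_HAVE_ARCH_PFN_VALID implementation is
currently roughly the following (simplified from arch/arm64/mm/init.c,
quoting from memory, so the details may differ):

int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = pfn << PAGE_SHIFT;

	/* Reject pfns that do not round-trip through phys_addr_t */
	if ((addr >> PAGE_SHIFT) != pfn)
		return 0;

#ifdef CONFIG_SPARSEMEM
	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;

	if (!valid_section(__pfn_to_section(pfn)))
		return 0;
#endif
	/* The final word comes from memblock, not from the section map */
	return memblock_is_map_memory(addr);
}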

> >> Which users remain that require us to add/remove memblocks when
> >> hot(un)plugging memory?
> >>
> >>  $ git grep KEEP_MEM | grep memory_hotplug
> >> mm/memory_hotplug.c:    if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
> >> mm/memory_hotplug.c:    if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
> >> mm/memory_hotplug.c:    if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
> > 
> > I did not follow; do we want to drop ARCH_KEEP_MEMBLOCK? Without it, arm64
> > will not be able to track MEMBLOCK_NOMAP memory at runtime.
> 
> I'd only like the hot(un)plug parts gone for now, if possible: I don't
> really see a need for that handling that could not be met more simply,
> as in the proposed pfn_valid() changes.
> 
> I understand that the current handling of memory holes in early sections
> and of memory marked as MEMBLOCK_NOMAP requires ARCH_KEEP_MEMBLOCK for now.
 
This is mostly about the holes in the memory map. I believe it does not
matter whether that memory is NOMAP or not; the holes in the memmap are
punched for anything that is not in memblock.memory.

It seems to me that for arm64's pfn_valid() we can safely replace
memblock_is_map_memory() with memblock_is_memory(), not that it would
change much.
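
The two helpers are nearly identical anyway; quoting mm/memblock.c from
memory (so modulo details), the only difference is the NOMAP check:

bool __init_memblock memblock_is_memory(phys_addr_t addr)
{
	return memblock_search(&memblock.memory, addr) != -1;
}

bool __init_memblock memblock_is_map_memory(phys_addr_t addr)
{
	int i = memblock_search(&memblock.memory, addr);

	if (i == -1)
		return false;
	/* In memblock.memory, but possibly marked MEMBLOCK_NOMAP */
	return !memblock_is_nomap(&memblock.memory.regions[i]);
}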

> >> I think one user we would have to handle is
> >> arch/arm64/mm/mmap.c:valid_phys_addr_range(). AFAIS, powerpc at least
> >> does not rely on memblock_is_map_memory.
> > 
> > memblock_is_map_memory() is currently used only on arm/arm64 platforms.
> > Apart from the above example in valid_phys_addr_range(), there are some
> > other memblock_is_map_memory() call sites as well. But then, we are not
> > trying to completely drop memblock_is_map_memory(), are we?
> 
> No, just change the semantics: only relevant for early sections. Imagine
> freezing MEMBLOCK state after boot.
> 
> Only early sections can have memory holes and might be marked
> MEMBLOCK_NOMAP. For hotplugged memory, we don't have to call
> memblock_is_map_memory().
>
> -- 
> Thanks,
> 
> David / dhildenb
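
For completeness, the valid_phys_addr_range() user mentioned above is
tiny; from memory it is roughly this (arch/arm64/mm/mmap.c, details may
differ):

int valid_phys_addr_range(phys_addr_t addr, size_t size)
{
	/*
	 * Check whether addr is covered by a memory region to which
	 * caching and buffering can be applied.
	 */
	return memblock_is_map_memory(addr);
}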

-- 
Sincerely yours,
Mike.
