Message-ID: <20200110144717.xufpf4yjkjlngymy@box>
Date:   Fri, 10 Jan 2020 17:47:17 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     David Hildenbrand <david@...hat.com>
Cc:     Michal Hocko <mhocko@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Vlastimil Babka <vbabka@...e.cz>, Mel Gorman <mgorman@...e.de>,
        "Jin, Zhi" <zhi.jin@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH] mm/page_alloc: Skip non present sections on zone
 initialization

On Fri, Jan 10, 2020 at 03:34:49PM +0100, David Hildenbrand wrote:
> On 10.01.20 14:45, Kirill A. Shutemov wrote:
> > On Fri, Jan 10, 2020 at 02:15:26PM +0100, David Hildenbrand wrote:
> >> On 08.01.20 15:40, Michal Hocko wrote:
> >>> On Mon 30-12-19 12:38:28, Kirill A. Shutemov wrote:
> >>>> memmap_init_zone() can be called on ranges with holes during boot. It
> >>>> skips any non-valid PFNs one by one, which works fine as long as the
> >>>> holes are not too big.
> >>>>
> >>>> But huge holes in the memory map cause a problem: it takes over 20
> >>>> seconds to walk a 32TiB hole. x86-64 with 5-level paging allows for much
> >>>> larger holes in the memory map, which would practically hang the system.
> >>>>
> >>>> Deferred struct page init doesn't help here. It only works on the
> >>>> present ranges.
> >>>>
> >>>> Skipping non-present sections would fix the issue.
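
(For scale: with 4KiB pages, a 32TiB hole is 32TiB / 4KiB = 2^33, i.e. roughly
8.6 billion pfns tested one by one; that's where the 20+ seconds go.)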
> >>>
> >>> Makes sense to me.
> >>>
> >>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> >>>
> >>> That pfn inc back and forth is quite ugly TBH but whatever.
> >>
> >> Indeed, can we please rewrite the loop to fix that?
> > 
> > Any suggestions?
> > 
> > I don't see an obvious way to do it without hurting readability somewhere else.
> > 
> 
> I'd probably do it like this (applied some other tweaks, untested)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cb766aac6772..a96b1ad1d74b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5859,6 +5859,22 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>         return false;
>  }
>  
> +static inline __meminit unsigned long next_present_pfn(unsigned long pfn)
> +{
> +#ifdef CONFIG_SPARSEMEM

I would rather keep the #ifdef around the whole function, but it's a matter of taste.

> +       unsigned long section_nr = pfn_to_section_nr(pfn + 1);
> +
> +       /*
> +        * Note: We don't check the subsection bitmap, so this can produce
> +        * false positives when only subsections are present/valid. The
> +        * caller should recheck if the returned pfn is valid.
> +        */
> +       if (!present_section_nr(section_nr))
> +               return section_nr_to_pfn(next_present_section_nr(section_nr));

This won't compile. next_present_section_nr() is static to mm/sparse.c.
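
If we don't want to export it, one (rough, untested) option is to open-code
the walk over present sections right here, using only pfn_to_section_nr(),
present_section_nr(), section_nr_to_pfn() and NR_MEM_SECTIONS as the
CONFIG_SPARSEMEM body:

	unsigned long section_nr = pfn_to_section_nr(pfn + 1);

	if (present_section_nr(section_nr))
		return pfn + 1;

	/* Step over all non-present sections in one go. */
	while (++section_nr < NR_MEM_SECTIONS) {
		if (present_section_nr(section_nr))
			return section_nr_to_pfn(section_nr);
	}

	/* No present section left: this is past any possible pfn. */
	return section_nr_to_pfn(NR_MEM_SECTIONS);

The caller still has to recheck the returned pfn against end_pfn and
early_pfn_valid(), as the comment in your helper says.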

> +#endif
> +       return pfn + 1;
> +}
> +
>  /*
>   * Initially all pages are reserved - free ones are freed
>   * up by memblock_free_all() once the early boot process is
> @@ -5892,18 +5908,22 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>         }
>  #endif
>  
> -       for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> +       pfn = start_pfn;
> +       while (pfn < end_pfn) {
>                 /*
>                  * There can be holes in boot-time mem_map[]s handed to this
>                  * function.  They do not exist on hotplugged memory.
>                  */
>                 if (context == MEMMAP_EARLY) {
> -                       if (!early_pfn_valid(pfn))
> +                       if (!early_pfn_valid(pfn)) {
> +                               pfn = next_present_pfn(pfn);
>                                 continue;
> -                       if (!early_pfn_in_nid(pfn, nid))
> -                               continue;
> -                       if (overlap_memmap_init(zone, &pfn))
> +                       }
> +                       if (!early_pfn_in_nid(pfn, nid) ||
> +                           overlap_memmap_init(zone, &pfn)) {
> +                               pfn++;
>                                 continue;
> +                       }
>                         if (defer_init(nid, pfn, end_pfn))
>                                 break;
>                 }
> @@ -5929,6 +5949,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>                         set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>                         cond_resched();
>                 }
> +               pfn++;
>         }
> 
> 
> I played with using a "pfn = next_init_pfn()" in the for loop instead, moving all
> the checks in there, but it didn't turn out too well.

Well, it's better than I thought, but... I'm fine either way.
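
FWIW, here's roughly how I'd picture that variant (hypothetical
next_init_pfn(), untested). Part of what makes it awkward: the very first
pfn still needs its own early_pfn_valid() check, because the for-loop
increment only runs after an iteration.

	/* Advance one pfn, then hop over any hole in one go. */
	static inline __meminit unsigned long
	next_init_pfn(unsigned long pfn, unsigned long end_pfn)
	{
		pfn++;
		while (pfn < end_pfn && !early_pfn_valid(pfn))
			pfn = next_present_pfn(pfn);
		return pfn;
	}

	for (pfn = start_pfn; pfn < end_pfn;
	     pfn = next_init_pfn(pfn, end_pfn)) {
		if (context == MEMMAP_EARLY) {
			/* Only start_pfn itself can be invalid here. */
			if (!early_pfn_valid(pfn))
				continue;
			if (!early_pfn_in_nid(pfn, nid) ||
			    overlap_memmap_init(zone, &pfn))
				continue;
			if (defer_init(nid, pfn, end_pfn))
				break;
		}
		/* ... rest of the init body unchanged ... */
	}

It does pull the hole skipping out of the loop body, but the MEMMAP_EARLY
special cases still don't fold in nicely.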

-- 
 Kirill A. Shutemov
