Date:   Fri, 9 Dec 2016 15:52:34 +0100
From:   Robert Richter <robert.richter@...ium.com>
To:     Yisheng Xie <xieyisheng1@...wei.com>
CC:     Hanjun Guo <hanjun.guo@...aro.org>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will.deacon@....com>,
        David Daney <david.daney@...ium.com>,
        Mark Rutland <mark.rutland@....com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] arm64: mm: Fix memmap to be initialized for the
 entire section

On 09.12.16 21:15:12, Yisheng Xie wrote:
> For invalid pages, their zone and node information is not initialized, and
> they do risk triggering the BUG_ON. So I have a (perhaps silly) question:
> why not just change the BUG_ON:

We need to get the page handling correct. Modifying the BUG_ON() just
hides that something is wrong.

-Robert

> -----------
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6de9440..af199b8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1860,12 +1860,13 @@ int move_freepages(struct zone *zone,
>          * Remove at a later date when no bug reports exist related to
>          * grouping pages by mobility
>          */
> -       VM_BUG_ON(page_zone(start_page) != page_zone(end_page));
> +       VM_BUG_ON(early_pfn_valid(page_to_pfn(start_page)) &&
> +                       early_pfn_valid(page_to_pfn(end_page)) &&
> +                       page_zone(start_page) != page_zone(end_page));
>  #endif
> 
>         for (page = start_page; page <= end_page;) {
>                 /* Make sure we are not inadvertently changing nodes */
> -               VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
> +               VM_BUG_ON_PAGE(early_pfn_valid(page_to_pfn(page)) &&
> +                               (page_to_nid(page) != zone_to_nid(zone)), page);
> 
>                 if (!pfn_valid_within(page_to_pfn(page))) {
>                         page++;
