Date:   Wed, 19 Aug 2020 15:05:46 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Wei Yang <richard.weiyang@...ux.alibaba.com>,
        Baoquan He <bhe@...hat.com>,
        Pankaj Gupta <pankaj.gupta.linux@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH v1 09/11] mm/page_alloc: drop stale pageblock comment in
 memmap_init_zone*()

On Wed 19-08-20 12:11:55, David Hildenbrand wrote:
> Commit ac5d2539b238 ("mm: meminit: reduce number of times pageblocks are
> set during struct page init") moved the actual zone range check, leaving
> only the alignment check for pageblocks.
> 
> Let's drop the stale comment and make the pageblock check easier to read.

I do agree that IS_ALIGNED is easier to read in this case.
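As an aside, the two forms are bit-for-bit equivalent because
pageblock_nr_pages is 1UL << pageblock_order, i.e. always a power of
two. A minimal user-space sketch (illustrative only, not kernel code)
that exercises both checks side by side:

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Mirrors the kernel's IS_ALIGNED(): x is aligned to a iff the low
     * bits of x below a are all zero. Only valid for power-of-two a.
     */
    #define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

    int main(void)
    {
    	/* 512 pfns per pageblock: 2 MiB blocks with 4 KiB pages */
    	const uint64_t pageblock_nr_pages = 512;

    	for (uint64_t pfn = 0; pfn < 4096; pfn++) {
    		int old = !(pfn & (pageblock_nr_pages - 1));	/* old check */
    		int new = IS_ALIGNED(pfn, pageblock_nr_pages);	/* new check */
    		if (old != new)
    			printf("mismatch at pfn %llu\n",
    			       (unsigned long long)pfn);
    	}
    	printf("both checks agree on all tested pfns\n");
    	return 0;
    }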

> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Wei Yang <richard.weiyang@...ux.alibaba.com>
> Cc: Baoquan He <bhe@...hat.com>
> Cc: Pankaj Gupta <pankaj.gupta.linux@...il.com>
> Cc: Oscar Salvador <osalvador@...e.de>
> Cc: Mel Gorman <mgorman@...e.de>
> Signed-off-by: David Hildenbrand <david@...hat.com>

Acked-by: Michal Hocko <mhocko@...e.com>

> ---
>  mm/page_alloc.c | 14 ++------------
>  1 file changed, 2 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 848664352dfe2..5db0b35f95e20 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6022,13 +6022,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		 * to reserve their blocks rather than leaking throughout
>  		 * the address space during boot when many long-lived
>  		 * kernel allocations are made.
> -		 *
> -		 * bitmap is created for zone's valid pfn range. but memmap
> -		 * can be created for invalid pages (for alignment)
> -		 * check here not to call set_pageblock_migratetype() against
> -		 * pfn out of zone.
>  		 */
> -		if (!(pfn & (pageblock_nr_pages - 1))) {
> +		if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
>  			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>  			cond_resched();
>  		}
> @@ -6091,15 +6086,10 @@ void __ref memmap_init_zone_device(struct zone *zone,
>  		 * the address space during boot when many long-lived
>  		 * kernel allocations are made.
>  		 *
> -		 * bitmap is created for zone's valid pfn range. but memmap
> -		 * can be created for invalid pages (for alignment)
> -		 * check here not to call set_pageblock_migratetype() against
> -		 * pfn out of zone.
> -		 *
>  		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
>  		 * because this is done early in section_activate()
>  		 */
> -		if (!(pfn & (pageblock_nr_pages - 1))) {
> +		if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
>  			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>  			cond_resched();
>  		}
> -- 
> 2.26.2
> 

-- 
Michal Hocko
SUSE Labs
