Message-ID: <ZgRHgL1zbQc2DJlc@kernel.org>
Date: Wed, 27 Mar 2024 18:21:20 +0200
From: Mike Rapoport <rppt@...nel.org>
To: Baoquan He <bhe@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, x86@...nel.org,
	linuxppc-dev@...ts.ozlabs.org, akpm@...ux-foundation.org
Subject: Re: [PATCH v2 5/6] mm/mm_init.c: remove unneeded calc_memmap_size()

On Mon, Mar 25, 2024 at 10:56:45PM +0800, Baoquan He wrote:
> Nobody calls calc_memmap_size() now.
> 
> Signed-off-by: Baoquan He <bhe@...hat.com>

Reviewed-by: Mike Rapoport (IBM) <rppt@...nel.org>

Looks like I replied to patch 6/6 twice by mistake and missed this one.

> ---
>  mm/mm_init.c | 20 --------------------
>  1 file changed, 20 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 7f71e56e83f3..e269a724f70e 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1331,26 +1331,6 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
>  	pr_debug("On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
>  }
>  
> -static unsigned long __init calc_memmap_size(unsigned long spanned_pages,
> -						unsigned long present_pages)
> -{
> -	unsigned long pages = spanned_pages;
> -
> -	/*
> -	 * Provide a more accurate estimation if there are holes within
> -	 * the zone and SPARSEMEM is in use. If there are holes within the
> -	 * zone, each populated memory region may cost us one or two extra
> -	 * memmap pages due to alignment because memmap pages for each
> -	 * populated region may not be naturally aligned on a page boundary.
> -	 * So the (present_pages >> 4) heuristic is a tradeoff for that.
> -	 */
> -	if (spanned_pages > present_pages + (present_pages >> 4) &&
> -	    IS_ENABLED(CONFIG_SPARSEMEM))
> -		pages = present_pages;
> -
> -	return PAGE_ALIGN(pages * sizeof(struct page)) >> PAGE_SHIFT;
> -}
> -
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  static void pgdat_init_split_queue(struct pglist_data *pgdat)
>  {
> -- 
> 2.41.0
> 
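As a side note for the archive, the removed heuristic treated a zone as
holey once spanned_pages exceeded present_pages by more than 1/16, and
in that case sized the memmap from present_pages instead of
spanned_pages. Below is a minimal userspace sketch of the same
computation; the constants are illustrative (4096-byte pages, a 64-byte
stand-in for sizeof(struct page)), and memmap_pages() is a hypothetical
name, not a kernel symbol:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define PAGE_SHIFT	12
	#define STRUCT_PAGE	64UL	/* stand-in for sizeof(struct page) */
	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

	static unsigned long memmap_pages(unsigned long spanned,
					  unsigned long present)
	{
		unsigned long pages = spanned;

		/* Holes exceed 1/16 of present pages: size from present. */
		if (spanned > present + (present >> 4))
			pages = present;

		return PAGE_ALIGN(pages * STRUCT_PAGE) >> PAGE_SHIFT;
	}

	int main(void)
	{
		/* 1M spanned pages, only 640K present: heuristic kicks in. */
		printf("%lu memmap pages\n",
		       memmap_pages(1UL << 20, 640UL << 10));
		return 0;
	}

With the assumed constants this prints 10240, i.e. the memmap estimate
is driven by the 640K present pages rather than the 1M spanned pages.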

-- 
Sincerely yours,
Mike.
