Message-ID: <20180228000319.GD168047@rodete-desktop-imager.corp.google.com>
Date:   Wed, 28 Feb 2018 09:03:19 +0900
From:   Minchan Kim <minchan@...nel.org>
To:     Joey Pabalinas <joeypabalinas@...il.com>
Cc:     linux-mm@...ck.org, Nitin Gupta <ngupta@...are.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/zsmalloc: strength reduce zspage_size calculation

Hi Joey,

On Mon, Feb 26, 2018 at 02:21:26AM -1000, Joey Pabalinas wrote:
> Replace the repeated multiplication in the main loop
> body calculation of zspage_size with an equivalent
> (and cheaper) addition operation.
> 
> Signed-off-by: Joey Pabalinas <joeypabalinas@...il.com>
> ---
>  mm/zsmalloc.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c3013505c30527dc42..647a1a2728634b5194 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -821,15 +821,15 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
>   */
>  static int get_pages_per_zspage(int class_size)
>  {
> +	int zspage_size = 0;
>  	int i, max_usedpc = 0;
>  	/* zspage order which gives maximum used size per KB */
>  	int max_usedpc_order = 1;
>  
>  	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
> -		int zspage_size;
>  		int waste, usedpc;
>  
> -		zspage_size = i * PAGE_SIZE;
> +		zspage_size += PAGE_SIZE;
>  		waste = zspage_size % class_size;
>  		usedpc = (zspage_size - waste) * 100 / zspage_size;
>  
Thanks for the patch! However, get_pages_per_zspage() is called only from
zs_create_pool(), which is a really cold path, so I don't expect this to
make any measurable difference in practice.

Thanks.
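
For illustration, here is a minimal standalone sketch of the two loop forms
outside the kernel. SKETCH_PAGE_SIZE, SKETCH_MAX_PAGES, and the harness in
main() are hypothetical stand-ins for PAGE_SIZE, ZS_MAX_PAGES_PER_ZSPAGE,
and the zsmalloc size-class setup; it only shows that the strength-reduced
addition computes the same zspage_size as the original multiplication:

#include <stdio.h>

#define SKETCH_PAGE_SIZE	4096	/* stand-in for PAGE_SIZE */
#define SKETCH_MAX_PAGES	4	/* stand-in for ZS_MAX_PAGES_PER_ZSPAGE */

/* Original form: recompute zspage_size with a multiply each iteration. */
static int best_order_mul(int class_size)
{
	int i, max_usedpc = 0, max_usedpc_order = 1;

	for (i = 1; i <= SKETCH_MAX_PAGES; i++) {
		int zspage_size = i * SKETCH_PAGE_SIZE;
		int waste = zspage_size % class_size;
		int usedpc = (zspage_size - waste) * 100 / zspage_size;

		if (usedpc > max_usedpc) {
			max_usedpc = usedpc;
			max_usedpc_order = i;
		}
	}
	return max_usedpc_order;
}

/* Strength-reduced form: carry zspage_size across iterations and add. */
static int best_order_add(int class_size)
{
	int zspage_size = 0;
	int i, max_usedpc = 0, max_usedpc_order = 1;

	for (i = 1; i <= SKETCH_MAX_PAGES; i++) {
		int waste, usedpc;

		zspage_size += SKETCH_PAGE_SIZE;
		waste = zspage_size % class_size;
		usedpc = (zspage_size - waste) * 100 / zspage_size;

		if (usedpc > max_usedpc) {
			max_usedpc = usedpc;
			max_usedpc_order = i;
		}
	}
	return max_usedpc_order;
}

int main(void)
{
	int class_size;

	/* The two forms should pick the same order for every class size. */
	for (class_size = 32; class_size <= SKETCH_PAGE_SIZE; class_size += 32)
		if (best_order_mul(class_size) != best_order_add(class_size))
			printf("mismatch at class_size=%d\n", class_size);

	printf("done\n");
	return 0;
}

Since the loop runs at most ZS_MAX_PAGES_PER_ZSPAGE times and only at pool
creation time, any cycle savings are negligible either way, which is the
point of the reply above.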