Date:	Sat, 15 Jun 2013 16:09:30 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Heesub Shin <heesub.shin@...sung.com>
Cc:	akpm@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, mgorman@...e.de, riel@...hat.com,
	kyungmin.park@...sung.com, d.j.shin@...sung.com,
	sunae.seo@...sung.com
Subject: Re: [PATCH] mm: vmscan: remove redundant querying to shrinker

Hello,

Andrew wants to merge this, so I am trying to review it.

On Fri, Jun 14, 2013 at 07:07:51PM +0900, Heesub Shin wrote:
> shrink_slab() queries each slab cache to get the number of
> elements in it. In most cases such queries are cheap, but on
> some caches they are not. For example, the Android
> low-memory-killer, which operates as a slab shrinker, performs
> a relatively long calculation once invoked, which is quite
> expensive.

Long calculation?
I am looking at lowmem_shrink in v3.9 and I couldn't find
anything that would make the slab query slow.
It does have rather unnecessary code calculating global
vmstat counters, but I don't think that is the culprit for your
slowness.
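
For reference, as far as I can tell the v3.9 query path boils down
to a handful of vmstat reads (paraphrased from memory, not a
verbatim copy):

static int lowmem_shrink(struct shrinker *s, struct shrink_control *sc)
{
	/* the "rather unnecessary" part: recomputed on every call,
	 * even for a pure size query (used for the minfree
	 * threshold checks, omitted here) */
	int other_free = global_page_state(NR_FREE_PAGES);
	int other_file = global_page_state(NR_FILE_PAGES) -
			 global_page_state(NR_SHMEM);
	int rem = global_page_state(NR_ACTIVE_ANON) +
		  global_page_state(NR_ACTIVE_FILE) +
		  global_page_state(NR_INACTIVE_ANON) +
		  global_page_state(NR_INACTIVE_FILE);

	if (sc->nr_to_scan <= 0)
		return rem;	/* size query ends here */

	/* ... otherwise walk the task list and pick a victim ... */
	return rem;
}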

Could you show how it becomes slow, with some numbers?
If that's true, we have to fix LMK.
We have already been told the following:

 * Note that 'shrink' will be passed nr_to_scan == 0 when the VM is
 * querying the cache size, so a fastpath for that case is appropriate.
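
That is, a shrinker is expected to look something like this (a
minimal sketch with hypothetical helpers, not LMK's actual code):

static int my_shrink(struct shrinker *s, struct shrink_control *sc)
{
	/* fast path: the VM is only asking how big the cache is */
	if (sc->nr_to_scan == 0)
		return my_count_objects();	/* hypothetical cheap count */

	/* slow path: reclaim, then report how many objects remain */
	my_free_objects(sc->nr_to_scan);	/* hypothetical reclaim */
	return my_count_objects();
}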

> 
> This patch removes the redundant query to the shrinker
> function in the shrink-batch loop.
> 
> Signed-off-by: Heesub Shin <heesub.shin@...sung.com>
> ---
>  mm/vmscan.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fa6a853..11b6695 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -282,9 +282,8 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>  					max_pass, delta, total_scan);
>  
>  		while (total_scan >= batch_size) {
> -			int nr_before;
> +			int nr_before = max_pass;
>  
> -			nr_before = do_shrinker_shrink(shrinker, shrink, 0);
>  			shrink_ret = do_shrinker_shrink(shrinker, shrink,
>  							batch_size);
>  			if (shrink_ret == -1)
> @@ -293,6 +292,7 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>  				ret += nr_before - shrink_ret;
>  			count_vm_events(SLABS_SCANNED, batch_size);
>  			total_scan -= batch_size;
> +			max_pass = shrink_ret;
>  
>  			cond_resched();
>  		}

It might be a good optimization, but one problem I can see
is cond_resched. If the process is scheduled out and another task
consumes more objects from the slab, shrink_ret becomes obsolete,
so shrink_ret could be greater than nr_before in the next
iteration. In such a case, we lose the count of reclaimed slab
objects.
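
To illustrate with the patched loop (my annotations; surrounding
context reconstructed from v3.9, so take it as a sketch):

	while (total_scan >= batch_size) {
		int nr_before = max_pass;	/* stale after the 1st pass */

		shrink_ret = do_shrinker_shrink(shrinker, shrink,
						batch_size);
		if (shrink_ret == -1)
			break;
		if (shrink_ret < nr_before)
			ret += nr_before - shrink_ret;
		count_vm_events(SLABS_SCANNED, batch_size);
		total_scan -= batch_size;
		max_pass = shrink_ret;

		cond_resched();	/* another task may allocate from the
				 * cache here, so the shrink_ret we
				 * just saved no longer matches the
				 * real object count */
	}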

-- 
Kind regards,
Minchan Kim