Date:   Tue, 15 Dec 2020 14:23:37 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Yang Shi <shy828301@...il.com>
Cc:     guro@...com, ktkhai@...tuozzo.com, shakeelb@...gle.com,
        hannes@...xchg.org, mhocko@...e.com, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH 9/9] mm: vmscan: shrink deferred objects proportional
 to priority

On Mon, Dec 14, 2020 at 02:37:22PM -0800, Yang Shi wrote:
> The number of deferred objects might wind up at an absurd value, which results in the
> slab caches being clamped.  This is undesirable for sustaining the working set.
> 
> So shrink deferred objects proportionally to priority and cap nr_deferred at twice the
> number of cache items.

This completely changes the work accrual algorithm without any
explanation of how it works, what the theory behind the algorithm
is, what the work accrual ramp up and damp down curve looks like,
what workloads it is designed to benefit, how it affects page
cache vs slab cache balance and system performance, what OOM stress
testing has been done to ensure pure slab cache pressure workloads
don't easily trigger OOM kills, etc.

You're going to need a lot more supporting evidence that this is a
well thought out algorithm that doesn't obviously introduce
regressions. The current code might fall down in one corner case,
but there are an awful lot of corner cases where it does work.
Please provide some evidence that it not only works in your corner
case, but also doesn't introduce regressions for other slab cache
intensive and mixed cache intensive workloads...

> 
> Signed-off-by: Yang Shi <shy828301@...il.com>
> ---
>  mm/vmscan.c | 40 +++++-----------------------------------
>  1 file changed, 5 insertions(+), 35 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 693a41e89969..58f4a383f0df 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -525,7 +525,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 */
>  	nr = count_nr_deferred(shrinker, shrinkctl);
>  
> -	total_scan = nr;
>  	if (shrinker->seeks) {
>  		delta = freeable >> priority;
>  		delta *= 4;
> @@ -539,37 +538,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  		delta = freeable / 2;
>  	}
>  
> +	total_scan = nr >> priority;

When there is low memory pressure, this will throw away a large
amount of the work that is deferred. If we are not deferring in
amounts larger than ~4000 items, every pass through this code will
zero the deferred work.
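
To make the arithmetic concrete, here's a minimal userspace sketch (not
kernel code; it only assumes DEF_PRIORITY == 12, i.e. the lowest-pressure
reclaim passes run with a priority around 12):

/* nr_shift.c: what "total_scan = nr >> priority" does to small
 * deferred counts on a low-pressure (priority ~= DEF_PRIORITY) pass.
 */
#include <stdio.h>

int main(void)
{
	unsigned long nr_deferred[] = { 100, 1000, 4000, 4095, 4096, 8192 };
	int priority = 12;	/* DEF_PRIORITY: lowest scan pressure */

	for (int i = 0; i < 6; i++)
		printf("nr = %5lu -> nr >> %d = %lu\n",
		       nr_deferred[i], priority, nr_deferred[i] >> priority);

	/* Everything below 4096 (1 << 12) shifts down to zero, so the
	 * deferred work is simply discarded on every low-pressure pass. */
	return 0;
}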

Hence when we do get substantial pressure, that deferred work is no
longer being tracked. While it may help your specific corner case,
it's likely to significantly change the reclaim balance of slab
caches, especially under GFP_NOFS intensive workloads where we can
only defer the work to kswapd.

Hence I think this is still a problematic approach as it doesn't
address the reason why deferred counts are increasing out of
control in the first place....
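
For reference, here is how I read the existing flow, as a heavily
simplified sketch (illustrative pseudocode only, not the actual
mm/vmscan.c code; the helper name and the omission of the freeable
caps are my own simplifications):

/* One do_shrink_slab()-like pass: the deferred count is added to this
 * pass's delta, and whatever could not be scanned (e.g. a GFP_NOFS
 * caller whose fs shrinker bails out) is carried to a later pass,
 * typically kswapd running with full GFP_KERNEL reclaim context.
 */
static unsigned long shrink_pass_sketch(unsigned long nr_deferred,
					unsigned long freeable,
					int priority, int can_do_fs_work)
{
	unsigned long delta = (freeable >> priority) * 4 / 2; /* seeks == 2 */
	unsigned long total_scan = nr_deferred + delta;	      /* current code */
	unsigned long scanned = can_do_fs_work ? total_scan : 0;

	return total_scan - scanned;	/* handed to the next pass */
}

With this patch the deferred contribution becomes nr_deferred >> priority
instead, so a run of low-pressure passes silently discards the GFP_NOFS
backlog before kswapd ever gets to process it.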

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
