Message-Id: <20210106155602.6ce48dfe88ca7b94986b329b@linux-foundation.org>
Date: Wed, 6 Jan 2021 15:56:02 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Sudarshan Rajagopalan <sudaraja@...eaurora.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Vladimir Davydov <vdavydov.dev@...il.com>,
Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH] mm: vmscan: support complete shrinker reclaim
(cc's added)
On Tue, 5 Jan 2021 16:43:38 -0800 Sudarshan Rajagopalan <sudaraja@...eaurora.org> wrote:
> Ensure that shrinkers are given the option to completely drop
> their caches even when their caches are smaller than the batch size.
> This change helps improve memory headroom by ensuring that under
> significant memory pressure shrinkers can drop all of their caches.
> This change only attempts to more aggressively call the shrinkers
> during background memory reclaim, in order to avoid hurting the
> performance of direct memory reclaim.
>
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -424,6 +424,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> long batch_size = shrinker->batch ? shrinker->batch
> : SHRINK_BATCH;
> long scanned = 0, next_deferred;
> + long min_cache_size = batch_size;
> +
> + if (current_is_kswapd())
> + min_cache_size = 0;
>
> if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> nid = 0;
> @@ -503,7 +507,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> * scanning at high prio and therefore should try to reclaim as much as
> * possible.
> */
> - while (total_scan >= batch_size ||
> + while (total_scan > min_cache_size ||
> total_scan >= freeable) {
> unsigned long ret;
> unsigned long nr_to_scan = min(batch_size, total_scan);
I don't really see the need to exclude direct reclaim from this fix.
And if we're leaving unscanned objects behind in this situation, the
current code simply isn't working as intended, and 0b1fb40a3b1 ("mm:
vmscan: shrink all slab objects if tight on memory") either failed to
achieve its objective or was later broken?
Vladimir, could you please take a look?