Message-ID: <59404249-de0c-c569-d04a-9da38ed14b0a@redhat.com>
Date: Tue, 19 Apr 2022 14:56:06 +0200
From: David Hildenbrand <david@...hat.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Dave Chinner <dchinner@...hat.com>,
linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Yang Shi <shy828301@...il.com>
Subject: Re: [PATCH] mm: do not call add_nr_deferred() with zero deferred
On 16.04.22 02:41, Roman Gushchin wrote:
> add_nr_deferred() is often called with next_deferred equal to 0.
> For instance, it's happening under low memory pressure for any
> shrinkers with a low number of cached objects. A corresponding trace
> looks like:
> <...>-619914 [005] .... 467456.345160: mm_shrink_slab_end: \
> super_cache_scan+0x0/0x1a0 0000000087027f06: nid: 1 \
> unused scan count 0 new scan count 0 total_scan 0 \
> last shrinker return val 0
>
> <...>-619914 [005] .... 467456.345371: mm_shrink_slab_end: \
> super_cache_scan+0x0/0x1a0 0000000087027f06: nid: 1 \
> unused scan count 0 new scan count 0 total_scan 0 \
> last shrinker return val 0
>
> <...>-619914 [005] .... 467456.345380: mm_shrink_slab_end: \
> super_cache_scan+0x0/0x1a0 0000000087027f06: nid: 1 unused \
> scan count 0 new scan count 0 total_scan 0 \
> last shrinker return val 0
>
> This leads to unnecessary checks and atomic operations, which can be
> avoided by checking that next_deferred is non-zero before calling
> add_nr_deferred(). In this case the mm_shrink_slab_end tracepoint
> will see a potentially slightly outdated "new scan count" value, but
> that's totally fine.
Is this a sufficient improvement to justify the added complexity for
anybody reading that code?
>
> Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
> ---
> mm/vmscan.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d4a7d2bd276d..19d3d4fa1aad 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -808,7 +808,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * move the unused scan count back into the shrinker in a
>  	 * manner that handles concurrent updates.
>  	 */
> -	new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
> +	if (next_deferred)
> +		new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
> +	else
> +		new_nr = nr;
>  
>  	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
>  	return freed;
And if we still want to do this optimization, why not put it into
add_nr_deferred()?
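
Something like the following, completely untested and just to sketch the
idea (assuming add_nr_deferred() still has the upstream shape with the
NUMA- and memcg-aware branches; the !nr early return with the plain
atomic_long_read() is the hypothetical new part):

static long add_nr_deferred(long nr, struct shrinker *shrinker,
			    struct shrink_control *sc)
{
	int nid = sc->nid;

	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
		nid = 0;

	if (sc->memcg &&
	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
		return add_nr_deferred_memcg(nr, nid, shrinker,
					     sc->memcg);

	/*
	 * Hypothetical shortcut: with nothing to add, a plain read of
	 * the deferred count avoids the atomic RMW and still gives the
	 * tracepoint a (possibly slightly stale) "new scan count".
	 */
	if (!nr)
		return atomic_long_read(&shrinker->nr_deferred[nid]);

	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
}

The memcg-aware path would need a similar shortcut in
add_nr_deferred_memcg() to see the same benefit.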
--
Thanks,
David / dhildenb