Message-ID: <ZkP1kW_DZdCdTn7m@P9FQF9L96D>
Date: Tue, 14 May 2024 16:36:49 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Rik van Riel <riel@...riel.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH] mm: vmscan: restore incremental cgroup iteration
On Tue, May 14, 2024 at 04:26:41PM -0400, Johannes Weiner wrote:
> Currently, reclaim always walks the entire cgroup tree in order to
> ensure fairness between groups. While overreclaim is limited in
> shrink_lruvec(), many of our systems have a sizable number of active
> groups, and an even bigger number of idle cgroups with cache left
> behind by previous jobs; the mere act of walking all these cgroups can
> impose significant latency on direct reclaimers.
>
> In the past, we've used a save-and-restore iterator that enabled
> incremental tree walks over multiple reclaim invocations. This ensured
> fairness, while keeping the work of individual reclaimers small.
>
> However, in edge cases with a lot of reclaim concurrency, individual
> reclaimers would sometimes not see enough of the cgroup tree to make
> forward progress and (prematurely) declare OOM. Consequently, we
> switched to comprehensive walks in 1ba6fc9af35b ("mm: vmscan: do not
> share cgroup iteration between reclaimers").
>
> To address the latency problem without bringing back the premature OOM
> issue, reinstate the shared iteration, but with a restart condition to
> do the full walk in the OOM case - similar to what we do for
> memory.low enforcement and active page protection.
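
The retry here mirrors the existing memcg_low_skipped restart in
do_try_to_free_pages(). For readers without the diff in front of them,
a condensed sketch of my reading - paraphrased rather than the literal
hunk, so treat the memcg_full_walk flag name as approximate:

	/*
	 * Direct reclaimers normally do cheap partial walks, but with
	 * shared iterator state an individual reclaimer can miss parts
	 * of the tree and lose the ability to safely declare OOM.
	 * Before going OOM, restart the whole scan once with full
	 * cgroup tree walks.
	 */
	if (!sc->memcg_full_walk) {
		sc->priority = initial_priority;
		sc->memcg_full_walk = 1;
		goto retry;
	}
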
>
> In the worst case, we do one more full tree walk before declaring
> OOM. But the vast majority of direct reclaim scans can then finish
> much quicker, while fairness across the tree is maintained:
>
> - Before this patch, we observed that direct reclaim always takes more
> than 100us and most direct reclaim time is spent in reclaim cycles
> lasting between 1ms and 1 second. Almost 40% of direct reclaim time
> was spent on reclaim cycles exceeding 100ms.
>
> - With this patch, almost all page reclaim cycles last less than 10ms,
> and a good amount of direct page reclaim finishes in under 100us. No
> page reclaim cycles lasting over 100ms were observed anymore.
>
> The shared iterator state is maintained inside the target cgroup, so
> fair and incremental walks are performed during both global reclaim
> and cgroup limit reclaim of complex subtrees.
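
For anyone reading along, the walk side boils down to something like
the below in shrink_node_memcgs(). This is only my condensed sketch of
the idea - the protection checks, shrink_slab() calls and the rest are
omitted, and details may differ from the actual patch:

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
	struct mem_cgroup_reclaim_cookie reclaim = { .pgdat = pgdat };
	struct mem_cgroup_reclaim_cookie *partial = &reclaim;
	struct mem_cgroup *memcg;

	/*
	 * kswapd and the pre-OOM retry do full walks; everyone else
	 * passes the cookie and shares the save-and-restore iterator
	 * state that lives in the target cgroup, per pgdat.
	 */
	if (current_is_kswapd() || sc->memcg_full_walk)
		partial = NULL;

	memcg = mem_cgroup_iter(target_memcg, NULL, partial);
	do {
		shrink_lruvec(mem_cgroup_lruvec(memcg, pgdat), sc);

		/* On a partial walk, bail out once the goal is met. */
		if (partial && sc->nr_reclaimed >= sc->nr_to_reclaim) {
			mem_cgroup_iter_break(target_memcg, memcg);
			break;
		}
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, partial)));
}

Keying the iterator on target_memcg rather than the root is what makes
this work for limit reclaim of subtrees as well, as you describe.
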
>
> Reported-by: Rik van Riel <riel@...riel.com>
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> Signed-off-by: Rik van Riel <riel@...riel.com>
Looks really solid.
Reviewed-by: Roman Gushchin <roman.gushchin@...ux.dev>
Thanks!