Message-ID: <20240126163401.GJ1567330@cmpxchg.org>
Date: Fri, 26 Jan 2024 11:34:01 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Michal Hocko <mhocko@...e.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>, android-mm@...gle.com,
yuzhao@...gle.com, yangyifei03@...ishou.com,
cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Revert "mm:vmscan: fix inaccurate reclaim during
proactive reclaim"
On Wed, Jan 24, 2024 at 09:46:23AM -0800, T.J. Mercier wrote:
> In the meantime, instead of a revert, how about scaling the batch size
> geometrically instead of using the SWAP_CLUSTER_MAX constant:
>
>  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> -					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> +					(nr_to_reclaim - nr_reclaimed) / 2,
>  					GFP_KERNEL, reclaim_options);
>
> I think that should address the overreclaim concern (it was mentioned
> that the upper bound of overreclaim was 2 * request), and it should
> also bring the reclaim rate for root reclaim with MGLRU back closer
> to what it was before.
Hahaha. Would /4 work for you?
I genuinely think the idea is worth a shot. /4 would give us a bit
more margin for error, since the bailout/fairness cutoffs have changed
back and forth over time. And it should still give you reasonable
convergence on MGLRU.
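To put rough numbers on it, assuming the 2x overreclaim bound you
mention: asking for a quarter of the remaining delta means even a
doubled return only covers half of what's left, so no single iteration
of the raw division can blow past the target. And convergence stays
quick: with exact returns the remainder shrinks by a quarter per call,
so even a gigabyte-sized request gets within a cluster of its target
in a few dozen iterations.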
try_to_free_mem_cgroup_pages() already does max(nr_to_reclaim,
SWAP_CLUSTER_MAX), which avoids the painful final-approach loops that
the integer division would otherwise produce on its own.
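For reference, that clamp sits in the scan_control setup - paraphrased
from mm/vmscan.c here, with the other fields trimmed:

	unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
						   unsigned long nr_pages,
						   gfp_t gfp_mask,
						   unsigned int reclaim_options)
	{
		struct scan_control sc = {
			/* never target less than one cluster per call */
			.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
			...
		};

The flip side is that the last, clamped call can overshoot the target
by up to 2 * SWAP_CLUSTER_MAX pages if the 2x bound holds for MGLRU,
which seems acceptable.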
Please add a comment mentioning the compromise between the two reclaim
implementations though.
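Something along these lines, say - exact wording up to you:

		/*
		 * Reclaim the remaining delta in quarter-sized chunks:
		 * a compromise between the two reclaim implementations.
		 * Requesting the full delta at once risks overreclaim
		 * (MGLRU may reclaim up to 2x the request), while fixed
		 * SWAP_CLUSTER_MAX batches made root reclaim with MGLRU
		 * too slow.
		 */
		reclaimed = try_to_free_mem_cgroup_pages(memcg,
					(nr_to_reclaim - nr_reclaimed) / 4,
					GFP_KERNEL, reclaim_options);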