Message-ID: <20240131175059.GC1227330@cmpxchg.org>
Date: Wed, 31 Jan 2024 12:50:59 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Efly Young <yangyifei03@...ishou.com>, android-mm@...gle.com,
yuzhao@...gle.com, cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: memcg: Use larger chunks for proactive reclaim
On Wed, Jan 31, 2024 at 04:24:41PM +0000, T.J. Mercier wrote:
> Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> reclaim") we passed the number of pages for the reclaim request directly
> to try_to_free_mem_cgroup_pages, which could lead to significant
> overreclaim in order to achieve fairness. After 0388536ac291 the number
> of pages was limited to a maximum of 32 (SWAP_CLUSTER_MAX) to reduce
> the amount of overreclaim. However, such a small chunk size caused a
> regression in reclaim performance due to many more reclaim start/stop
> cycles inside memory_reclaim.
>
> Instead of limiting reclaim chunk size to the SWAP_CLUSTER_MAX constant,
> adjust the chunk size proportionally with the number of pages left to
> reclaim. This allows for higher reclaim efficiency with large chunk
> sizes during the beginning of memory_reclaim, and reduces the amount of
> potential overreclaim by using small chunk sizes as the total reclaim
> amount is approached. Using 1/4 of the amount left to reclaim as the
> chunk size gives a good compromise between reclaim performance and
> overreclaim (a worked example of the convergence follows the tables):
>
> root - full reclaim pages/sec time (sec)
> pre-0388536ac291 : 68047 10.46
> post-0388536ac291 : 13742 inf
> (reclaim-reclaimed)/4 : 67352 10.51
>
> /uid_0 - 1G reclaim pages/sec time (sec) overreclaim (MiB)
> pre-0388536ac291 : 258822 1.12 107.8
> post-0388536ac291 : 105174 2.49 3.5
> (reclaim-reclaimed)/4 : 233396 1.12 -7.4
>
> /uid_0 - full reclaim pages/sec time (sec)
> pre-0388536ac291 : 72334 7.09
> post-0388536ac291 : 38105 14.45
> (reclaim-reclaimed)/4 : 72914 6.96
>
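> To illustrate, assume a 1G (262144 page) request and that every call
> reclaims its full chunk: the chunk size shrinks geometrically from
> 65536 to 49152 to 36864 pages and so on, dropping below
> SWAP_CLUSTER_MAX only after roughly 27 calls, versus 8192 fixed
> 32-page calls with the SWAP_CLUSTER_MAX cap.
>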
> Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> Signed-off-by: T.J. Mercier <tjmercier@...gle.com>
> ---
> mm/memcontrol.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 46d8d02114cf..d68fb89eadd2 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6977,7 +6977,8 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> lru_add_drain_all();
>
> reclaimed = try_to_free_mem_cgroup_pages(memcg,
> - min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> + max((nr_to_reclaim - nr_reclaimed) / 4,
> + (nr_to_reclaim - nr_reclaimed) % 4),
I don't see why the % 4 is needed. It only kicks in when the delta
drops below 16, but try_to_free_mem_cgroup_pages() already has

	.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),

so it looks like dead code.
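
IOW, passing just the quotient should behave the same (untested
sketch, reusing the surrounding memory_reclaim() variables):

	reclaimed = try_to_free_mem_cgroup_pages(memcg,
				(nr_to_reclaim - nr_reclaimed) / 4,
				GFP_KERNEL, reclaim_options);

Even when the division truncates to zero, the callee rounds the
request back up to SWAP_CLUSTER_MAX.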