Message-ID: <ZcC7Kgew3GDFNIux@tiehlicka>
Date: Mon, 5 Feb 2024 11:40:42 +0100
From: Michal Hocko <mhocko@...e.com>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Efly Young <yangyifei03@...ishou.com>, android-mm@...gle.com,
yuzhao@...gle.com, mkoutny@...e.com,
Yosry Ahmed <yosryahmed@...gle.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim
On Fri 02-02-24 23:38:54, T.J. Mercier wrote:
> Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
> reclaim") we passed the number of pages for the reclaim request directly
> to try_to_free_mem_cgroup_pages, which could lead to significant
> overreclaim. After 0388536ac291 the number of pages was limited to a
> maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
> However, such a small batch size caused a regression in reclaim
> performance due to many more reclaim start/stop cycles inside
> memory_reclaim.
You mentioned that in one of the previous emails, but it would be good to
state the source of that overhead here for future reference.
> Reclaim tries to balance nr_to_reclaim fidelity with fairness across
> nodes and cgroups over which the pages are spread. As such, the bigger
> the request, the bigger the absolute overreclaim error. Historic
> in-kernel users of reclaim have used fixed, small sized requests to
> approach an appropriate reclaim rate over time. When we reclaim a user
> request of arbitrary size, use decaying batch sizes to manage error while
> maintaining reasonable throughput.
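The decay is easy to see in a quick user-space model (an illustration
only, not kernel code; it assumes each batch is reclaimed in full and
hard-codes the SWAP_CLUSTER_MAX floor that reclaim enforces internally):

#include <stdio.h>

#define SWAP_CLUSTER_MAX	32UL	/* floor enforced inside reclaim */

int main(void)
{
	unsigned long nr_to_reclaim = 262144;	/* 1G in 4KiB pages */
	unsigned long nr_reclaimed = 0;

	while (nr_reclaimed < nr_to_reclaim) {
		/* A quarter of the remaining error, decaying geometrically. */
		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

		if (batch_size < SWAP_CLUSTER_MAX)
			batch_size = SWAP_CLUSTER_MAX;
		printf("batch: %lu\n", batch_size);
		nr_reclaimed += batch_size;
	}
	return 0;
}

In the best case this converges on a 1G target in roughly thirty calls
(65536, 49152, 36864, ... down to the 32-page floor) instead of the
~8192 fixed 32-page calls needed after 0388536ac291.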
Are these numbers with MGLRU or with the default reclaim implementation?
> root - full reclaim       pages/sec   time (sec)
> pre-0388536ac291      :       68047        10.46
> post-0388536ac291     :       13742          inf
> (reclaim-reclaimed)/4 :       67352        10.51
>
> /uid_0 - 1G reclaim       pages/sec   time (sec)   overreclaim (MiB)
> pre-0388536ac291      :      258822         1.12               107.8
> post-0388536ac291     :      105174         2.49                 3.5
> (reclaim-reclaimed)/4 :      233396         1.12                -7.4
>
> /uid_0 - full reclaim     pages/sec   time (sec)
> pre-0388536ac291      :       72334         7.09
> post-0388536ac291     :       38105        14.45
> (reclaim-reclaimed)/4 :       72914         6.96
>
> Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> Signed-off-by: T.J. Mercier <tjmercier@...gle.com>
> Reviewed-by: Yosry Ahmed <yosryahmed@...gle.com>
> Acked-by: Johannes Weiner <hannes@...xchg.org>
>
> ---
> v3: Formatting fixes per Yosry Ahmed and Johannes Weiner. No functional
> changes.
> v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
>
> mm/memcontrol.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 46d8d02114cf..f6ab61128869 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6976,9 +6976,11 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>  		if (!nr_retries)
>  			lru_add_drain_all();
> 
> +		/* Will converge on zero, but reclaim enforces a minimum */
> +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
This mid-block declaration doesn't fit the existing coding style, and I do
not think there is a strong reason to go against it here.
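I.e. something like this instead (just a sketch; it assumes reclaimed is
the other loop-local declared at the top of the while body):

		/* Will converge on zero, but reclaim enforces a minimum */
		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
		unsigned long reclaimed;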
> +
>  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> -					min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> -					GFP_KERNEL, reclaim_options);
> +					batch_size, GFP_KERNEL, reclaim_options);
Also, with the increased reclaim target, do we need something like this?
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..94794cf5ee9f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1889,7 +1889,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 
 		/* We are about to die and free our memory. Return now. */
 		if (fatal_signal_pending(current))
-			return SWAP_CLUSTER_MAX;
+			return sc->nr_to_reclaim;
 	}
 
 	lru_add_drain();
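If I read that return value right, its point is to pretend progress so
that a task with a fatal signal pending gets out of reclaim quickly. A
toy model of the exit check (an illustration with made-up numbers and a
simplified condition, not the real vmscan control flow):

#include <stdio.h>

#define SWAP_CLUSTER_MAX	32UL

int main(void)
{
	unsigned long nr_to_reclaim = 262144;	/* 1G in 4KiB pages */
	unsigned long pretended;

	/* Old pretence: covers a 32-page target, but not 1G. */
	pretended = SWAP_CLUSTER_MAX;
	printf("pretend %lu of %lu -> %s\n", pretended, nr_to_reclaim,
	       pretended >= nr_to_reclaim ? "bail out" : "keep reclaiming");

	/* Suggested pretence: always covers the target. */
	pretended = nr_to_reclaim;
	printf("pretend %lu of %lu -> %s\n", pretended, nr_to_reclaim,
	       pretended >= nr_to_reclaim ? "bail out" : "keep reclaiming");
	return 0;
}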
> 
>  		if (!reclaimed && !nr_retries--)
>  			return -EAGAIN;
> --
> 2.43.0.594.gd9cf4e227d-goog
--
Michal Hocko
SUSE Labs