Message-ID: <CAOUHufa9BySrKZ5ws9xJoEsdUfbErb4V=2=JSm-dB9B7zMyJbQ@mail.gmail.com>
Date: Mon, 22 Jan 2024 19:24:56 -0700
From: Yu Zhao <yuzhao@...gle.com>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>, android-mm@...gle.com,
yangyifei03@...ishou.com, cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] Revert "mm:vmscan: fix inaccurate reclaim during
proactive reclaim"
On Sun, Jan 21, 2024 at 2:44 PM T.J. Mercier <tjmercier@...gle.com> wrote:
>
> This reverts commit 0388536ac29104a478c79b3869541524caec28eb.
>
> Proactive reclaim on the root cgroup is 10x slower after this patch when
> MGLRU is enabled, and completion times for proactive reclaim on much
> smaller non-root cgroups take ~30% longer (with or without MGLRU). With
> root reclaim before the patch, I observe average reclaim rates of
> ~70k pages/sec before try_to_free_mem_cgroup_pages starts to fail and
> the nr_retries counter starts to decrement, eventually ending the
> proactive reclaim attempt. After the patch the reclaim rate is
> consistently ~6.6k pages/sec due to the reduced nr_pages value causing
> scan aborts as soon as SWAP_CLUSTER_MAX pages are reclaimed. The
> proactive reclaim doesn't complete after several minutes because
> try_to_free_mem_cgroup_pages is still capable of reclaiming pages in
> tiny SWAP_CLUSTER_MAX page chunks and nr_retries is never decremented.
>
> The docs for memory.reclaim say, "the kernel can over or under reclaim
> from the target cgroup" which this patch was trying to fix. Revert it
> until a less costly solution is found.
>
> Signed-off-by: T.J. Mercier <tjmercier@...gle.com>
Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
Cc: <stable@...r.kernel.org>