[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHbLzkpk+6+kzsxmJ_MK+708rpCEjB2njnarLkzfzXX-MUyG7g@mail.gmail.com>
Date: Fri, 3 Feb 2023 11:07:30 -0800
From: Yang Shi <shy828301@...il.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Shakeel Butt <shakeelb@...gle.com>, Tejun Heo <tj@...nel.org>,
linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org,
Christian Brauner <brauner@...nel.org>
Subject: Re: [RFC PATCH] mm: memcontrol: don't account swap failures not due
to cgroup limits
On Fri, Feb 3, 2023 at 11:00 AM Roman Gushchin <roman.gushchin@...ux.dev> wrote:
>
> On Thu, Feb 02, 2023 at 10:56:26AM -0500, Johannes Weiner wrote:
> > Christian reports the following situation in a cgroup that doesn't
> > have memory.swap.max configured:
> >
> > $ cat memory.swap.events
> > high 0
> > max 0
> > fail 6218
> >
> > Upon closer examination, this is an ARM64 machine that doesn't support
> > swapping out THPs.
>
> Do we expect it to be added any time soon or it's caused by some system
> limitations?
AFAIK, it has been supported since 6.0. See commit d0637c505f8a1
>
> > In that case, the first get_swap_page() fails, and
> > the kernel falls back to splitting the THP and swapping the 4k
> > constituents one by one. /proc/vmstat confirms this with a high rate
> > of thp_swpout_fallback events.
> >
> > While the behavior can ultimately be explained, it's unexpected and
> > confusing. I see three choices how to address this:
> >
> > a) Specifically exlude THP fallbacks from being counted, as the
> > failure is transient and the memory is ultimately swapped.
> >
> > Arguably, though, the user would like to know if their cgroup's
> > swap limit is causing high rates of THP splitting during swapout.
>
> I agree, but it's probably better to reflect it in a form of a per-memcg
> thp split failure counter (e.g. in memory.stat), not as swap out failures.
> Overall option a) looks preferable to me. Especially if in the long run
> the arm64 limitation will be fixed.
>
> >
> > b) Only count cgroup swap events when they are actually due to a
> > cgroup's own limit. Exclude failures that are due to physical swap
> > shortage or other system-level conditions (like !THP_SWAP). Also
> > count them at the level where the limit is configured, which may be
> > above the local cgroup that holds the page-to-be-swapped.
> >
> > This is in line with how memory.swap.high, memory.high and
> > memory.max events are counted.
> >
> > However, it's a change in documented behavior.
>
> I'm not sure about this option: I can easily imagine a setup with a
> memcg-specific swap space, which would require setting an artificial
> memory.swap.max to get the fail counter working. On the other side not a deal
> breaker.
>
> Thanks!
>
Powered by blists - more mailing lists