Message-ID: <CALvZod6kfF_r5u2ydZ34Q+6QWvg11ZFwfRMHdiNUvi3NJnms=A@mail.gmail.com>
Date: Tue, 22 Sep 2020 11:56:13 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Greg Thelen <gthelen@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Michal Koutný <mkoutny@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Yang Shi <shy828301@...il.com>
Subject: Re: [PATCH] memcg: introduce per-memcg reclaim interface
On Tue, Sep 22, 2020 at 11:31 AM Michal Hocko <mhocko@...e.com> wrote:
>
> On Tue 22-09-20 11:10:17, Shakeel Butt wrote:
> > On Tue, Sep 22, 2020 at 9:55 AM Michal Hocko <mhocko@...e.com> wrote:
> [...]
> > > So far I have learned that you are primarily working around an
> > > implementation detail in zswap, which does the swapout path
> > > directly in the pageout path.
> >
> > Wait, how did you reach this conclusion? I have explicitly said that we
> > are not using uswapd-like functionality in production. We are using
> > this interface for proactive reclaim, and proactive reclaim is not a
> > workaround for an implementation detail in zswap.
>
> Hmm, I must have missed the distinction between the two you have
> mentioned. Correct me if I am wrong, but the "latency sensitive" workload
> is the one that cannot use the high limit, right?
Yes.
> For some reason I thought
> that your proactive reclaim use case is also not compatible with the
> throttling imposed by the high limit. Hence my conclusion above.
>
For the proactive reclaim use-case, it is more about the awkwardness of
using the memory.high interface for proactive reclaim.
Let's suppose I want to reclaim 20 MiB from a job. To use memory.high,
I have to read memory.current, subtract 20 MiB from it, write the
result to memory.high and, once the reclaim is done, set memory.high
back to its previous value (the job's original high limit).
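
To make that dance concrete, here is a rough sketch. The cgroup path,
the helpers and the 20 MiB delta are made up purely for illustration;
real tooling would also need error handling and has to live with the
races described below:

# Rough sketch of driving proactive reclaim through memory.high.
import os

CGROUP = "/sys/fs/cgroup/job"          # hypothetical cgroup path
RECLAIM_TARGET = 20 * 1024 * 1024      # 20 MiB

def read_file(name):
    with open(os.path.join(CGROUP, name)) as f:
        return f.read().strip()

def write_file(name, value):
    with open(os.path.join(CGROUP, name), "w") as f:
        f.write(str(value))

original_high = read_file("memory.high")        # may be "max"
current = int(read_file("memory.current"))

# Temporarily lower memory.high to get ~20 MiB reclaimed ...
write_file("memory.high", current - RECLAIM_TARGET)

# ... and then restore the job's original high limit. Anything the
# job allocates between these two writes hits the temporary limit.
write_file("memory.high", original_high)
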
There is a time window in which allocations from the target job can hit
the temporary memory.high, causing uninteresting MEMCG_HIGH events,
PSI pressure and potentially over-reclaim. There is also a race between
reading memory.current and setting the temporary memory.high. All of
this adds many non-deterministic variables to a simple request to
reclaim 20 MiB from a job.
Shakeel