Message-ID: <20190128215230.GA32069@castle.DHCP.thefacebook.com>
Date: Mon, 28 Jan 2019 21:52:40 +0000
From: Roman Gushchin <guro@...com>
To: Chris Down <chris@...isdown.name>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Tejun Heo <tj@...nel.org>,
Dennis Zhou <dennis@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH] mm: Proportional memory.{low,min} reclaim
On Mon, Jan 28, 2019 at 04:42:13PM -0500, Chris Down wrote:
> Roman Gushchin writes:
> > Hm, it looks a bit suspicious to me.
> >
> > Let's say memory.low = 3G, memory.min = 1G and memory.current = 2G.
> > cgroup_size / protection == 1, so scan doesn't depend on memory.min at all.
> >
> > So, we need to look directly at memory.emin in the memcg_low_reclaim case,
> > and ignore memory.(e)low.
>
> Hmm, this isn't really a common situation that I'd thought about, but it
> seems reasonable to make the boundaries when in low reclaim be between
> min and low, rather than between 0 and low. I'll add another patch with
> that. Thanks
It's not a stopper, so I'm perfectly fine with a follow-up patch.
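
To put numbers on the example above (a rough userspace sketch of the
proportional scan math as I read the patch, not the actual mm/vmscan.c code;
the helper names and the exact formula are my approximation):

/* scan_sketch.c - rough model of the proportional scan calculation
 * being discussed; the formula and names approximate the patch, they
 * are not copied from mm/vmscan.c. Assumes 64-bit unsigned long.
 * Build with: cc -o scan_sketch scan_sketch.c
 */
#include <stdio.h>

#define GB			(1024UL * 1024 * 1024)
#define SWAP_CLUSTER_MAX	32UL

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

/* Scale the scan target by how much of the protection is in use. */
static unsigned long scan_target(unsigned long lruvec_size,
				 unsigned long cgroup_size,
				 unsigned long protection)
{
	unsigned long scan;

	if (!protection)
		return lruvec_size;

	cgroup_size = max_ul(cgroup_size, protection);
	scan = lruvec_size - lruvec_size * protection / cgroup_size;

	/* clamp(scan, SWAP_CLUSTER_MAX, lruvec_size) */
	if (scan < SWAP_CLUSTER_MAX)
		scan = SWAP_CLUSTER_MAX;
	if (scan > lruvec_size)
		scan = lruvec_size;
	return scan;
}

int main(void)
{
	unsigned long low = 3 * GB, min = 1 * GB, usage = 2 * GB;
	unsigned long lruvec_size = 100000;	/* arbitrary page count */

	/*
	 * memcg_low_reclaim, but protection still taken from elow:
	 * cgroup_size gets clamped up to 3G, the ratio is 1, and
	 * memory.min never enters the calculation.
	 */
	printf("protection = elow: scan = %lu\n",
	       scan_target(lruvec_size, usage, low));

	/*
	 * memcg_low_reclaim with protection taken from emin instead:
	 * pressure now scales between min and low, as suggested above.
	 */
	printf("protection = emin: scan = %lu\n",
	       scan_target(lruvec_size, usage, min));
	return 0;
}

With protection taken from e(low) the scan target collapses to the
SWAP_CLUSTER_MAX floor (32 here) no matter what memory.min is set to; taking
it from emin in the memcg_low_reclaim case gives a target of 50000 in this
example, i.e. pressure that actually scales between min and low.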
>
> > > + scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
> >
> > Idk, how much sense does it have to make it larger than SWAP_CLUSTER_MAX,
> > given that it will become 0 on default (and almost any other) priority.
>
> In my testing, setting the scan target to 0 and thus reducing the scope for
> reclaim can result in increasing the scan priority more than is desirable,
> and since we base some vm heuristics on that priority, that seemed concerning.
>
> I'd rather start being a bit more cautious, erring on the side of scanning
> at least some pages from this memcg when priority gets elevated.
>
> Thanks for the review!
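
On the clamp question quoted above, the arithmetic in play is just the
priority shift applied to the scan target afterwards eating the
SWAP_CLUSTER_MAX floor. A small standalone illustration (only DEF_PRIORITY
and SWAP_CLUSTER_MAX are the kernel's values; the rest is my own sketch,
not code from the patch):

/* priority_sketch.c - how a SWAP_CLUSTER_MAX scan floor interacts
 * with the reclaim priority shift discussed above.
 * Build with: cc -o priority_sketch priority_sketch.c
 */
#include <stdio.h>

#define DEF_PRIORITY		12
#define SWAP_CLUSTER_MAX	32UL

int main(void)
{
	unsigned long scan = SWAP_CLUSTER_MAX;	/* the clamped floor */
	int priority;

	for (priority = DEF_PRIORITY; priority >= 0; priority--)
		printf("priority %2d: scan >> priority = %lu\n",
		       priority, scan >> priority);

	/*
	 * 32 >> 12 == 0: at DEF_PRIORITY (and most other priorities)
	 * the floor still shifts away to 0; it only yields a nonzero
	 * scan target once priority has dropped to 5 or below.
	 */
	return 0;
}

So at the default priority the floor does indeed become 0, as Roman notes;
it only turns into a nonzero scan target once priority has dropped to 5 or
below, which is the elevated-pressure case Chris wants to err on the side
of scanning in.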
For the rest of the patch:
Reviewed-by: Roman Gushchin <guro@...com>
Thanks!