Message-ID: <20181102172547.GA19042@tower.DHCP.thefacebook.com>
Date: Fri, 2 Nov 2018 17:25:58 +0000
From: Roman Gushchin <guro@...com>
To: Michal Hocko <mhocko@...nel.org>
CC: Dexuan Cui <decui@...rosoft.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"Shakeel Butt" <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>, Rik van Riel <riel@...riel.com>,
Konstantin Khlebnikov <koct9i@...il.com>,
Matthew Wilcox <willy@...radead.org>,
"Stable@...r.kernel.org" <Stable@...r.kernel.org>
Subject: Re: Will the recent memory leak fixes be backported to longterm
kernels?
On Fri, Nov 02, 2018 at 05:51:47PM +0100, Michal Hocko wrote:
> On Fri 02-11-18 16:22:41, Roman Gushchin wrote:
> > On Fri, Nov 02, 2018 at 05:13:14PM +0100, Michal Hocko wrote:
> > > On Fri 02-11-18 15:48:57, Roman Gushchin wrote:
> > > > On Fri, Nov 02, 2018 at 09:03:55AM +0100, Michal Hocko wrote:
> > > > > On Fri 02-11-18 02:45:42, Dexuan Cui wrote:
> > > > > [...]
> > > > > > I totally agree. I'm now just wondering if there is any temporary workaround,
> > > > > > even if that means we have to run the kernel with some features disabled or
> > > > > > with suboptimal performance?
> > > > >
> > > > > One way would be to disable kmem accounting (the cgroup.memory=nokmem kernel
> > > > > option). That would reduce memory isolation because quite a lot of
> > > > > memory will not be accounted for, but the primary source of in-flight and
> > > > > hard-to-reclaim memory will be gone.
> > > >
> > > > In my experience, disabling kmem accounting doesn't really solve the issue
> > > > (without the patches), but it can lower the rate of the leak.
> > >
> > > This is unexpected. 90cbc2508827e was introduced to allow offline
> > > memcgs to be reclaimed even when they are small. But maybe you mean that
> > > we still leak in the absence of memory pressure. Or what prevents the
> > > memcg from going down?
> >
> > There are 3 independent issues contributing to this leak:
> > 1) Kernel stack accounting weirdness: a process can reuse a stack that was
> > accounted to a different cgroup, so basically any running process can take
> > a reference to any cgroup.
>
> yes, but kmem accounting should rule that out, right? If not then this
> is a clear bug and easy to backport, because that would mean adding a
> missing memcg_kmem_enabled check.
Yes, you're right, disabling kmem accounting should mitigate this problem.
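
(For context: booting with cgroup.memory=nokmem keeps the
memcg_kmem_enabled() static key off, so the kmem charging paths bail out
before a stack page can pin a memcg. A minimal sketch of the guard
pattern; the function below is illustrative, not the exact upstream code:)

	static int charge_kernel_stack(struct task_struct *tsk)
	{
		/*
		 * With cgroup.memory=nokmem nothing is charged here,
		 * so a cached stack can't hold a reference to a dead
		 * memcg.
		 */
		if (!memcg_kmem_enabled())
			return 0;

		/* ... otherwise charge the stack pages to tsk's memcg ... */
		return 0;
	}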
>
> > 2) We forget to scan the last page on the LRU list. So if we end up with
> > a 1-page-long LRU, it can stay there basically forever.
>
> Why doesn't
> 	/*
> 	 * If the cgroup's already been deleted, make sure to
> 	 * scrape out the remaining cache.
> 	 */
> 	if (!scan && !mem_cgroup_online(memcg))
> 		scan = min(size, SWAP_CLUSTER_MAX);
>
> in get_scan_count work for that case?
No, it doesn't. Let's look at the whole picture:
	size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
	scan = size >> sc->priority;
	/*
	 * If the cgroup's already been deleted, make sure to
	 * scrape out the remaining cache.
	 */
	if (!scan && !mem_cgroup_online(memcg))
		scan = min(size, SWAP_CLUSTER_MAX);
If size == 1, then scan == 0, so the check fires and we get
scan = min(1, SWAP_CLUSTER_MAX) == 1. But after the proportional
adjustment we end up with 0 again.
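
The "proportional adjustment" is the fraction scaling at the end of
get_scan_count(); with scan == 1 the integer division truncates to zero
(sketched from the 4.19-era mm/vmscan.c, SCAN_FRACT case):

	scan = div64_u64(scan * fraction[file], denominator);

	/*
	 * E.g. scan == 1, fraction[file] == 1, denominator == 200:
	 * 1 * 1 / 200 == 0, so the last page is never scanned.
	 */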
So disabling kmem accounting mitigates the two other issues, but not this one.
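
For reference, the fix in the series ("mm: don't miss the last page
because of round-off error") makes the division round up for offline
memcgs, roughly along these lines:

	scan = mem_cgroup_online(memcg) ?
	       div64_u64(scan * fraction[file], denominator) :
	       DIV64_U64_ROUND_UP(scan * fraction[file], denominator);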
Anyway, I'd prefer to wait a bit for test results and then backport the
series as a whole.
Thanks!