Message-ID: <20190130002356.GQ3973@sasha-vm>
Date: Tue, 29 Jan 2019 19:23:56 -0500
From: Sasha Levin <sashal@...nel.org>
To: Greg KH <greg@...ah.com>
Cc: Michal Hocko <mhocko@...nel.org>, Roman Gushchin <guro@...com>,
Dexuan Cui <decui@...rosoft.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Tejun Heo <tj@...nel.org>, Rik van Riel <riel@...riel.com>,
Konstantin Khlebnikov <koct9i@...il.com>,
Matthew Wilcox <willy@...radead.org>,
"Stable@...r.kernel.org" <Stable@...r.kernel.org>
Subject: Re: Will the recent memory leak fixes be backported to longterm
kernels?

On Fri, Dec 28, 2018 at 11:50:08AM +0100, Greg KH wrote:
>On Mon, Nov 05, 2018 at 10:21:23AM +0100, Michal Hocko wrote:
>> On Fri 02-11-18 19:38:35, Roman Gushchin wrote:
>> > On Fri, Nov 02, 2018 at 06:48:23PM +0100, Michal Hocko wrote:
>> > > On Fri 02-11-18 17:25:58, Roman Gushchin wrote:
>> > > > On Fri, Nov 02, 2018 at 05:51:47PM +0100, Michal Hocko wrote:
>> > > > > On Fri 02-11-18 16:22:41, Roman Gushchin wrote:
>> > > [...]
>> > > > > > 2) We do forget to scan the last page in the LRU list. So if we end
>> > > > > > up with a 1-page-long LRU, it can stay there basically forever.
>> > > > >
>> > > > > Why doesn't
>> > > > > /*
>> > > > > * If the cgroup's already been deleted, make sure to
>> > > > > * scrape out the remaining cache.
>> > > > > */
>> > > > > if (!scan && !mem_cgroup_online(memcg))
>> > > > > scan = min(size, SWAP_CLUSTER_MAX);
>> > > > >
>> > > > > in get_scan_count() work for that case?
>> > > >
>> > > > No, it doesn't. Let's look at the whole picture:
>> > > >
>> > > > size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
>> > > > scan = size >> sc->priority;
>> > > > /*
>> > > > * If the cgroup's already been deleted, make sure to
>> > > > * scrape out the remaining cache.
>> > > > */
>> > > > if (!scan && !mem_cgroup_online(memcg))
>> > > > scan = min(size, SWAP_CLUSTER_MAX);
>> > > >
>> > > > If size == 1, scan == 0 => scan = min(1, 32) == 1.
>> > > > And after proportional adjustment we'll have 0.
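
To make the round-off concrete, here is a minimal userspace sketch of
the computation above (the fraction/denominator values are invented;
in the kernel they come from the anon/file split in get_scan_count()):

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	#define SWAP_CLUSTER_MAX 32

	int main(void)
	{
		/* Illustrative: a 1-page LRU on an offline memcg.  The
		 * fraction/denominator values are made up; get_scan_count()
		 * derives them from the anon/file reclaim split.
		 */
		uint64_t size = 1, fraction = 50, denominator = 100;
		int priority = 12;			/* DEF_PRIORITY */

		uint64_t scan = size >> priority;	/* 1 >> 12 == 0 */
		if (!scan)	/* offline memcg: scrape the remaining cache */
			scan = size < SWAP_CLUSTER_MAX ? size : SWAP_CLUSTER_MAX;

		/* div64_u64() is plain truncating division, so
		 * 1 * 50 / 100 == 0: the last page is never scanned.
		 */
		scan = scan * fraction / denominator;
		printf("pages to scan: %" PRIu64 "\n", scan);
		return 0;
	}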
>> > >
>> > > My Friday brain hurts when looking at this, but if it doesn't work as
>> > > advertised then it should be fixed. I do not see any of your patches
>> > > touch this logic, so how come it would work after they are applied?
>> >
>> > This part works as expected. But the following
>> > scan = div64_u64(scan * fraction[file], denominator);
>> > reliably turns 1 page to scan into 0 pages to scan.
>>
>> OK, 68600f623d69 ("mm: don't miss the last page because of round-off
>> error") sounds like good and safe stable backport material.
>
>Thanks for this, now queued up.
>
>greg k-h

It seems that 172b06c32b949 ("mm: slowly shrink slabs with a relatively
small number of objects") and a76cf1a474d ("mm: don't reclaim inodes
with many attached pages") cause a regression reported against the 4.19
stable tree: https://bugzilla.kernel.org/show_bug.cgi?id=202441 .

Given the history and complexity of these patches (and of others from
the same series), it would be nice to understand whether this will be
fixed soon, or whether we should look into reverting the series for
now.
--
Thanks,
Sasha