Message-ID: <D6EDEBF1F91015459DB866AC4EE162CC023C402E@IRSMSX103.ger.corp.intel.com>
Date: Fri, 6 May 2016 15:10:00 +0000
From: "Odzioba, Lukasz" <lukasz.odzioba@...el.com>
To: Michal Hocko <mhocko@...nel.org>
CC: "Hansen, Dave" <dave.hansen@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"Shutemov, Kirill" <kirill.shutemov@...el.com>,
"Anaczkowski, Lukasz" <lukasz.anaczkowski@...el.com>
Subject: RE: mm: pages are not freed from lru_add_pvecs after process
termination
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> Or maybe the async nature of flushing turns
> out to be just impractical and unreliable and we will end up skipping
> THP (or all compound pages) for pcp LRU add cache. Let's see...
What if we simply skip the lru_add pvecs for compound pages?
That way we still have compound pages on the LRUs, but the problem goes
away. It is not quite what this naïve patch does, but it works nicely for me.
diff --git a/mm/swap.c b/mm/swap.c
index 03aacbc..c75d5e1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -392,7 +392,9 @@ static void __lru_cache_add(struct page *page)
 	get_page(page);
 	if (!pagevec_space(pvec))
 		__pagevec_lru_add(pvec);
 	pagevec_add(pvec, page);
+	if (PageCompound(page))
+		__pagevec_lru_add(pvec);
 	put_cpu_var(lru_add_pvec);
 }
Do we have any tests that I could use to measure the performance impact
of such a change before I start tweaking it? Or maybe it doesn't make
sense at all?
Thanks,
Lukas