Message-ID: <20191007143030.GN2381@dhcp22.suse.cz>
Date: Mon, 7 Oct 2019 16:30:30 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Yang Shi <yang.shi@...ux.alibaba.com>,
kirill.shutemov@...ux.intel.com, ktkhai@...tuozzo.com,
hannes@...xchg.org, hughd@...gle.com, shakeelb@...gle.com,
rientjes@...gle.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: thp: move deferred split queue to memcg's nodeinfo

On Mon 07-10-19 16:19:59, Vlastimil Babka wrote:
> On 10/2/19 10:43 AM, Michal Hocko wrote:
> > On Wed 02-10-19 06:16:43, Yang Shi wrote:
> >> Commit 87eaceb3faa59b9b4d940ec9554ce251325d83fe ("mm: thp: make
> >> deferred split shrinker memcg aware") made the deferred split queue
> >> per-memcg to resolve a memcg premature OOM problem. But all nodes
> >> now end up sharing the same queue, whereas before the commit there
> >> was one queue per node. That is not a big deal for memcg limit
> >> reclaim, but it may cause global kswapd to shrink THPs from a
> >> different node.
> >>
> >> 0-day testing reported a -19.6% regression in stress-ng's madvise
> >> test [1]. I didn't see that much regression on my test box (24
> >> threads, 48GB memory, 2 nodes). With the same test (stress-ng
> >> --timeout 1 --metrics-brief --sequential 72 --class vm --exclude
> >> spawn,exec) I saw an average -3% regression, and only sometimes (I
> >> ran the same test 10 times and averaged the results, since the test
> >> itself can vary by up to 15%); in some runs I saw no regression at
> >> all.
> >>
> >> This might be caused by deferred split queue lock contention. With
> >> some configurations (e.g. just one root memcg) the lock contention
> >> may be worse than before (given 2 nodes, two locks are reduced to
> >> one).
> >>
> >> So, move the deferred split queue to the memcg's nodeinfo to make
> >> it NUMA-aware again.
> >>
> >> With this change, stress-ng's madvise test sometimes shows an
> >> average 4% improvement, and I no longer see any degradation.
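
For context, the move described above amounts to roughly the sketch
below. The deferred_split fields follow the upstream structure; its
placement inside struct mem_cgroup_per_node (one queue and lock per
memcg per node) is illustrative rather than the literal diff.

#include <linux/list.h>
#include <linux/spinlock.h>

struct deferred_split {
        spinlock_t split_queue_lock;
        struct list_head split_queue;
        unsigned long split_queue_len;
};

struct mem_cgroup_per_node {
        /* ... existing per-node state (lruvec etc.) ... */

        /* one queue and lock per (memcg, node) instead of per memcg */
        struct deferred_split deferred_split_queue;
};

With one lock per (memcg, node) pair, two nodes get two locks again
even in the single-root-memcg case, which is the contention the numbers
above point at.
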
> >
> > My concern about this getting more and more complex
> > (http://lkml.kernel.org/r/20191002084014.GH15624@dhcp22.suse.cz) holds
> > here even more. Can we step back and reconsider the whole thing please?
>
> What about freeing immediately after the split via a workqueue, and
> also having a synchronous version called before going OOM? Maybe
> other things would benefit from this scheme too, instead of going
> through traditional reclaim and shrinkers?
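
A minimal sketch of that idea follows; queue_deferred_free(),
drain_deferred_free() and split_and_free_queued_thps() are hypothetical
names, and only DECLARE_WORK/schedule_work/flush_work are existing
workqueue APIs:

#include <linux/workqueue.h>

/* Hypothetical helper: split queued THPs and free unused tail pages. */
void split_and_free_queued_thps(void);

static void deferred_free_workfn(struct work_struct *work)
{
        split_and_free_queued_thps();
}

static DECLARE_WORK(deferred_free_work, deferred_free_workfn);

/* Asynchronous path: kick the worker once a split has been deferred. */
void queue_deferred_free(void)
{
        schedule_work(&deferred_free_work);
}

/* Synchronous path: drain everything before declaring OOM. */
void drain_deferred_free(void)
{
        flush_work(&deferred_free_work);
        split_and_free_queued_thps();
}

The synchronous drain before going OOM would stand in for the shrinker
as the last-resort path, which is what would make the scheme an
alternative to traditional reclaim and shrinkers.
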
That is exactly what we have discussed some time ago.
--
Michal Hocko
SUSE Labs