Message-ID: <20191008145537.GP6681@dhcp22.suse.cz>
Date:   Tue, 8 Oct 2019 16:55:37 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        kirill.shutemov@...ux.intel.com, ktkhai@...tuozzo.com,
        hannes@...xchg.org, hughd@...gle.com, shakeelb@...gle.com,
        rientjes@...gle.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: thp: move deferred split queue to memcg's nodeinfo

On Tue 08-10-19 17:44:37, Kirill A. Shutemov wrote:
> On Mon, Oct 07, 2019 at 04:30:30PM +0200, Michal Hocko wrote:
> > On Mon 07-10-19 16:19:59, Vlastimil Babka wrote:
> > > On 10/2/19 10:43 AM, Michal Hocko wrote:
> > > > On Wed 02-10-19 06:16:43, Yang Shi wrote:
> > > >> Commit 87eaceb3faa59b9b4d940ec9554ce251325d83fe ("mm: thp: make
> > > >> deferred split shrinker memcg aware") made the deferred split queue
> > > >> per-memcg to resolve a premature memcg OOM problem.  But all nodes
> > > >> now end up sharing the same queue, instead of the one queue per node
> > > >> used before the commit.  That is not a big deal for memcg limit
> > > >> reclaim, but it may cause global kswapd to shrink THPs from a
> > > >> different node.
> > > >>
> > > >> And 0-day testing reported a -19.6% regression in stress-ng's
> > > >> madvise test [1].  I didn't see that much regression on my test box
> > > >> (24 threads, 48GB memory, 2 nodes).  With the same test (stress-ng
> > > >> --timeout 1 --metrics-brief --sequential 72 --class vm --exclude
> > > >> spawn,exec) I saw an average -3% regression (averaged over 10 runs,
> > > >> since the test itself shows up to 15% variation in my testing), and
> > > >> sometimes no regression at all.
> > > >>
> > > >> This might be caused by deferred split queue lock contention.  With
> > > >> some configurations (e.g. just one root memcg) the lock contention
> > > >> may be worse than before (given 2 nodes, two locks are reduced to
> > > >> one).
> > > >>
> > > >> So, move the deferred split queue to the memcg's nodeinfo to make it
> > > >> NUMA-aware again.
> > > >>
> > > >> With this change, stress-ng's madvise test sometimes shows an
> > > >> average 4% improvement, and I no longer see any degradation.
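
In concrete terms, the change described above boils down to moving the
queue from the memcg itself into its per-node info, giving one queue per
(memcg, node) pair. A minimal sketch, assuming the struct deferred_split
layout mainline had at the time:

	struct deferred_split {
		spinlock_t split_queue_lock;
		struct list_head split_queue;
		unsigned long split_queue_len;
	};

	/* Before: one queue per memcg, shared by all nodes. */
	struct mem_cgroup {
		/* ... */
		struct deferred_split deferred_split_queue;
	};

	/* After: one queue per node within each memcg, restoring
	 * the NUMA awareness the pre-memcg code had. */
	struct mem_cgroup_per_node {
		/* ... */
		struct deferred_split deferred_split_queue;
	};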
> > > > 
> > > > My concern about this getting more and more complex
> > > > (http://lkml.kernel.org/r/20191002084014.GH15624@dhcp22.suse.cz) holds
> > > > here even more. Can we step back and reconsider the whole thing please?
> > > 
> > > What about freeing immediately after the split via a workqueue, and
> > > also having a synchronous version called before going OOM? Maybe other
> > > things would also benefit from this scheme instead of traditional
> > > reclaim and shrinkers?
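
A rough sketch of what that could look like (hypothetical names
throughout -- split_work, split_worker, queue_split, split_wq and
split_pending_sync are all invented for illustration, and this is not the
patch Kirill links below): queue each partially-unmapped huge page to a
workqueue that splits and frees it, and flush the queue synchronously on
the OOM path.

	/* Hypothetical illustration of the workqueue idea. */
	struct split_work {
		struct work_struct work;
		struct page *page;
	};

	static struct workqueue_struct *split_wq;	/* hypothetical */

	static void split_worker(struct work_struct *work)
	{
		struct split_work *sw = container_of(work, struct split_work, work);
		struct page *page = sw->page;

		if (trylock_page(page)) {
			/* On success, unused tail pages go straight back to
			 * the allocator; on failure (page busy) we simply
			 * tolerate it and let normal reclaim deal with it. */
			split_huge_page(page);
			unlock_page(page);
		}
		put_page(page);		/* drop the reference taken at queue time */
		kfree(sw);
	}

	/* Asynchronous path: queue the split as soon as the page becomes
	 * partially unmapped, instead of parking it on a deferred list.
	 * GFP_ATOMIC because the caller may hold page table locks. */
	static void queue_split(struct page *page)
	{
		struct split_work *sw = kmalloc(sizeof(*sw), GFP_ATOMIC);

		if (!sw)
			return;		/* best effort */
		get_page(page);
		sw->page = page;
		INIT_WORK(&sw->work, split_worker);
		queue_work(split_wq, &sw->work);
	}

	/* Synchronous variant for the OOM path: drain all pending splits
	 * before declaring OOM. */
	static void split_pending_sync(void)
	{
		flush_workqueue(split_wq);
	}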
> > 
> > That is exactly what we have discussed some time ago.
> 
> Yes, I've posted the patch:
> 
> http://lkml.kernel.org/r/20190827125911.boya23eowxhqmopa@box
> 
> But I'm still not sure that the approach is right. I expect it to trigger
> performance regressions: on a system with plenty of free memory, we would
> in many cases just pay the split cost for nothing.

I suspect it got lost in the email thread. Care to send it as a separate
RFC patch? We can put it into the mm tree for a cycle or two to see how it
behaves. The patch seems quite simple and straightforward from a very
quick glance. It is a bit of a hack that it piggybacks on top of the
shrinker code, which should ideally go away if this approach works, but
that is a minor detail.

-- 
Michal Hocko
SUSE Labs
