Message-ID: <20190925064516.GE23050@dhcp22.suse.cz>
Date:   Wed, 25 Sep 2019 08:45:16 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Yang Shi <yang.shi@...ux.alibaba.com>
Cc:     linux-kernel@...r.kernel.org, mm-commits@...r.kernel.org,
        vdavydov.dev@...il.com, shakeelb@...gle.com, rientjes@...gle.com,
        ktkhai@...tuozzo.com, kirill.shutemov@...ux.intel.com,
        hughd@...gle.com, hannes@...xchg.org, cai@....pw
Subject: Re: + mm-thp-extract-split_queue_-into-a-struct.patch added to -mm
 tree

On Tue 24-09-19 09:26:37, Yang Shi wrote:
> 
> 
> On 9/24/19 6:56 AM, Michal Hocko wrote:
> > Do we really need this if the deferred list is going to be shrunk more
> > pro-actively, as discussed already - I am sorry I do not have a link handy,
> > but in short the deferred list would be drained from a kworker context
> > more pro-actively rather than waiting for memory pressure to happen.
> 
> From our experience I really didn't see the current wait-for-memory-pressure
> approach as a problem; it works well and is still a good compromise. And I
> suppose we all agree that the side effects incurred by the more proactive
> kworker approach are definitely a concern (i.e. it may waste CPU cycles,
> break isolation, etc.) according to our discussion.
>
> And we do have other, much simpler ways to shrink THPs more proactively, for
> example waking up kswapd more aggressively by tuning watermark_scale_factor,
> and/or shrinking harder, etc.

No, we do not want to make THPs even more tricky to configure than they
are now. There are many howtos out there that recommend disabling THPs
because they might have performance or other subtle side effects. I do
not want to feed that cargo cult even more. Users really shouldn't even
notice that THPs are split, because that is very much an internal
implementation detail. As it stands, the existing implementation hides a
lot of memory and it takes some expertise to understand where that
memory went. And the latter part is something users tend to care about,
from experience.

So I really do think that waiting for memory pressure is simply the
wrong thing to do. It causes unnecessary reclaim, because some of the
pages are going to be dropped before the shrinker can act.
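To make the kworker idea from the quoted discussion above concrete, here is a
rough, non-authoritative sketch of what periodic draining could look like; it
is not code from this thread or from the kernel, and names such as
thp_drain_work, thp_deferred_drain and thp_drain_interval_secs are made up for
illustration:

#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

static void thp_deferred_drain(struct work_struct *work);
static DECLARE_DELAYED_WORK(thp_drain_work, thp_deferred_drain);

/* hypothetical knob: how often to drain the deferred split queue */
static unsigned int thp_drain_interval_secs = 10;

static void thp_deferred_drain(struct work_struct *work)
{
	/*
	 * A real implementation would walk the deferred split queue(s)
	 * here and split the huge pages that are no longer fully
	 * mapped - roughly what the shrinker's scan callback does
	 * under memory pressure today.
	 */

	/* re-arm so the drain keeps running periodically */
	schedule_delayed_work(&thp_drain_work, thp_drain_interval_secs * HZ);
}

static int __init thp_drain_init(void)
{
	schedule_delayed_work(&thp_drain_work, thp_drain_interval_secs * HZ);
	return 0;
}
late_initcall(thp_drain_init);

The trade-off being debated here is exactly this: such a worker drains the
queue without waiting for reclaim, but it burns CPU cycles on its own schedule
and is not obviously memcg aware.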

> Even if we have to drain THPs more proactively by whatever means in the
> future, I'd prefer that it be memcg aware as well.

OK, let's keep it then until we have another means for pro-active
draining.
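For reference, a rough sketch of the idea behind the patch named in the
Subject line, written from its title rather than copied from the patch itself
(so the field names are an approximation): group the per-node split_queue_*
fields into one struct, so the same structure can also be embedded in struct
mem_cgroup, which is what makes the deferred split shrinker memcg aware.

#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Approximate sketch: the deferred split queue state, previously three
 * loose split_queue_* fields in struct pglist_data, gathered into one
 * struct that can also be embedded per memcg.
 */
struct deferred_split {
	spinlock_t split_queue_lock;	/* protects the list and counter */
	struct list_head split_queue;	/* THPs queued for deferred split */
	unsigned long split_queue_len;	/* number of queued THPs */
};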
-- 
Michal Hocko
SUSE Labs
