Message-ID: <20190822152934.w6ztolutdix6kbvc@box>
Date:   Thu, 22 Aug 2019 18:29:34 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Vlastimil Babka <vbabka@...e.cz>
Cc:     Michal Hocko <mhocko@...nel.org>, kirill.shutemov@...ux.intel.com,
        Yang Shi <yang.shi@...ux.alibaba.com>, hannes@...xchg.org,
        rientjes@...gle.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH -mm] mm: account deferred split THPs into MemAvailable

On Thu, Aug 22, 2019 at 02:56:56PM +0200, Vlastimil Babka wrote:
> On 8/22/19 10:04 AM, Michal Hocko wrote:
> > On Thu 22-08-19 01:55:25, Yang Shi wrote:
> >> Available memory is one of the most important metrics for memory
> >> pressure.
> > 
> > I would disagree with this statement. It is a rough estimate that tells
> > how much memory you can allocate before going into a more expensive
> > reclaim (mostly swapping). Allocating that amount still might result in
> > direct reclaim induced stalls. I do realize that this is a simple
> > metric that is attractive to use and works in many cases though.
> > 
> >> Currently, the deferred split THPs are not accounted into
> >> available memory, but they are reclaimable actually, like reclaimable
> >> slabs.
> >> 
> >> And they seem very common with common workloads when THP is enabled.
> >> A simple run of the MariaDB test from mmtests with THP enabled as
> >> "always" shows it can generate over fifteen thousand deferred split
> >> THPs (accumulating around 30G in a one-hour run, 75% of my VM's 40G
> >> of memory). It looks worth accounting in MemAvailable.
> > 
> > OK, this makes sense. But your above numbers are really worrying.
> > Accumulating such a large number of pages that are likely not going to
> > be used is really bad. They are essentially blocking any higher order
> > allocations and also push the system towards more memory pressure.
> > 
> > IIUC deferred splitting is mostly a workaround for nasty locking issues
> > during splitting, right? This is not really an optimization to cache
> > THPs for reuse or something like that. What is the reason this is not
> > done from a worker context? At least THPs which would be freed
> > completely sound like a good candidate for kworker tear down, no?
> 
> Agreed that it's a good question. For Kirill :) Maybe with the kworker
> approach we also wouldn't need the cgroup awareness?

I don't remember a particular locking issue, but I cannot say there's
none :P

It's an artifact of decoupling PMD split from compound page split: the
same page can be mapped multiple times with a combination of PMDs and
PTEs. Splitting one PMD doesn't need to trigger a split of all the
PMDs and the underlying compound page.
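
Roughly, the bookkeeping looks like this (an illustrative sketch, not
the exact mm/huge_memory.c code; for instance, the real kernel keeps
the list_head in a tail page of the compound page):

struct deferred_split {
	spinlock_t split_queue_lock;
	struct list_head split_queue;
	unsigned long split_queue_len;
};

/* Called when a THP becomes partially mapped: instead of splitting
 * the compound page right away, just remember it for later. */
static void deferred_split_huge_page(struct deferred_split *ds_queue,
				     struct page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	if (list_empty(&page->lru)) {
		list_add_tail(&page->lru, &ds_queue->split_queue);
		ds_queue->split_queue_len++;
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
}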

Another consideration is that a page split can fail, and we need to
have a fallback for this case.
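
The fallback is what the shrinker path gives us: under memory pressure
the queue is scanned and split_huge_page() is attempted for each
entry; a failed split just leaves the page queued for a later pass.
A rough sketch along these lines (again simplified, names and details
are illustrative):

static unsigned long deferred_split_scan(struct deferred_split *ds_queue)
{
	unsigned long flags, nr_split = 0;
	struct page *page, *next;
	LIST_HEAD(list);

	/* Move the queue to a private list so the lock can be dropped. */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_splice_init(&ds_queue->split_queue, &list);
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	list_for_each_entry_safe(page, next, &list, lru) {
		if (!trylock_page(page))
			continue;
		/* split_huge_page() can fail, e.g. if the page has
		 * extra pins; on success it takes the page off the
		 * list, on failure the page simply stays queued. */
		if (!split_huge_page(page))
			nr_split++;
		unlock_page(page);
	}

	/* Whatever could not be split goes back on the queue. */
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_splice_tail(&list, &ds_queue->split_queue);
	ds_queue->split_queue_len -= nr_split;
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	return nr_split;
}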

Also, in most cases a THP split would just be a waste of time if we
did it on the spot. If you don't have memory pressure it's better to
wait until process termination: fewer pages on the LRU is still
beneficial.

The main source of partly mapped THPs is the exit path. When the PMD
mapping of a THP has been split across multiple VMAs (for instance due
to mprotect()), on the exit path we unmap the PTEs belonging to one
VMA just before unmapping the rest of the page. It would be a total
waste of time to split the page in this scenario.
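
For reference, the scenario is trivial to trigger from userspace.
Something like this (a minimal demo; assumes THP is enabled with 2M
huge pages, and there is no error handling to speak of):

#include <string.h>
#include <sys/mman.h>

#define HPAGE	(2UL << 20)	/* typical x86-64 THP size */

int main(void)
{
	size_t len = 4 * HPAGE;
	/* Over-allocate so we can pick a 2M-aligned start. */
	char *p = mmap(NULL, len + HPAGE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *m = (char *)(((unsigned long)p + HPAGE - 1) & ~(HPAGE - 1));

	madvise(m, len, MADV_HUGEPAGE);
	memset(m, 1, len);		/* fault in, hopefully as THPs */

	/*
	 * mprotect() a range that starts and ends in the middle of a
	 * huge page: the VMA gets split at non-PMD-aligned boundaries,
	 * so the PMD mappings of those THPs are split into PTEs while
	 * the compound pages themselves stay intact.
	 */
	mprotect(m + HPAGE / 2, HPAGE, PROT_READ);

	/*
	 * On exit the VMAs are unmapped one by one, so a THP whose
	 * mapping straddles two VMAs is "partially mapped" for a
	 * moment and lands on the deferred split queue, even though
	 * the whole page is freed right after.
	 */
	return 0;
}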

The whole deferred split thing still looks like a reasonable
compromise to me.

We could have some kind of watermark and try to keep the number of
deferred split THPs under it. But that comes with its own set of
problems: what if all these pages are pinned for a really long time
and effectively not available for split?
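
Purely hypothetical, to make the problem concrete (nothing like this
exists; the threshold and the names are made up):

/* Hypothetical: try to keep the queue below a watermark by splitting
 * eagerly once it is exceeded. */
#define DEFERRED_SPLIT_WATERMARK	1024	/* made-up value */

static void trim_deferred_queue(struct deferred_split *ds_queue)
{
	while (ds_queue->split_queue_len > DEFERRED_SPLIT_WATERMARK) {
		/* If every queued page is pinned, the scan makes no
		 * progress and we have to give up anyway. */
		if (!deferred_split_scan(ds_queue))
			break;
	}
}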

-- 
 Kirill A. Shutemov
