Message-ID: <20191004092808.GC9578@dhcp22.suse.cz>
Date:   Fri, 4 Oct 2019 11:28:08 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Vlastimil Babka <vbabka@...e.cz>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [rfc] mm, hugetlb: allow hugepage allocations to excessively
 reclaim

On Thu 03-10-19 12:52:33, David Rientjes wrote:
> On Thu, 3 Oct 2019, Vlastimil Babka wrote:
> 
> > I think the key differences between Mike's tests and Michal's is this part
> > from Mike's mail linked above:
> > 
> > "I 'tested' by simply creating some background activity and then seeing
> > how many hugetlb pages could be allocated. Of course, many tries over
> > time in a loop."
> > 
> > - "some background activity" might be different than Michal's pre-filling
> >   of the memory with (clean) page cache
> > - "many tries over time in a loop" could mean that kswapd has time to 
> >   reclaim and eventually the new condition for pageblock order will pass
> >   every few retries, because there's enough memory for compaction and it
> >   won't return COMPACT_SKIPPED
> > 
> 
> I'll rely on Mike, the hugetlb maintainer, to assess the trade-off between 
> the potential for encountering very expensive reclaim as Andrea did and 
> the possibility of being able to allocate additional hugetlb pages at 
> runtime if we did that expensive reclaim.

That trade-off has been expressed by __GFP_RETRY_MAYFAIL, which got
broken by b39d0ee2632d.
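
For context, __GFP_RETRY_MAYFAIL is how a caller says "reclaim and
compact aggressively for this request, but fail rather than OOM-kill".
A minimal illustrative snippet of such a high-order allocation follows;
this is a sketch, not the actual mm/hugetlb.c allocation path, and the
order is hardcoded for 2MB pages on x86_64:

/*
 * Illustrative only -- not the real hugetlb code.  The flag asks the
 * allocator to try hard (reclaim/compact) for this costly-order
 * request, but to return NULL instead of invoking the OOM killer
 * when the request cannot be satisfied.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *grab_one_huge_order_page(void)
{
	const unsigned int order = 9;	/* 2MB with 4K base pages on x86_64 */
	gfp_t gfp = GFP_KERNEL | __GFP_COMP | __GFP_NOWARN |
		    __GFP_RETRY_MAYFAIL;

	return alloc_pages(gfp, order);	/* NULL on failure, no OOM kill */
}
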

> For parity with previous kernels it seems reasonable to ask that this 
> remains unchanged since allocating large amounts of hugetlb pages has 
> different latency expectations than during page fault.  This patch is 
> available if he'd prefer to go that route.
> 
> On the other hand, userspace could achieve similar results if it were to 
> use vm.drop_caches and explicitly triggered compaction through either 
> procfs or sysfs before writing to vm.nr_hugepages, and that would be much 
> faster because it would be done in one go.  Users who allocate through the 
> kernel command line would obviously be unaffected.
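
For concreteness, that userspace sequence maps to three standard procfs
knobs; a rough sketch, assuming root and the default hugepage size, with
the target count hardcoded for illustration:

/*
 * Sketch of the sequence described above: drop the page cache,
 * trigger compaction, then grow the hugetlb pool.  Minimal error
 * handling on purpose; run as root.
 */
#include <stdio.h>

static void write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return;
	}
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* 3 = drop clean page cache plus reclaimable slab objects */
	write_knob("/proc/sys/vm/drop_caches", "3");
	/* 1 = compact all zones on all nodes */
	write_knob("/proc/sys/vm/compact_memory", "1");
	/* grow the pool for the default hugepage size */
	write_knob("/proc/sys/vm/nr_hugepages", "512");

	return 0;
}

The same can of course be done from a shell with echo redirected into
those three files.
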

Requesting that userspace drop _all_ page cache in order to allocate a
number of hugetlb pages, or to satisfy any other affected
__GFP_RETRY_MAYFAIL request, is simply not reasonable IMHO.
-- 
Michal Hocko
SUSE Labs
