Message-ID: <alpine.DEB.2.10.1507311358541.5910@chino.kir.corp.google.com>
Date:	Fri, 31 Jul 2015 14:09:07 -0700 (PDT)
From:	David Rientjes <rientjes@...gle.com>
To:	Jörn Engel <joern@...estorage.com>
cc:	Mike Kravetz <mike.kravetz@...cle.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: hugetlb pages not accounted for in rss

On Thu, 30 Jul 2015, Jörn Engel wrote:

> > If I want to track hugetlb usage on a per-task basis, do I then need to
> > create one cgroup per task?
> > 

I think per-task tracking would only be used for debugging or testing, 
and if you have root and are considering organizing processes into a 
hugetlb_cgroup hierarchy just for that, presumably you would not bother 
and instead look at each thread's smaps to find its hugetlb memory usage.
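
For example, hugetlb vmas show up in smaps with a KernelPageSize larger 
than the base page size, so something like the following rough sketch 
(assuming 4 kB base pages and the standard smaps text format) can sum 
them for a process:

#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	long size_kb = 0, total_kb = 0, kps_kb;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/smaps",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "Size: %ld kB", &size_kb) == 1)
			continue;	/* remember the current vma's size */
		/* hugetlb vma: kernel page size above the assumed 4 kB base */
		if (sscanf(line, "KernelPageSize: %ld kB", &kps_kb) == 1 &&
		    kps_kb > 4)
			total_kb += size_kb;
	}
	fclose(f);
	printf("hugetlb mappings: %ld kB\n", total_kb);
	return 0;
}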

> Maybe some background is useful.  I would absolutely love to use
> transparent hugepages.  They are absolutely perfect in every respect,
> except for performance.  With transparent hugepages we get higher
> latencies.  Small pages are unacceptable, so we are forced to use
> non-transparent hugepages.
> 

Believe me, we are on the same page that way :)  We still deploy 
configurations with hugetlb memory because we need to meet certain 
allocation requirements that can only be satisfied at boot, e.g. with the 
hugepages= kernel command-line parameter.
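
For context, a minimal sketch of how a process then maps memory out of 
that boot-reserved pool; MAP_HUGETLB is the upstream interface, and the 
2 MB hugepage size is an assumption:

#include <stdio.h>
#include <sys/mman.h>

#define LEN	(2UL << 20)	/* one 2 MB hugepage, assuming that size */

int main(void)
{
	void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");	/* fails if the pool is empty */
		return 1;
	}
	*(volatile char *)p = 1;	/* first touch consumes a hugepage */
	munmap(p, LEN);
	return 0;
}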

With regard to the performance of thp, I can think of two things that are 
affecting you:

 - allocation cost

   Async memory compaction in the page fault path for thp memory is very
   lightweight, and it happily falls back to small pages instead.  Memory
   compaction is continually being improved, and there is ongoing work to
   run it both periodically and in the background to keep fragmentation
   low.  The ultimate goal is to remove async compaction from the thp
   page fault path entirely and rely on those improvements, so that we
   get a high allocation success rate and pay less when we fail.  (A
   userspace sketch for observing this cost follows this list.)

 - NUMA cost

   Until very recently, thp memory could easily be allocated remotely
   where small pages were available locally.  That has since been
   improved: we now allocate thp only locally and then fall back to small
   pages locally first.  Khugepaged can still migrate memory remotely,
   but it will allocate the hugepage on the node where the majority of
   the small pages reside.
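
A rough userspace sketch for observing the allocation cost above, timing 
first-touch faults with and without thp; the 256 MB size is arbitrary, 
and the numbers depend heavily on how fragmented memory already is:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define LEN	(256UL << 20)	/* 256 MB; illustrative size */

static double touch_ms(int advice)
{
	struct timespec t0, t1;
	char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return -1;
	madvise(p, LEN, advice);	/* request or refuse thp for the range */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	memset(p, 1, LEN);		/* fault the whole range in */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	munmap(p, LEN);
	return (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(void)
{
	printf("thp faults:        %.1f ms\n", touch_ms(MADV_HUGEPAGE));
	printf("small page faults: %.1f ms\n", touch_ms(MADV_NOHUGEPAGE));
	return 0;
}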

> The part of our system that uses small pages is pretty much constant,
> while total system memory follows Moore's law.  When possible we even
> try to shrink that part.  Hugepages already dominate today and things
> will get worse.
> 

I wrote a patchset, hugepages overcommit, that allows unmapped hugetlb 
pages to be freed under oom conditions, up to a certain threshold, before 
calling the oom killer, and then kicks off a background thread to try to 
reallocate them.  The idea is to keep the hugetlb pool as large as 
possible until oom, reclaim only what is needed, and then try to 
reallocate it.  Not sure whether it would help your particular use case 
or not.
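
The patchset itself isn't upstream, but for comparison, the existing pool 
sizing knobs look like this; both paths are the upstream /proc/sys/vm 
interfaces and need root, and the values are made up:

#include <stdio.h>

/* write a value to a vm sysctl; needs root */
static int write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* persistent pool: allocated now and held in reserve */
	write_knob("/proc/sys/vm/nr_hugepages", "512");
	/* surplus pool: up to this many extra hugepages may be allocated
	 * on demand and are returned to the buddy allocator once freed */
	write_knob("/proc/sys/vm/nr_overcommit_hugepages", "256");
	return 0;
}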
