Message-ID: <CAAa6QmSiuFF6Oe0-j+eD0ma2tZAbhZuwENDYSZQSBrh1oeaLdA@mail.gmail.com>
Date:   Thu, 9 Mar 2023 16:05:13 -0800
From:   "Zach O'Keefe" <zokeefe@...gle.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Peter Xu <peterx@...hat.com>, David Hildenbrand <david@...hat.com>,
        Rik van Riel <riel@...riel.com>,
        Mike Rapoport <rppt@...nel.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: THP backed thread stacks

On Thu, Mar 9, 2023 at 3:33 PM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>
> On 03/09/23 14:38, Zach O'Keefe wrote:
> > On Wed, Mar 8, 2023 at 11:02 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
> > >
> > > On 03/06/23 16:40, Mike Kravetz wrote:
> > > > On 03/06/23 19:15, Peter Xu wrote:
> > > > > On Mon, Mar 06, 2023 at 03:57:30PM -0800, Mike Kravetz wrote:
> > > > > >
> > > > > > Just wondering if there is anything better or more selective that can be
> > > > > > done?  Does it make sense to have THP backed stacks by default?  If not,
> > > > > > who would be best at disabling?  A couple thoughts:
> > > > > > - The kernel could disable huge pages on stacks.  libpthread/glibc pass
> > > > > >   the unused flag MAP_STACK.  We could key off this and disable huge pages.
> > > > > >   However, I'm sure there is somebody somewhere today that is getting better
> > > > > >   performance because they have huge pages backing their stacks.
> > > > > > - We could push this to glibc/libpthreads and have them use
> > > > > >   MADV_NOHUGEPAGE on thread stacks.  However, this also has the potential
> > > > > >   of regressing performance if somebody somewhere is getting better
> > > > > >   performance due to huge pages.
> > > > >
> > > > > Yes, it seems to me it's never safe to change a default behavior.
> > > > >
> > > > > For stacks I really can't tell why it must be different here.  I assume the
> > > > > problem is the wasted space, and that it is easily exaggerated with N threads.
> > > > > But IIUC it'll be the same as THP for normal memory, e.g., there can be a
> > > > > per-thread mmap() of 2MB even if only 4K of each is used; if such an mmap()
> > > > > is populated by THP for each thread, there will also be a huge waste.
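
(Aside, since the MADV_NOHUGEPAGE option keeps coming up: a minimal
sketch of what it could look like if done from userspace at
thread-creation time -- purely illustrative, not what glibc does today.
MAP_STACK below is the currently-unused hint mentioned earlier; the
madvise() is the part that opts the range out of THP:

/* Allocate a thread stack ourselves, opt it out of THP with
 * MADV_NOHUGEPAGE, and hand it to pthread_create() via
 * pthread_attr_setstack().  Illustrative sketch only. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

#define STACK_SZ (2UL * 1024 * 1024)    /* 2MiB, THP-sized */

static void *worker(void *arg)
{
        return NULL;
}

int main(void)
{
        pthread_attr_t attr;
        pthread_t tid;

        /* MAP_STACK is currently a no-op hint on Linux. */
        void *stk = mmap(NULL, STACK_SZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
        if (stk == MAP_FAILED)
                return 1;

        /* Opt this range out of fault-time THP and khugepaged. */
        if (madvise(stk, STACK_SZ, MADV_NOHUGEPAGE))
                perror("madvise");

        pthread_attr_init(&attr);
        pthread_attr_setstack(&attr, stk, STACK_SZ);
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
}

The same madvise() done inside glibc's own stack allocation path would
be the "push this to glibc/libpthreads" variant.)
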
> > >
> > > I may be alone in my thinking here, but it seems that stacks by their nature
> > > are not generally good candidates for huge pages.  I am just thinking about
> > > the 'normal' use case where stacks contain local function data and arguments.
> > > Am I missing something, or are huge pages really a benefit here?
> > >
> > > Of course, I can imagine some thread with a large amount of frequently
> > > accessed data allocated on its stack which could benefit from huge
> > > pages.  But this seems to be the exception rather than the rule.
> > >
> > > I understand the argument that THP always means always and everywhere.
> > > It just seems that thread stacks may be 'special enough' to consider
> > > disabling by default.
> >
> > Just my drive-by 2c, but I would agree with you here (at least wrt
> > hugepages not being good candidates, in general). A user mmap()'ing
> > memory has a lot more (direct) control over how they fault in / utilize
> > the memory: they know when they're running out of space and can map more
> > space as needed. For these stacks, you're setting the stack size to
> > 2MB just as a precaution so you can avoid overflow -- AFAIU there is
> > no intention of using the whole mapping (and looking at some data,
> > it's very likely you won't come close).
> >
> > That said, why bother setting the stack attribute to 2MiB if there
> > isn't some intention of it possibly being THP-backed? Moreover, how did
> > it happen that the mappings were always hugepage-aligned here?
>
> I do not have the details as to why the Java group chose 2MB for stack
> size.  My 'guess' is that they are trying to save on virtual space (although
> that seems silly).  2MB actually reduces the default size.  The
> default pthread stack size on my desktop (Fedora) is 8MB [..]

Oh, that's interesting -- I did not know that. That's huge.
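
For reference, a quick way to check that default on a given box (glibc
typically derives it from RLIMIT_STACK, i.e. "ulimit -s";
pthread_getattr_default_np() is a glibc extension) -- just a sketch:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

int main(void)
{
        pthread_attr_t attr;
        size_t sz;

        pthread_getattr_default_np(&attr);      /* glibc >= 2.18 */
        pthread_attr_getstacksize(&attr, &sz);
        printf("default pthread stack size: %zu MiB\n", sz >> 20);
        return 0;
}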

> [..]  This is also
> a nice multiple of the THP size.
>
> I think the hugepage alignment in their environment was largely luck.
> One suggestion made was to change the stack size to avoid the alignment and
> hugepage usage.  That 'works' but seems kind of hackish.

That was my first thought as well, assuming the alignment was purely
due to luck and not somebody manually specifying it. Agreed it's kind
of hackish if anyone can get bitten by this by sheer luck.
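
If it helps the investigation, here's a small sketch to see where glibc
puts a 2MiB thread stack on a given setup (i.e. whether it happens to
land on a 2MiB boundary and so becomes a THP candidate). Illustrative
only; note the address reported excludes the guard area, so the
underlying mmap() may start a bit below it depending on the glibc
version:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

static void *worker(void *arg)
{
        pthread_attr_t attr;
        void *base;
        size_t sz, guard;

        pthread_getattr_np(pthread_self(), &attr);      /* glibc extension */
        pthread_attr_getstack(&attr, &base, &sz);
        pthread_attr_getguardsize(&attr, &guard);
        printf("stack base %p, size %zuK, guard %zuK, offset into 2MiB: %luK\n",
               base, sz >> 10, guard >> 10,
               ((unsigned long)base & (HPAGE_SIZE - 1)) >> 10);
        pthread_attr_destroy(&attr);
        return NULL;
}

int main(void)
{
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, HPAGE_SIZE);   /* 2MiB, as in the report */
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        return 0;
}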

> Also, David H pointed out the somewhat recent commit to align sufficiently
> large mappings to THP boundaries.  This is going to make all stacks huge
> page aligned.

I think that change was reverted by Linus in commit 0ba09b173387
("Revert "mm: align larger anonymous mappings on THP boundaries""),
until its perf regressions were better understood -- and I haven't
seen a revamp of it.

> --
> Mike Kravetz
