Message-ID: <CAPTztWZvg0ZUES0DpXj2JXndvfuZXzSXbDmJ3e+80hjvywFWzg@mail.gmail.com>
Date: Tue, 11 Feb 2025 08:49:21 -0800
From: Frank van der Linden <fvdl@...gle.com>
To: Luiz Capitulino <luizcap@...hat.com>
Cc: Oscar Salvador <osalvador@...e.de>, linux-kernel@...r.kernel.org, yaozhenguo1@...il.com, 
	muchun.song@...ux.dev, linux-mm@...ck.org, akpm@...ux-foundation.org, 
	david@...hat.com, rppt@...nel.org
Subject: Re: [PATCH] mm: hugetlb: avoid fallback for specific node allocation
 of 1G pages

On Tue, Feb 11, 2025 at 6:51 AM Luiz Capitulino <luizcap@...hat.com> wrote:
>
> On 2025-02-11 04:06, Oscar Salvador wrote:
> > On Mon, Feb 10, 2025 at 10:48:56PM -0500, Luiz Capitulino wrote:
> >> When using the HugeTLB kernel command-line to allocate 1G pages from
> >> a specific node, such as:
> >>
> >>     default_hugepagesz=1G hugepages=1:1
> >>
> >> If node 1 happens to not have enough memory for the requested number of
> >> 1G pages, the allocation falls back to other nodes. A quick way to
> >> reproduce this is by creating a KVM guest with a memory-less node and
> >> trying to allocate one 1G page from it. Instead of failing, the allocation
> >> will fall back to other nodes.
> >>
> >> This defeats the purpose of node-specific allocation. Also, node-specific
> >> allocation of 2M pages doesn't have this behavior: the allocation
> >> will just fail for the pages it can't satisfy.
> >>
> >> This issue happens because HugeTLB calls memblock_alloc_try_nid_raw()
> >> for the 1G boot-time allocation, and that function falls back to other nodes
> >> if the allocation can't be satisfied. Use memblock_alloc_exact_nid_raw()
> >> instead, which ensures that the allocation will only be satisfied from
> >> the specified node.
> >>
> >> Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
> >>
> >> Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
> >
> > Acked-by: Oscar Salvador <osalvador@...e.de>
> >
> > This was discussed yesterday in [1], ccing Frank for awareness.
> >
> > [1] https://patchwork.kernel.org/project/linux-mm/patch/20250206185109.1210657-6-fvdl@google.com/
>
> Interesting, thanks for the reference.
>
> I stumbled on this back in December while debugging a HugeTLB issue
> at Red Hat (David knows it ;) ) and have had this patch pending for more
> than a week now...
>

Looks good. I'll drop the same change from my upcoming v4 series. This
will create a contextual dependency, but that's OK, since this one will
go in first in any case.
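
For context, a minimal sketch of the call-site change under discussion.
The helper below is illustrative only (its name and surrounding logic are
assumptions, not the actual mm/hugetlb.c code); the point is the swap from
the try-nid to the exact-nid memblock variant for node-specific requests:

    /* Illustrative sketch, not the actual hugetlb boot allocation code. */
    static void * __init hugetlb_bootmem_alloc_sketch(phys_addr_t size, int nid)
    {
            if (nid != NUMA_NO_NODE)
                    /*
                     * Node-specific request: allocate strictly from @nid,
                     * failing instead of falling back to other nodes.
                     */
                    return memblock_alloc_exact_nid_raw(size, size, 0,
                                    MEMBLOCK_ALLOC_ACCESSIBLE, nid);

            /* No specific node requested: cross-node fallback is fine. */
            return memblock_alloc_try_nid_raw(size, size, 0,
                            MEMBLOCK_ALLOC_ACCESSIBLE, nid);
    }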

Reviewed-by: Frank van der Linden <fvdl@...gle.com>
