Message-ID: <8128c0338e5df5476ec9fd6eb3079964@kenip.in>
Date: Mon, 30 Jun 2025 17:18:02 +0530
From: siddhartha@...ip.in
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Dev Jain <dev.jain@....com>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, mgorman@...e.de
Subject: Re: [PATCH] mm: limit THP alignment – performance gain observed in AI inference workloads

On 2025-06-30 16:24, Lorenzo Stoakes wrote:
> +cc Vlastimil, please keep him cc'd on discussions here as the author
> of this fix in the conversation.
> 
> On Mon, Jun 30, 2025 at 10:55:52AM +0530, Dev Jain wrote:
>> 
>> 
>> For this workload, do you enable mTHPs on your system? My plan is to
>> make a similar patch for the mTHP case and I'd be grateful if you can
>> get me some results : )
> 
> I'd urge caution here.
> 
> The reason there was a big perf improvement is that, for certain
> workloads, the original patch by Rik caused issues with VMA
> fragmentation. So rather than getting adjacent VMAs that might later be
> khugepage'd, you'd get a bunch of VMAs that were auto-aligned and thus
> fragmented from one another.
> 
> So while you got speed ups on some workloads, you got really bad perf
> impact on some that were subject to this.
> 
> The observed speed up was on a very specific benchmark also. While it's
> a great improvement, it's important to understand the context (see the
> original patch for details [0]).
> 
> I do think it's worth considering changing
> thp_get_unmapped_area_vmflags() for mTHP, as it's currently very
> limited (just PMD alignment) and it'd possibly be sensible to change
> this to checking against allowed THP alignments, but I'd not assume
> this is going to get some crazy speed up as observed here.
> 
> Note that any such change would probably require some refactoring in
> THP first to make it not quite so awful.
> 
> I also think for Siddharta's usecase mTHP isn't really relevant is it,
> as intel do not support mTHP currently do they?
> 
> Regards, Lorenzo
> 
> [0]: https://lore.kernel.org/all/20241024151228.101841-2-vbabka@suse.cz/T/#u

Hi Lorenzo, Dev, All,

Thank you for the thoughtful responses and for engaging with the 
performance implications of the patch.

You're absolutely right that the observed speedup came from a specific
class of workloads: in this case, token-length-variable AI inference
pipelines based on Hugging Face Transformers and ONNX Runtime. These
workloads trigger highly dynamic anonymous memory allocation patterns,
often in bursts aligned with model shard loading and attention map
resizing. In such cases, VMA fragmentation due to PMD-aligned,
non-PMD-sized mappings led to a near-complete loss of THP utilization.
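
To make the "loss of THP utilization" point concrete, here is a
minimal, self-contained sketch of how this can be observed (illustrative
only; the region count and size are placeholders, not the actual
inference pipeline). It maps anonymous regions slightly larger than PMD
size but not a multiple of it, touches them, and reports AnonHugePages
from /proc/self/smaps_rollup:

/*
 * Illustrative sketch only: map anonymous regions slightly larger than
 * PMD size (but not a multiple of it), fault them in, and report how
 * much anonymous memory ended up THP-backed. The region count and size
 * are placeholders, not the real inference workload.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define NREGIONS  64
#define REGION_SZ ((2UL << 20) + 4096)   /* just over a 2 MiB PMD */

int main(void)
{
    char line[256];
    FILE *f;

    for (int i = 0; i < NREGIONS; i++) {
        char *p = mmap(NULL, REGION_SZ, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 1, REGION_SZ);         /* fault the pages in */
    }

    /* AnonHugePages shows how much of the above is backed by THPs. */
    f = fopen("/proc/self/smaps_rollup", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "AnonHugePages:", 14))
            fputs(line, stdout);
    fclose(f);
    return 0;
}

The idea is simply to compare that value across kernels with and without
the alignment limit; absolute numbers will of course depend on THP and
khugepaged settings.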

Once the forced PMD alignment was limited to PMD-sized mappings (via
Vlastimil's fix to Rik's original patch), we observed a substantial
restoration of THP behavior, which is where the performance gains came
from. That said, I completely agree that:

- Not all workloads benefit from this
- Some may even regress if the underlying VMAs aren't THP-coalescible
  for other reasons

Still, for high-throughput inference workloads on modern Intel CPUs, 
this behavior isn’t a corner case. The shift toward multi-model 
concurrent serving (e.g., LLM-as-a-Service) means this dynamic 
allocation pattern is becoming common, especially in 
edge/latency-sensitive deployments.

🧠 On mTHP: Intel Does Support It
Regarding mTHP — yes, Intel platforms (especially server-grade Xeon 
processors from Cascade Lake onward) do support mapped transparent huge 
pages, including via:

- tmpfs-backed files
- madvise(MADV_HUGEPAGE) on file mappings
- shmem usage with shmem_enabled in the kernel

So I’d say mTHP is certainly relevant for workloads where model weights 
or tensors are pre-cached or memory-mapped — a pattern we’re also seeing 
as Hugging Face, ONNX, and PyTorch ecosystems move toward zero-copy 
tensor sharing.
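
As a purely illustrative example of the madvise(MADV_HUGEPAGE) path
above, this is roughly what mapping a tmpfs-backed weight file and
hinting huge pages looks like (the path and size are placeholders, and
it assumes tmpfs at /dev/shm with shmem/file THP enabled):

/*
 * Purely illustrative sketch: map a tmpfs-backed file and hint huge
 * pages with madvise(MADV_HUGEPAGE). Assumes /dev/shm is tmpfs and that
 * shmem/file THP is enabled; the filename and size are placeholders,
 * not the actual model-weight files.
 */
#define _GNU_SOURCE             /* for MADV_HUGEPAGE */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t sz = 64UL << 20;        /* 64 MiB stand-in for weights */
    int fd = open("/dev/shm/weights.bin", O_RDWR | O_CREAT, 0600);

    if (fd < 0 || ftruncate(fd, sz)) {
        perror("open/ftruncate");
        return 1;
    }

    void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Ask the kernel to back this mapping with huge pages if it can. */
    if (madvise(p, sz, MADV_HUGEPAGE))
        perror("madvise(MADV_HUGEPAGE)");

    /* ... read/load tensors through p here ... */

    munmap(p, sz);
    close(fd);
    return 0;
}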

Given that, I'd absolutely be interested in testing any mTHP-targeted 
patch — and I’d be happy to help validate it, especially if it avoids 
the VMA fragmentation pitfall you rightly pointed out.

Thanks again for the detailed feedback, and I’ll try to replicate and 
share further traces (from my local testbed) since I currently don’t 
have access to the original Intel Developer Cloud logs.
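
For reference, the data I intend to capture there is nothing exotic:
the thp_* counters from /proc/vmstat before and after a run, e.g. with
a trivial snippet like the one below (illustrative only, not taken from
the original runs):

/*
 * Illustrative snippet: dump the standard thp_* counters from
 * /proc/vmstat; taking the delta across an inference run shows the
 * huge-page faults and khugepaged collapses that happened during it.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/vmstat", "r");
    char line[128];

    if (!f) {
        perror("fopen /proc/vmstat");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "thp_", 4))   /* thp_fault_alloc, thp_collapse_alloc, ... */
            fputs(line, stdout);
    fclose(f);
    return 0;
}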

Best regards,
Siddhartha Sharma
