Message-ID: <ee92d6a9-529a-4ac5-b3d0-0ff4e9085786@lucifer.local>
Date: Tue, 1 Jul 2025 06:28:55 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Dev Jain <dev.jain@....com>
Cc: siddhartha@...ip.in, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
mgorman@...e.de, Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH] mm: limit THP alignment – performance gain observed in AI inference workloads
On Tue, Jul 01, 2025 at 10:53:09AM +0530, Dev Jain wrote:
>
> On 30/06/25 4:24 pm, Lorenzo Stoakes wrote:
> > +cc Vlastimil, please keep him cc'd on discussions here as the author of this
> > fix in the conversation.
> >
> > On Mon, Jun 30, 2025 at 10:55:52AM +0530, Dev Jain wrote:
> > >
> > > For this workload, do you enable mTHPs on your system? My plan is to make a
> > > similar patch for
> > >
> > > the mTHP case and I'd be grateful if you can get me some results : )
> > I'd urge caution here.
> >
> > The reason there was a big perf improvement is that, for certain workloads, the
> > original patch by Rik caused issues with VMA fragmentation. So rather than
> > getting adjacent VMAs that might later be khugepage'd, you'd get a bunch of VMAs
> > that were auto-aligned and thus fragmented from one another.
>
> How does getting two different adjacent VMAs allow them to be khugepage'd if
> both are less than PMD size? khugepaged operates per vma, I'm missing something.
Via a (future) VMA merge - adjacent VMAs with compatible attributes get merged, and khugepaged can then collapse the merged range.
Consider allocations that are >PMD but <2*PMD in size, for instance. Now you get
fragmentation. For some workloads you would have previously eventually got PMD
leaf mapping, PMD leaf mapping, PMD leaf mapping, etc. contiguously; with this
arrangement you get PMD mapping, <bunch of PTE mappings>, PMD mapping, etc.