Message-ID: <d396be64-3a95-499f-a074-68297fa7bb48@lucifer.local>
Date: Wed, 3 Sep 2025 14:31:21 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Harry Yoo <harry.yoo@...cle.com>, Uladzislau Rezki <urezki@...il.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
maple-tree@...ts.infradead.org
Subject: Re: [PATCH v6 08/10] mm, vma: use percpu sheaves for vm_area_struct cache

On Wed, Sep 03, 2025 at 02:47:28PM +0200, Vlastimil Babka wrote:
> On 9/2/25 13:13, Lorenzo Stoakes wrote:
> > On Wed, Aug 27, 2025 at 10:26:40AM +0200, Vlastimil Babka wrote:
> >> Create the vm_area_struct cache with percpu sheaves of size 32 to
> >> improve its performance.
> >>
> >> Reviewed-by: Suren Baghdasaryan <surenb@...gle.com>
> >> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> >> ---
> >> mm/vma_init.c | 1 +
> >> 1 file changed, 1 insertion(+)
> >>
> >> diff --git a/mm/vma_init.c b/mm/vma_init.c
> >> index 8e53c7943561e7324e7992946b4065dec1149b82..52c6b55fac4519e0da39ca75ad018e14449d1d95 100644
> >> --- a/mm/vma_init.c
> >> +++ b/mm/vma_init.c
> >> @@ -16,6 +16,7 @@ void __init vma_state_init(void)
> >> struct kmem_cache_args args = {
> >> .use_freeptr_offset = true,
> >> .freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
> >> + .sheaf_capacity = 32,
> >
> > This breaks the VMA tests.
> >
> > Please also update tools/testing/vma/vma_internal.h to add this field
> > (probably actually at the commit that adds the field in mainline :P)
>
> Thanks, one of Liam's patches to support maple tree tests with sheaves does
> that. We somehow missed that the vma test also needs it, so we will reorder
> the series accordingly.
>
> > I do wish we could have these tests be run by a bot :/ seems today I am
> > that bot :)
> Sorry :)
Thanks :) Just happy to find the problem and help it get sorted! :>)