Message-ID: <ZnsjIB2byIxSgbjc@pc636>
Date: Tue, 25 Jun 2024 22:05:52 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: Nick Bowler <nbowler@...conx.ca>, Hailong Liu <hailong.liu@...o.com>,
linux-kernel@...r.kernel.org,
Linux regressions mailing list <regressions@...ts.linux.dev>,
linux-mm@...ck.org, sparclinux@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: PROBLEM: kernel crashes when running xfsdump since ~6.4
> > > > > /**
> > > > > * cpumask_next - get the next cpu in a cpumask
> > > > > * @n: the cpu prior to the place to search (i.e. return will be > @n)
> > > > > * @srcp: the cpumask pointer
> > > > > *
> > > > > * Return: >= nr_cpu_ids if no further cpus set.
> > > >
> > > > Ah, I got what you mean. In the vbq case, it may not have a chance to
> > > > return nr_cpu_ids. Because the hashed index limits the range to
> > > > [0, nr_cpu_ids-1], and the cpu_possible(index) check guarantees it
> > > > won't be the highest CPU number [nr_cpu_ids-1], since CPU[nr_cpu_ids-1]
> > > > must be a possible CPU.
> > > >
> > > > Do I miss some corner cases?
> > > >
> > > Right. We guarantee that the highest CPU is available by doing: % nr_cpu_ids.
> > > So we do not need to use the *next_wrap() variant. You do not miss anything :)
> > >
> > > Hailong Liu has proposed a simpler version:
> > >
> > > <snip>
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 11fe5ea208aa..e1e63ffb9c57 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -1994,8 +1994,9 @@ static struct xarray *
> > >  addr_to_vb_xa(unsigned long addr)
> > >  {
> > >  	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > > +	int cpu = cpumask_nth(index, cpu_possible_mask);
> > > 
> > > -	return &per_cpu(vmap_block_queue, index).vmap_blocks;
> > > +	return &per_cpu(vmap_block_queue, cpu).vmap_blocks;
> > > <snip>
> > >
> > > which simply maps an index onto a CPU that is set in the cpu_possible_mask.
> > >
> > > The only thing that could be updated in the patch is to replace num_possible_cpus()
> > > with nr_cpu_ids.
> > >
> > > Any thoughts? I think we need to fix it with a minor change so it is
> > > easier to back-port to stable kernels.
> >
> > Yeah, sounds good since the regression commit was merged in v6.3.
> > Please feel free to post this and the hash array patch separately for
> > formal review.
> >
> Agreed! I will post the patch about the hash array later.
>
> > By the way, while replying to this mail, I checked cpumask_nth()
> > again. I suspect it may take more checking than cpu_possible(), given most
> > systems don't have gaps in cpu_possible_mask. I could be dizzy at
> > this moment.
> >
> > static inline unsigned int cpumask_nth(unsigned int cpu, const struct cpumask *srcp)
> > {
> > return find_nth_bit(cpumask_bits(srcp), small_cpumask_bits, cpumask_check(cpu));
> > }
> >
> Yep, I do not think it is a big problem based on the fact you noted.
>
Checked. There is a difference:
1. Default
<snip>
...
+ 15.95% 6.05% [kernel] [k] __vmap_pages_range_noflush
+ 15.91% 1.74% [kernel] [k] addr_to_vb_xa <---------------
+ 15.13% 12.05% [kernel] [k] vunmap_p4d_range
+ 14.17% 13.38% [kernel] [k] __find_nth_bit <--------------
+ 10.62% 0.00% [kernel] [k] ret_from_fork_asm
+ 10.62% 0.00% [kernel] [k] ret_from_fork
+ 10.62% 0.00% [kernel] [k] kthread
...
<snip>
2. Check cpu_possible() first, falling back to cpumask_nth() only if the index is not possible
<snip>
...
+ 6.84% 0.29% [kernel] [k] alloc_vmap_area
+ 6.80% 6.70% [kernel] [k] native_queued_spin_lock_slowpath
+ 4.24% 0.09% [kernel] [k] free_vmap_block
+ 2.41% 2.38% [kernel] [k] addr_to_vb_xa <-----------
+ 1.94% 1.91% [kernel] [k] xas_start
...
<snip>
It is _worth_ checking whether an index is in the possible mask first:
<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 45e1506d58c3..af20f78c2cbf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2542,7 +2542,10 @@ static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 static struct xarray *
 addr_to_vb_xa(unsigned long addr)
 {
-	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
+	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;
+
+	if (!cpu_possible(index))
+		index = cpumask_nth(index, cpu_possible_mask);
 
 	return &per_cpu(vmap_block_queue, index).vmap_blocks;
 }
<snip>
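
For illustration only, here is a minimal userspace sketch of how an index
that falls into a gap of the possible mask gets translated. The sparse mask
below (only CPUs 0, 1, 4 and 5 present, so nr_cpu_ids == 6) is made up, and
nth_possible() only approximates what cpumask_nth() does:

<snip>
#include <stdio.h>

#define NR_CPU_IDS	6

/* Made-up sparse "cpu_possible_mask": CPUs 2 and 3 are holes. */
static const int possible[NR_CPU_IDS] = { 1, 1, 0, 0, 1, 1 };

/* n-th set "bit" of the mask, counting from 0, like cpumask_nth(). */
static int nth_possible(int n)
{
	for (int cpu = 0; cpu < NR_CPU_IDS; cpu++)
		if (possible[cpu] && n-- == 0)
			return cpu;

	return NR_CPU_IDS;
}

int main(void)
{
	for (int index = 0; index < NR_CPU_IDS; index++) {
		int cpu = index;

		/* Fast path: most indexes already point to a possible CPU. */
		if (!possible[index])
			cpu = nth_possible(index);

		printf("index %d -> cpu %d\n", index, cpu);
	}

	return 0;
}
<snip>

With that mask, indexes 2 and 3 land on CPUs 4 and 5, so every per-CPU
access ends up on a possible CPU.
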
cpumask_nth() is not cheap. My measurements are based on a synthetic
tight test, which is why it detects a difference. In real workloads it
should not be visible. Having gaps is not a common case, and the "slow
path" is mitigated by hitting the cpu_possible() check first.
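
To show where the extra cost comes from, here is a rough userspace sketch of
the two paths; the word-walk below only approximates what __find_nth_bit()
has to do, while the fast path is a single bit test like cpu_possible():

<snip>
#include <stdint.h>
#include <stdio.h>

#define MASK_WORDS	2	/* assumed 128-bit mask, just for the sketch */

/* Fast path: one load plus one bit test. */
static int test_bit64(const uint64_t *mask, unsigned int bit)
{
	return (mask[bit / 64] >> (bit % 64)) & 1;
}

/* Slow path: walk the mask word by word, counting set bits. */
static unsigned int nth_bit64(const uint64_t *mask, unsigned int n)
{
	for (unsigned int w = 0; w < MASK_WORDS; w++) {
		unsigned int weight = __builtin_popcountll(mask[w]);

		if (n >= weight) {
			n -= weight;
			continue;
		}

		/* The n-th set bit is inside this word, peel bits off. */
		uint64_t word = mask[w];

		while (n--)
			word &= word - 1;	/* clear the lowest set bit */

		return w * 64 + __builtin_ctzll(word);
	}

	return MASK_WORDS * 64;	/* no such bit */
}

int main(void)
{
	/* Made-up sparse mask: CPUs 0, 1, 4 and 5. */
	const uint64_t possible[MASK_WORDS] = { 0x33, 0x0 };

	printf("possible(2) = %d, nth(2) = %u\n",
	       test_bit64(possible, 2), nth_bit64(possible, 2));

	return 0;
}
<snip>

The fast path touches a single word, whereas the n-th-bit lookup may scan
the whole mask, which is why __find_nth_bit() showed up in the first profile.
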
--
Uladzislau Rezki