Message-ID: <20240626051206.mx2r4iy3wpexykay@oppo.com>
Date: Wed, 26 Jun 2024 13:12:06 +0800
From: Hailong Liu <hailong.liu@...o.com>
To: Uladzislau Rezki <urezki@...il.com>
CC: Baoquan He <bhe@...hat.com>, Nick Bowler <nbowler@...conx.ca>,
<linux-kernel@...r.kernel.org>, Linux regressions mailing list
<regressions@...ts.linux.dev>, <linux-mm@...ck.org>,
<sparclinux@...r.kernel.org>, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: PROBLEM: kernel crashes when running xfsdump since ~6.4
On Tue, 25. Jun 22:05, Uladzislau Rezki wrote:
> > > > > > /**
> > > > > > * cpumask_next - get the next cpu in a cpumask
> > > > > > * @n: the cpu prior to the place to search (i.e. return will be > @n)
> > > > > > * @srcp: the cpumask pointer
> > > > > > *
> > > > > > * Return: >= nr_cpu_ids if no further cpus set.
> > > > >
> > > > > Ah, I got what you mean. In the vbq case, it may never get
> > > > > nr_cpu_ids as the return value, because the hashed index limits
> > > > > the range to [0, nr_cpu_ids-1], and the cpu_possible(index) check
> > > > > guarantees index won't be the highest cpu number nr_cpu_ids-1,
> > > > > since CPU[nr_cpu_ids-1] must be a possible CPU.
> > > > >
> > > > > Do I miss some corner cases?
> > > > >
> > > > Right. We guarantee that the highest CPU is available by doing
> > > > % nr_cpu_ids, so we do not need the *next_wrap() variant. You do
> > > > not miss anything :)
> > > >
> > > > Hailong Liu has proposed a simpler version:
> > > >
> > > > <snip>
> > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > index 11fe5ea208aa..e1e63ffb9c57 100644
> > > > --- a/mm/vmalloc.c
> > > > +++ b/mm/vmalloc.c
> > > > @@ -1994,8 +1994,9 @@ static struct xarray *
> > > >  addr_to_vb_xa(unsigned long addr)
> > > >  {
> > > >  	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> > > > +	int cpu = cpumask_nth(index, cpu_possible_mask);
> > > >  
> > > > -	return &per_cpu(vmap_block_queue, index).vmap_blocks;
> > > > +	return &per_cpu(vmap_block_queue, cpu).vmap_blocks;
> > > > <snip>
> > > >
> > > > which just takes the next possible CPU if an index is not set in the cpu_possible_mask.
> > > >
> > > > The only thing that could be updated in the patch is to replace
> > > > num_possible_cpus() with nr_cpu_ids.
> > > >
> > > > Any thoughts? I think we need to fix it with a minor change so it
> > > > is easier to back-port to stable kernels.
> > >
> > > Yeah, sounds good since the regression commit was merged in v6.3.
> > > Please feel free to post this and the hash array patch separately
> > > for formal review.
> > >
> > Agreed! I will post the hash array patch later.
> >
> > > By the way, while replying to this mail, I checked cpumask_nth()
> > > again. I suspect it may take more checking than cpu_possible(),
> > > given that most systems don't have gaps in cpu_possible_mask. I
> > > could be dizzy at this moment.
> > >
> > > static inline unsigned int cpumask_nth(unsigned int cpu, const struct cpumask *srcp)
> > > {
> > > 	return find_nth_bit(cpumask_bits(srcp), small_cpumask_bits, cpumask_check(cpu));
> > > }
> > >
> > Yep, I do not think it is a big problem, based on the fact you noted.
> >
> Checked. There is a difference:
>
> 1. Default
>
> <snip>
> ...
> + 15.95% 6.05% [kernel] [k] __vmap_pages_range_noflush
> + 15.91% 1.74% [kernel] [k] addr_to_vb_xa <---------------
> + 15.13% 12.05% [kernel] [k] vunmap_p4d_range
> + 14.17% 13.38% [kernel] [k] __find_nth_bit <--------------
> + 10.62% 0.00% [kernel] [k] ret_from_fork_asm
> + 10.62% 0.00% [kernel] [k] ret_from_fork
> + 10.62% 0.00% [kernel] [k] kthread
> ...
> <snip>
>
> 2. Check if cpu_possible() and then fallback to cpumask_nth() if not
>
> <snip>
> ...
> + 6.84% 0.29% [kernel] [k] alloc_vmap_area
> + 6.80% 6.70% [kernel] [k] native_queued_spin_lock_slowpath
> + 4.24% 0.09% [kernel] [k] free_vmap_block
> + 2.41% 2.38% [kernel] [k] addr_to_vb_xa <-----------
> + 1.94% 1.91% [kernel] [k] xas_start
> ...
> <snip>
>
> It is _worth_ checking whether an index is in the possible mask:
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 45e1506d58c3..af20f78c2cbf 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2542,7 +2542,10 @@ static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
>  static struct xarray *
>  addr_to_vb_xa(unsigned long addr)
>  {
> -	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> +	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;
IIUC, using nr_cpu_ids here may be incorrect when cpumask_nth() is then
applied unconditionally. Take the possible mask 0b101 as an example:
nr_cpu_ids is 3, so index can be 2, but the mask has only two set bits,
and cpumask_nth(2, cpu_possible_mask) might return 64.
/**
 * cpumask_nth_and - get the first cpu in 2 cpumasks
 * @srcp1: the cpumask pointer
 * @srcp2: the cpumask pointer
 * @cpu: the N'th cpu to find, starting from 0 <--- N'th cpu
 *
 * Returns >= nr_cpu_ids if such cpu doesn't exist. <-----
 */
static inline
unsigned int cpumask_nth_and(unsigned int cpu, const struct cpumask *srcp1,
			     const struct cpumask *srcp2)
{
	return find_nth_and_bit(cpumask_bits(srcp1), cpumask_bits(srcp2),
				nr_cpumask_bits, cpumask_check(cpu));
}
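
To make the return value concrete, here is a small userspace model (not
kernel code; find_nth_bit() below is a simplified stand-in for the
lib/find_bit.c implementation):

<snip>
#include <stdio.h>

/*
 * Return the position of the N'th set bit (counting from 0), or
 * `size` if there is no such bit -- the ">= nr_cpu_ids" case above.
 */
static unsigned int find_nth_bit(unsigned long mask, unsigned int size,
				 unsigned int n)
{
	for (unsigned int bit = 0; bit < size; bit++)
		if ((mask >> bit & 1) && n-- == 0)
			return bit;
	return size;	/* no N'th set bit */
}

int main(void)
{
	unsigned long possible = 0x5;	/* 0b101: CPUs 0 and 2 possible */

	for (unsigned int n = 0; n < 3; n++)
		printf("cpumask_nth(%u) -> %u\n", n,
		       find_nth_bit(possible, 64, n));
	/* prints 0, 2 and then 64: n == 2 falls off the mask */
	return 0;
}
<snip>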
I use num_possible_cpus() and cpumask_nth() here to distribute the
addresses evenly across the possible CPUs. If we use nr_cpu_ids with
cpumask_next(index) or cpumask_nth(index, cpu_possible_mask) instead,
the mapping becomes as follows:
CPU_0  CPU_2  CPU_2
  |      |      |
  V      V      V
  0      10     20     30     40     50     60
  |------|------|------|------|------|------|..
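
The same 0b101 example as a userspace sketch (the helpers below are
simplified models of the kernel APIs, not the real implementations):

<snip>
#include <stdio.h>

#define NR_CPU_IDS		3
#define NUM_POSSIBLE_CPUS	2

static const unsigned long possible_mask = 0x5;	/* 0b101: CPUs 0, 2 */

static int cpu_possible(unsigned int cpu)
{
	return possible_mask >> cpu & 1;
}

static unsigned int cpumask_nth(unsigned int n)	/* N'th set bit, from 0 */
{
	for (unsigned int bit = 0; bit < NR_CPU_IDS; bit++)
		if (cpu_possible(bit) && n-- == 0)
			return bit;
	return NR_CPU_IDS;	/* not found */
}

int main(void)
{
	/* A: dense index 0..1, always remapped through cpumask_nth() */
	for (unsigned int i = 0; i < NUM_POSSIBLE_CPUS; i++)
		printf("A: block %u -> CPU %u\n", i, cpumask_nth(i));

	/* B: index 0..2, remapped only when the id is not possible */
	for (unsigned int i = 0; i < NR_CPU_IDS; i++)
		printf("B: block %u -> CPU %u\n", i,
		       cpu_possible(i) ? i : cpumask_nth(i));

	/* A: 0->0, 1->2 (even). B: 0->0, 1->2, 2->2 (CPU 2 gets 2/3) */
	return 0;
}
<snip>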
> +
> +	if (!cpu_possible(index))
> +		index = cpumask_nth(index, cpu_possible_mask);
>  
>  	return &per_cpu(vmap_block_queue, index).vmap_blocks;
>  }
>
> cpumask_nth() is not cheap. My measurements are based on a synthetic
> tight test, and it detects a difference. In a real workload it should
> not be visible. Having gaps is not a common case, plus the "slow path"
> will be mitigated by the hit against the possible mask.
If cpumask_nth() is not cheap or causes a performance regression, perhaps
we can use the solution suggested by Baoquan. I've drafted it as follows:
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 11fe5ea208aa..355dbfdf51f1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -92,6 +92,7 @@ struct vfree_deferred {
 	struct work_struct wq;
 };
 static DEFINE_PER_CPU(struct vfree_deferred, vfree_deferred);
+static unsigned int *table_non_seq_cpu;
 
 /*** Page table manipulation functions ***/
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
@@ -1995,6 +1996,10 @@ addr_to_vb_xa(unsigned long addr)
 {
 	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
 
+	/* recalculate the cpuid if cpumask is not full. */
+	if (table_non_seq_cpu)
+		index = table_non_seq_cpu[index];
+
 	return &per_cpu(vmap_block_queue, index).vmap_blocks;
 }
 
@@ -4473,17 +4478,25 @@ void __init vmalloc_init(void)
 {
 	struct vmap_area *va;
 	struct vm_struct *tmp;
-	int i;
+	int i, inx = 0;
 
 	/*
 	 * Create the cache for vmap_area objects.
 	 */
 	vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);
 
+	if (!cpumask_full(cpu_possible_mask)) {
+		table_non_seq_cpu = kzalloc(num_possible_cpus() * sizeof(unsigned int),
+					    GFP_NOWAIT);
+		BUG_ON(!table_non_seq_cpu);
+	}
+
 	for_each_possible_cpu(i) {
 		struct vmap_block_queue *vbq;
 		struct vfree_deferred *p;
+		if (table_non_seq_cpu)
+			table_non_seq_cpu[inx++] = i;
 
 		vbq = &per_cpu(vmap_block_queue, i);
 		spin_lock_init(&vbq->lock);
 		INIT_LIST_HEAD(&vbq->free);
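
For illustration, a userspace model of this table approach (same 0b101
example; a sketch, not the kernel draft above) showing that each lookup
becomes a single array read, with no bit search at all:

<snip>
#include <stdio.h>
#include <stdlib.h>

#define NR_CPU_IDS	3

static const unsigned long possible_mask = 0x5;	/* 0b101: CPUs 0, 2 */
static unsigned int *table_non_seq_cpu;
static unsigned int num_possible;

/* Fill the dense index -> possible-cpu-id table once at init time. */
static void init_table(void)
{
	unsigned int i, inx = 0;

	for (i = 0; i < NR_CPU_IDS; i++)
		if (possible_mask >> i & 1)
			num_possible++;

	table_non_seq_cpu = calloc(num_possible, sizeof(unsigned int));
	for (i = 0; i < NR_CPU_IDS; i++)	/* for_each_possible_cpu() */
		if (possible_mask >> i & 1)
			table_non_seq_cpu[inx++] = i;
}

int main(void)
{
	init_table();
	for (unsigned int index = 0; index < num_possible; index++)
		printf("index %u -> CPU %u\n", index, table_non_seq_cpu[index]);
	/* prints: index 0 -> CPU 0, index 1 -> CPU 2 */
	return 0;
}
<snip>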
>
> --
> Uladzislau Rezki
--
help you, help me,
Hailong.