Date: Fri, 21 Jun 2024 15:07:16 +0800
From: Baoquan He <bhe@...hat.com>
To: Hailong Liu <hailong.liu@...o.com>, Nick Bowler <nbowler@...conx.ca>
Cc: linux-kernel@...r.kernel.org,
	Linux regressions mailing list <regressions@...ts.linux.dev>,
	linux-mm@...ck.org, sparclinux@...r.kernel.org,
	"Uladzislau Rezki (Sony)" <urezki@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: PROBLEM: kernel crashes when running xfsdump since ~6.4

On 06/21/24 at 11:30am, Hailong Liu wrote:
> On Thu, 20. Jun 14:02, Nick Bowler wrote:
> > On 2024-06-20 02:19, Nick Bowler wrote:
> > > After upgrading my sparc to 6.9.5 I noticed that attempting to run
> > > xfsdump instantly (within a couple seconds) and reliably crashes the
> > > kernel.  The same problem is also observed on 6.10-rc4.
> > [...]
> > >   062eacf57ad91b5c272f89dc964fd6dd9715ea7d is the first bad commit
> > >   commit 062eacf57ad91b5c272f89dc964fd6dd9715ea7d
> > >   Author: Uladzislau Rezki (Sony) <urezki@...il.com>
> > >   Date:   Thu Mar 30 21:06:38 2023 +0200
> > >
> > >       mm: vmalloc: remove a global vmap_blocks xarray
> >
> > I think I might see what is happening here.
> >
> > On this machine, there are two CPUs numbered 0 and 2 (there is no CPU1).
> >
> +Baoquan

Thanks for adding me, Hailong.

> 
> Ahh, I think you are right: addr_to_vb_xa() assumes that the CPU numbers are
> contiguous. I don't know much about CPU topology.
> Technically, changing the implementation of addr_to_vb_xa() to
> return &per_cpu(vmap_block_queue, raw_smp_processor_id()).vmap_blocks;
> would also work, but it would hurt the load balancing. Waiting for
> the experts to reply.

Yeah, I think it's as you explained.
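
Just to spell out the alternative you quoted (only a sketch, I haven't
tested it): it indexes by the executing CPU instead of hashing the
address, so it does not assume contiguous CPU numbers, but as you say
it gives up the address-based spreading across the per-CPU xarrays.

static struct xarray *
addr_to_vb_xa(unsigned long addr)
{
	/* Index by the current CPU rather than by the address. */
	return &per_cpu(vmap_block_queue, raw_smp_processor_id()).vmap_blocks;
}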

> 
> > The per-cpu variables in mm/vmalloc.c are initialized like this, in
> > vmalloc_init
> >
> >   for_each_possible_cpu(i) {
> >     /* ... */
> >     vbq = &per_cpu(vmap_block_queue, i);
> >     /* initialize stuff in vbq */
> >   }
> >
> > This loops over the set bits of cpu_possible_mask, bits 0 and 2 are set,
> > so it initializes stuff with i=0 and i=2, skipping i=1 (I added prints to
> > confirm this).
> >
> > Then, in vm_map_ram, with the problematic change it calls the new
> > function addr_to_vb_xa, which does this:
> >
> >   int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
> >   return &per_cpu(vmap_block_queue, index).vmap_blocks;
> >
> > The num_possible_cpus() function counts the number of set bits in
> > cpu_possible_mask, so it returns 2.  Thus, index is either 0 or 1, which
> > does not correspond to what was initialized (0 or 2).  The crash occurs
> > when the computed index is 1 in this function.  In this case, the
> > returned value appears to be garbage (I added prints to confirm this).

This is a great catch. 
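
To make the mismatch concrete, here is a small userspace sketch (not
kernel code) assuming possible CPUs 0 and 2 as on Nick's machine, with
a plain array standing in for the per-CPU vmap_blocks xarrays:

#include <stdio.h>

#define NR_CPU_IDS	3			/* highest possible CPU number + 1 */

static const int possible_cpus[] = { 0, 2 };	/* set bits of cpu_possible_mask */
static int initialized[NR_CPU_IDS];		/* stands in for per-CPU vmap_blocks */

int main(void)
{
	/* vmalloc_init(): for_each_possible_cpu() only touches slots 0 and 2 */
	for (unsigned int i = 0; i < sizeof(possible_cpus) / sizeof(possible_cpus[0]); i++)
		initialized[possible_cpus[i]] = 1;

	/* addr_to_vb_xa(): the current hash divides by num_possible_cpus() == 2 */
	for (unsigned long block = 0; block < 4; block++) {
		int index = block % 2;		/* 0 or 1, never 2 */
		printf("block %lu -> slot %d (%s)\n", block, index,
		       initialized[index] ? "initialized" : "never initialized");
	}
	return 0;
}

Every second block lands on slot 1, which the init loop never set up,
matching the garbage value Nick saw.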

> >
> > If I change addr_to_vb_xa function to this:
> >
> >   int index = ((addr / VMAP_BLOCK_SIZE) & 1) << 1; /* 0 or 2 */
> >   return &per_cpu(vmap_block_queue, index).vmap_blocks;

Yeah, though the above change is not generic, e.g. if the possible CPUs are
CPU0 and CPU3. I think we should take the max possible CPU number
(nr_cpu_ids, which is 3 on Nick's machine) as the hash bucket size
instead. The vb->va is still allocated from the global free_vmap_area,
so there is no need to worry about wasting memory on the extra buckets.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index be2dd281ea76..18e87cafbaf2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2542,7 +2542,7 @@ static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 static struct xarray *
 addr_to_vb_xa(unsigned long addr)
 {
-	int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
+	int index = (addr / VMAP_BLOCK_SIZE) % nr_cpu_ids;
 
 	return &per_cpu(vmap_block_queue, index).vmap_blocks;
 }

