Message-ID: <ZnlkmkDAi2CtgwDF@pc636>
Date: Mon, 24 Jun 2024 14:20:42 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Nick Bowler <nbowler@...conx.ca>
Cc: linux-kernel@...r.kernel.org,
Linux regressions mailing list <regressions@...ts.linux.dev>,
linux-mm@...ck.org, sparclinux@...r.kernel.org,
"Uladzislau Rezki (Sony)" <urezki@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: PROBLEM: kernel crashes when running xfsdump since ~6.4
> On 2024-06-20 02:19, Nick Bowler wrote:
> > After upgrading my sparc to 6.9.5 I noticed that attempting to run
> > xfsdump instantly (within a couple seconds) and reliably crashes the
> > kernel. The same problem is also observed on 6.10-rc4.
> [...]
> > 062eacf57ad91b5c272f89dc964fd6dd9715ea7d is the first bad commit
> > commit 062eacf57ad91b5c272f89dc964fd6dd9715ea7d
> > Author: Uladzislau Rezki (Sony) <urezki@...il.com>
> > Date: Thu Mar 30 21:06:38 2023 +0200
> >
> > mm: vmalloc: remove a global vmap_blocks xarray
>
> I think I might see what is happening here.
>
> On this machine, there are two CPUs numbered 0 and 2 (there is no CPU1).
>
> The per-cpu variables in mm/vmalloc.c are initialized like this, in
> vmalloc_init:
>
>     for_each_possible_cpu(i) {
>             /* ... */
>             vbq = &per_cpu(vmap_block_queue, i);
>             /* initialize stuff in vbq */
>     }
>
> This loops over the set bits of cpu_possible_mask; bits 0 and 2 are set,
> so it initializes stuff with i=0 and i=2, skipping i=1 (I added prints to
> confirm this).
>
> Then, in vm_map_ram, with the problematic change it calls the new
> function addr_to_vb_xa, which does this:
>
>     int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();
>     return &per_cpu(vmap_block_queue, index).vmap_blocks;
>
> The num_possible_cpus() function counts the number of set bits in
> cpu_possible_mask, so it returns 2. Thus, index is either 0 or 1, which
> does not correspond to what was initialized (0 or 2). The crash occurs
> when the computed index is 1 in this function. In this case, the
> returned value appears to be garbage (I added prints to confirm this).
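>
> The mismatch can be modelled in a few lines of userspace C. This is a
> hypothetical sketch, not kernel code: NR_SLOTS, the hard-coded mask, and
> the helper names are invented to mirror the two-CPU sparc machine
> described above.
>
> ```c
> #include <assert.h>
> #include <stdbool.h>
>
> /* Model: possible CPUs are 0 and 2; bit 1 is a gap in the mask. */
> #define NR_SLOTS 3
> static const bool cpu_possible[NR_SLOTS] = { true, false, true };
>
> static int num_possible_cpus(void)
> {
> 	int i, n = 0;
>
> 	for (i = 0; i < NR_SLOTS; i++)
> 		if (cpu_possible[i])
> 			n++;
> 	return n;
> }
>
> /* Returns true when the hash computed as in addr_to_vb_xa() can hit
>  * a slot that the for_each_possible_cpu()-style loop never touched. */
> static bool hash_can_miss_initialized_slot(void)
> {
> 	bool initialized[NR_SLOTS] = { false };
> 	int i, index;
>
> 	for (i = 0; i < NR_SLOTS; i++)	/* for_each_possible_cpu() */
> 		if (cpu_possible[i])
> 			initialized[i] = true;
>
> 	/* the hash ranges over 0..num_possible_cpus()-1 = {0, 1} */
> 	for (index = 0; index < num_possible_cpus(); index++)
> 		if (!initialized[index])
> 			return true;	/* index 1 was never initialized */
> 	return false;
> }
>
> int main(void)
> {
> 	assert(num_possible_cpus() == 2);
> 	assert(hash_can_miss_initialized_slot());
> 	return 0;
> }
> ```
>
> Running it confirms that a modulo over num_possible_cpus() produces
> index 1, which the initialization loop skipped.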
>
> If I change addr_to_vb_xa function to this:
>
>     int index = ((addr / VMAP_BLOCK_SIZE) & 1) << 1; /* 0 or 2 */
>     return &per_cpu(vmap_block_queue, index).vmap_blocks;
>
> xfsdump is working again.
>
Could you please test the patch below?
<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5d3aa2dc88a8..1733946f7a12 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -5087,7 +5087,13 @@ void __init vmalloc_init(void)
 	 */
 	vmap_area_cachep = KMEM_CACHE(vmap_area, SLAB_PANIC);

-	for_each_possible_cpu(i) {
+	/*
+	 * Use "nr_cpu_ids" here because some architectures
+	 * may have "gaps" in cpu_possible_mask. That is fine
+	 * for per-CPU iteration, but not when the CPU index
+	 * is also used as a hash into per-CPU data.
+	 */
+	for (i = 0; i < nr_cpu_ids; i++) {
 		struct vmap_block_queue *vbq;
 		struct vfree_deferred *p;
<snip>
Thank you in advance, and I really appreciate your finding this
issue!
--
Uladzislau Rezki