Message-ID: <aGVJq4zcQh550cRQ@pc638.lan>
Date: Wed, 2 Jul 2025 17:00:59 +0200
From: Uladzislau Rezki <urezki@...il.com>
To: Jeongjun Park <aha310510@...il.com>
Cc: Uladzislau Rezki <urezki@...il.com>, akpm@...ux-foundation.org,
edumazet@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6] mm/vmalloc: fix data race in show_numa_info()
On Wed, Jul 02, 2025 at 11:01:23PM +0900, Jeongjun Park wrote:
> Uladzislau Rezki <urezki@...il.com> wrote:
> >
> > On Wed, Jul 02, 2025 at 01:01:13AM +0900, Jeongjun Park wrote:
> > > Uladzislau Rezki <urezki@...il.com> wrote:
> > > >
> > > > On Thu, May 08, 2025 at 07:48:55PM +0200, Uladzislau Rezki wrote:
> > > > > On Fri, May 09, 2025 at 01:56:20AM +0900, Jeongjun Park wrote:
> > > > > > The following data-race was found in show_numa_info():
> > > > > >
> > > > > > ==================================================================
> > > > > > BUG: KCSAN: data-race in vmalloc_info_show / vmalloc_info_show
> > > > > >
> > > > > > read to 0xffff88800971fe30 of 4 bytes by task 8289 on cpu 0:
> > > > > > show_numa_info mm/vmalloc.c:4936 [inline]
> > > > > > vmalloc_info_show+0x5a8/0x7e0 mm/vmalloc.c:5016
> > > > > > seq_read_iter+0x373/0xb40 fs/seq_file.c:230
> > > > > > proc_reg_read_iter+0x11e/0x170 fs/proc/inode.c:299
> > > > > > ....
> > > > > >
> > > > > > write to 0xffff88800971fe30 of 4 bytes by task 8287 on cpu 1:
> > > > > > show_numa_info mm/vmalloc.c:4934 [inline]
> > > > > > vmalloc_info_show+0x38f/0x7e0 mm/vmalloc.c:5016
> > > > > > seq_read_iter+0x373/0xb40 fs/seq_file.c:230
> > > > > > proc_reg_read_iter+0x11e/0x170 fs/proc/inode.c:299
> > > > > > ....
> > > > > >
> > > > > > value changed: 0x0000008f -> 0x00000000
> > > > > > ==================================================================
> > > > > >
> > > > > > According to this report, there is a read/write data race because
> > > > > > m->private is accessible to multiple CPUs. To fix this, instead of
> > > > > > allocating the buffer in proc_vmalloc_init() and passing its address
> > > > > > via m->private, vmalloc_info_show() should allocate the buffer itself.
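
For illustration, here is a minimal userspace analogue of the shared-buffer
race and of the per-call-allocation fix. The pthread sketch below is
hypothetical and is not the kernel code; it only mirrors the m->private
sharing described above (NR_NODES stands in for nr_node_ids):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NR_NODES 4

/* One buffer set up at init time and shared by every reader,
 * mirroring the old m->private scheme. */
static unsigned int *shared_counters;

static void fill_counters(unsigned int *counters)
{
	memset(counters, 0, NR_NODES * sizeof(unsigned int));
	for (int i = 0; i < NR_NODES; i++)
		counters[i]++;		/* concurrent readers race here */
}

static void *racy_reader(void *arg)
{
	(void)arg;
	fill_counters(shared_counters);	/* both threads write the same memory */
	return NULL;
}

static void *fixed_reader(void *arg)
{
	(void)arg;
	/* Per-call buffer, analogous to the kmalloc() in vmalloc_info_show(). */
	unsigned int *counters = malloc(NR_NODES * sizeof(unsigned int));

	if (counters) {
		fill_counters(counters);
		free(counters);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	shared_counters = malloc(NR_NODES * sizeof(unsigned int));
	pthread_create(&t[0], NULL, racy_reader, NULL);	/* data race, as in the KCSAN report */
	pthread_create(&t[1], NULL, racy_reader, NULL);
	pthread_create(&t[2], NULL, fixed_reader, NULL);	/* no sharing, no race */
	pthread_create(&t[3], NULL, fixed_reader, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	free(shared_counters);
	return 0;
}
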
> > > > > >
> > > > > > Fixes: a47a126ad5ea ("vmallocinfo: add NUMA information")
> > > > > > Suggested-by: Eric Dumazet <edumazet@...gle.com>
> > > > > > Suggested-by: Uladzislau Rezki (Sony) <urezki@...il.com>
> > > > > > Suggested-by: Andrew Morton <akpm@...ux-foundation.org>
> > > > > > Signed-off-by: Jeongjun Park <aha310510@...il.com>
> > > > > > ---
> > > > > > v6: Check CONFIG_NUMA before doing any work related to the counters array.
> > > > > > - Link to v5: https://lore.kernel.org/all/20250508160800.12540-1-aha310510@gmail.com/
> > > > > > v5: Allocate the counters buffer only when CONFIG_NUMA is enabled
> > > > > > - Link to v4: https://lore.kernel.org/all/20250508065558.149091-1-aha310510@gmail.com/
> > > > > > v4: Change the way the counters array is allocated, per Andrew Morton's
> > > > > > suggestion, and call smp_rmb() in the correct location.
> > > > > > - Link to v3: https://lore.kernel.org/all/20250507142552.9446-1-aha310510@gmail.com/
> > > > > > v3: Following Uladzislau Rezki's suggestion, we check v->flags beforehand
> > > > > > to avoid printing uninitialized members of vm_struct.
> > > > > > - Link to v2: https://lore.kernel.org/all/20250506082520.84153-1-aha310510@gmail.com/
> > > > > > v2: Refactor some functions and fix the patch as per Eric Dumazet's suggestion
> > > > > > - Link to v1: https://lore.kernel.org/all/20250505171948.24410-1-aha310510@gmail.com/
> > > > > > ---
> > > > > > mm/vmalloc.c | 63 +++++++++++++++++++++++++++++-----------------------
> > > > > > 1 file changed, 35 insertions(+), 28 deletions(-)
> > > > > >
> > > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > > > > index 3ed720a787ec..c1ea9713a1c0 100644
> > > > > > --- a/mm/vmalloc.c
> > > > > > +++ b/mm/vmalloc.c
> > > > > > @@ -3100,7 +3100,7 @@ static void clear_vm_uninitialized_flag(struct vm_struct *vm)
> > > > > > /*
> > > > > > * Before removing VM_UNINITIALIZED,
> > > > > > * we should make sure that vm has proper values.
> > > > > > - * Pair with smp_rmb() in show_numa_info().
> > > > > > + * Pair with smp_rmb() in vread_iter() and vmalloc_info_show().
> > > > > > */
> > > > > > smp_wmb();
> > > > > > vm->flags &= ~VM_UNINITIALIZED;
> > > > > > @@ -4914,28 +4914,29 @@ bool vmalloc_dump_obj(void *object)
> > > > > > #endif
> > > > > >
> > > > > > #ifdef CONFIG_PROC_FS
> > > > > > -static void show_numa_info(struct seq_file *m, struct vm_struct *v)
> > > > > > -{
> > > > > > - if (IS_ENABLED(CONFIG_NUMA)) {
> > > > > > - unsigned int nr, *counters = m->private;
> > > > > > - unsigned int step = 1U << vm_area_page_order(v);
> > > > > >
> > > > > > - if (!counters)
> > > > > > - return;
> > > > > > +/*
> > > > > > + * Print number of pages allocated on each memory node.
> > > > > > + *
> > > > > > + * This function can only be called if CONFIG_NUMA is enabled
> > > > > > + * and VM_UNINITIALIZED bit in v->flags is disabled.
> > > > > > + */
> > > > > > +static void show_numa_info(struct seq_file *m, struct vm_struct *v,
> > > > > > + unsigned int *counters)
> > > > > > +{
> > > > > > + unsigned int nr;
> > > > > > + unsigned int step = 1U << vm_area_page_order(v);
> > > > > >
> > > > > > - if (v->flags & VM_UNINITIALIZED)
> > > > > > - return;
> > > > > > - /* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
> > > > > > - smp_rmb();
> > > > > > + if (!counters)
> > > > > > + return;
> > > > > >
> > > > > > - memset(counters, 0, nr_node_ids * sizeof(unsigned int));
> > > > > > + memset(counters, 0, nr_node_ids * sizeof(unsigned int));
> > > > > >
> > > > > > - for (nr = 0; nr < v->nr_pages; nr += step)
> > > > > > - counters[page_to_nid(v->pages[nr])] += step;
> > > > > > - for_each_node_state(nr, N_HIGH_MEMORY)
> > > > > > - if (counters[nr])
> > > > > > - seq_printf(m, " N%u=%u", nr, counters[nr]);
> > > > > > - }
> > > > > > + for (nr = 0; nr < v->nr_pages; nr += step)
> > > > > > + counters[page_to_nid(v->pages[nr])] += step;
> > > > > > + for_each_node_state(nr, N_HIGH_MEMORY)
> > > > > > + if (counters[nr])
> > > > > > + seq_printf(m, " N%u=%u", nr, counters[nr]);
> > > > > > }
> > > > > >
> > > > > > static void show_purge_info(struct seq_file *m)
> > > > > > @@ -4962,8 +4963,12 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> > > > > > struct vmap_node *vn;
> > > > > > struct vmap_area *va;
> > > > > > struct vm_struct *v;
> > > > > > + unsigned int *counters;
> > > > > > int i;
> > > > > >
> > > > > > + if (IS_ENABLED(CONFIG_NUMA))
> > > > > > + counters = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
> > > > > > +
> > > > > > for (i = 0; i < nr_vmap_nodes; i++) {
> > > > > > vn = &vmap_nodes[i];
> > > > > >
> > > > > > @@ -4979,6 +4984,11 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> > > > > > }
> > > > > >
> > > > > > v = va->vm;
> > > > > > + if (v->flags & VM_UNINITIALIZED)
> > > > > > + continue;
> > > > > > +
> > > > > > + /* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
> > > > > > + smp_rmb();
> > > > > >
> > > > > > seq_printf(m, "0x%pK-0x%pK %7ld",
> > > > > > v->addr, v->addr + v->size, v->size);
> > > > > > @@ -5013,7 +5023,9 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> > > > > > if (is_vmalloc_addr(v->pages))
> > > > > > seq_puts(m, " vpages");
> > > > > >
> > > > > > - show_numa_info(m, v);
> > > > > > + if (IS_ENABLED(CONFIG_NUMA))
> > > > > > + show_numa_info(m, v, counters);
> > > > > > +
> > > > > > seq_putc(m, '\n');
> > > > > > }
> > > > > > spin_unlock(&vn->busy.lock);
> > > > > > @@ -5023,19 +5035,14 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> > > > > > * As a final step, dump "unpurged" areas.
> > > > > > */
> > > > > > show_purge_info(m);
> > > > > > + if (IS_ENABLED(CONFIG_NUMA))
> > > > > > + kfree(counters);
> > > > > > return 0;
> > > > > > }
> > > > > >
> > > > > > static int __init proc_vmalloc_init(void)
> > > > > > {
> > > > > > - void *priv_data = NULL;
> > > > > > -
> > > > > > - if (IS_ENABLED(CONFIG_NUMA))
> > > > > > - priv_data = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
> > > > > > -
> > > > > > - proc_create_single_data("vmallocinfo",
> > > > > > - 0400, NULL, vmalloc_info_show, priv_data);
> > > > > > -
> > > > > > + proc_create_single("vmallocinfo", 0400, NULL, vmalloc_info_show);
> > > > > > return 0;
> > > > > > }
> > > > > > module_init(proc_vmalloc_init);
> > > > > > --
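
As an aside, the VM_UNINITIALIZED publication pattern the patch relies on
(writer: fill fields, smp_wmb(), clear flag; reader: check flag, smp_rmb(),
read fields) can be sketched in userspace with C11 fences standing in for
the kernel barriers. This is an analogy only; the struct and function names
below are made up:

#include <stdatomic.h>
#include <stdbool.h>

struct vm_like {
	unsigned long size;		/* plain field, published via the flag */
	atomic_bool uninitialized;
};

/* Writer side: fill in the fields, then publish by clearing the flag,
 * as clear_vm_uninitialized_flag() does after smp_wmb(). */
void publish(struct vm_like *vm, unsigned long size)
{
	vm->size = size;
	atomic_thread_fence(memory_order_release);	/* like smp_wmb() */
	atomic_store_explicit(&vm->uninitialized, false, memory_order_relaxed);
}

/* Reader side: skip unpublished objects; otherwise order the flag check
 * before the field reads, as the VM_UNINITIALIZED check + smp_rmb() does. */
bool read_size(struct vm_like *vm, unsigned long *out)
{
	if (atomic_load_explicit(&vm->uninitialized, memory_order_relaxed))
		return false;
	atomic_thread_fence(memory_order_acquire);	/* like smp_rmb() */
	*out = vm->size;
	return true;
}
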
> > > > > LGTM:
> > > > >
> > > > > Reviewed-by: "Uladzislau Rezki (Sony)" <urezki@...il.com>
> > > > >
> > > > > Thank you!
> > > > >
> > > > Also, one thing I have just noticed: the Fixes tag should be updated to
> > > > this: Fixes: 8e1d743f2c26 ("mm: vmalloc: support multiple nodes in vmallocinfo")
> > > >
> > > > The one below should not be blamed:
> > > >
> > > > Fixes: a47a126ad5ea ("vmallocinfo: add NUMA information")
> > > >
> > > > Thanks!
> > > >
> > > > --
> > > > Uladzislau Rezki
> > >
> > > I've been looking through the stable trees, and it seems this issue
> > > still exists there, so I think this patch needs to be
> > > backported.
> > >
> > > What do you think?
> > >
> > If stable does not have it, we should backport it. But it has to be
> > done only for those kernels where the vmap-node logic is in place; it was
> > introduced starting from the v6.9 kernel version.
> >
>
> Then I guess I should backport it to 6.12 and 6.15. Thanks for letting me know!
>
You are welcome!
--
Uladzislau Rezki