Message-ID: <Z5jK3juSZcDYc7bA@pc636>
Date: Tue, 28 Jan 2025 13:17:34 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Uladzislau Rezki <urezki@...il.com>, Boqun Feng <boqun.feng@...il.com>,
	RCU <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
	Frederic Weisbecker <frederic@...nel.org>,
	Cheung Wall <zzqq0103.hey@...il.com>,
	Neeraj upadhyay <Neeraj.Upadhyay@....com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>
Subject: Re: [PATCH 2/4] torture: Remove CONFIG_NR_CPUS configuration

> > > with 4 CPUs inside VM :)
> > > 
> > And when running 16 instances with 4 CPUs each, I can reproduce the
> > splat which has been reported:
> > 
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --configs \
> >   '16*TREE05' --memory 10G --bootargs 'rcutorture.fwd_progress=1' \
> >   --kconfig "CONFIG_NR_CPUS=4"
> > 
> > <snip>
> > ...
> > [    0.595251] ------------[ cut here ]------------
> > [    0.595867] A full grace period is not passed yet: 0
> > [    0.595875] WARNING: CPU: 1 PID: 16 at kernel/rcu/tree.c:1617 rcu_sr_normal_complete+0xa9/0xc0
> > [    0.598248] Modules linked in:
> > [    0.598649] CPU: 1 UID: 0 PID: 16 Comm: rcu_preempt Not tainted 6.13.0-02530-g8950af6a11ff #261
> > [    0.599248] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
> > [    0.600248] RIP: 0010:rcu_sr_normal_complete+0xa9/0xc0
> > [    0.600913] Code: 48 29 c2 48 8d 04 0a ba 03 00 00 00 48 39 c2 79 0c 48 83 e8 04 48 c1 e8 02 48 8d 70 02 48 c7 c7 20 e9 33 b5 e8 d8 03 f4 ff 90 <0f> 0b 90 90 48 8d 7b 10 5b e9 f9 38 fb ff 66 0f 1f 84 00 00 00 00
> > [    0.603249] RSP: 0018:ffffadad0008be60 EFLAGS: 00010282
> > [    0.603925] RAX: 0000000000000000 RBX: ffffadad00013d10 RCX: 00000000ffffdfff
> > [    0.605247] RDX: 0000000000000000 RSI: ffffadad0008bd10 RDI: 0000000000000001
> > [    0.606247] RBP: 0000000000000000 R08: 0000000000009ffb R09: 00000000ffffdfff
> > [    0.607248] R10: 00000000ffffdfff R11: ffffffffb56789a0 R12: 0000000000000005
> > [    0.608247] R13: 0000000000031a40 R14: fffffffffffffb74 R15: 0000000000000000
> > [    0.609250] FS:  0000000000000000(0000) GS:ffff9081f5c80000(0000) knlGS:0000000000000000
> > [    0.610249] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [    0.611248] CR2: 0000000000000000 CR3: 00000002f024a000 CR4: 00000000000006f0
> > [    0.612249] Call Trace:
> > [    0.612574]  <TASK>
> > [    0.612854]  ? __warn+0x8c/0x190
> > [    0.613248]  ? rcu_sr_normal_complete+0xa9/0xc0
> > [    0.613840]  ? report_bug+0x164/0x190
> > [    0.614248]  ? handle_bug+0x54/0x90
> > [    0.614705]  ? exc_invalid_op+0x17/0x70
> > [    0.615248]  ? asm_exc_invalid_op+0x1a/0x20
> > [    0.615797]  ? rcu_sr_normal_complete+0xa9/0xc0
> > [    0.616248]  rcu_gp_cleanup+0x403/0x5a0
> > [    0.616248]  ? __pfx_rcu_gp_kthread+0x10/0x10
> > [    0.616818]  rcu_gp_kthread+0x136/0x1c0
> > [    0.617249]  kthread+0xec/0x1f0
> > [    0.617664]  ? __pfx_kthread+0x10/0x10
> > [    0.618156]  ret_from_fork+0x2f/0x50
> > [    0.618728]  ? __pfx_kthread+0x10/0x10
> > [    0.619216]  ret_from_fork_asm+0x1a/0x30
> > [    0.620251]  </TASK>
> > ...
> > <snip>
> > 
> > Linus tip-tree, HEAD is c4b9570cfb63501638db720f3bee9f6dfd044b82
> 
> Very good!  And of course, the next question is "does going to _full()
> make the problem go away?"  ;-)
> 
Yes, it does its job if I apply:

https://lore.kernel.org/rcu/00900afe-ac4e-4362-a3f9-d65f2c9dcd9a@paulmck-laptop/T/#m5d9263f3825d3170c044beedbae741717702d4aa

After that I am not able to reproduce the warning anymore. Tested
overnight. Without it, I can reproduce it pretty easily :)
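
For completeness, below is a tiny illustrative sketch (not the actual patch
from the lore link above, just an assumed demo against the public polling
API) of what "going to _full()" refers to: the compact cookie from
get_state_synchronize_rcu() is cheap but less precise than the full-state
cookie handled by get_state_synchronize_rcu_full() and
poll_state_synchronize_rcu_full():

<snip>
/*
 * Illustrative only: contrasts the compact and full-state RCU polling
 * APIs. This is a hypothetical demo module, not the fix itself.
 */
#include <linux/module.h>
#include <linux/rcupdate.h>

static int __init rcu_poll_demo_init(void)
{
	unsigned long cookie;
	struct rcu_gp_oldstate full_cookie;

	/* Compact cookie: a single unsigned long, cheap but less precise. */
	cookie = get_state_synchronize_rcu();

	/* Full-state cookie: carries the complete grace-period state. */
	get_state_synchronize_rcu_full(&full_cookie);

	/* Wait for a full grace period to elapse. */
	synchronize_rcu();

	/* Both polls are expected to report the grace period as complete now. */
	pr_info("compact: %d, full: %d\n",
		poll_state_synchronize_rcu(cookie),
		poll_state_synchronize_rcu_full(&full_cookie));

	return 0;
}

static void __exit rcu_poll_demo_exit(void)
{
}

module_init(rcu_poll_demo_init);
module_exit(rcu_poll_demo_exit);
MODULE_LICENSE("GPL");
<snip>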

--
Uladzislau Rezki
