Message-ID: <Z5PSeEn_ceFuqbnz@pc636>
Date: Fri, 24 Jan 2025 18:48:40 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Uladzislau Rezki <urezki@...il.com>, Boqun Feng <boqun.feng@...il.com>,
RCU <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Cheung Wall <zzqq0103.hey@...il.com>,
Neeraj upadhyay <Neeraj.Upadhyay@....com>,
Joel Fernandes <joel@...lfernandes.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>
Subject: Re: [PATCH 2/4] torture: Remove CONFIG_NR_CPUS configuration
On Fri, Jan 24, 2025 at 09:36:07AM -0800, Paul E. McKenney wrote:
> On Fri, Jan 24, 2025 at 06:21:30PM +0100, Uladzislau Rezki wrote:
> > On Fri, Jan 24, 2025 at 07:45:23AM -0800, Paul E. McKenney wrote:
> > > On Fri, Jan 24, 2025 at 12:41:38PM +0100, Uladzislau Rezki wrote:
> > > > On Thu, Jan 23, 2025 at 12:29:45PM -0800, Paul E. McKenney wrote:
> > > > > On Thu, Jan 23, 2025 at 07:58:26PM +0100, Uladzislau Rezki (Sony) wrote:
> > > > > > This configuration sets the maximum number of CPUs to 8.
> > > > > > The problem is that it cannot be overridden with something
> > > > > > higher.
> > > > > >
> > > > > > Remove that configuration for TREE05, so it is possible to run
> > > > > > the torture test on as many CPUs as the system has.
> > > > > >
> > > > > > Signed-off-by: Uladzislau Rezki (Sony) <urezki@...il.com>
> > > > >
> > > > > You should be able to override this on the kvm.sh command line by
> > > > > specifying "--kconfig CONFIG_NR_CPUS=128" or whatever number you wish.
> > > > > For example, see the torture.sh querying the system's number of CPUs
> > > > > and then specifying it to a number of tests.
> > > > >
> > > > > Or am I missing something here?
> > > > >
> > > > It took me a while to understand what happens. Apparently there is this
> > > > 8-CPU limitation. Yes, I can do it manually by passing --kconfig, but
> > > > you need to know about that. I had not expected it.
> > > >
> > > > Therefore I removed it from the configuration, because I have not found
> > > > a good explanation of why we need it. It is just confusing :)
> > >
> > > Right now, if I do a run with --configs "TREE10 14*CFLIST", this will
> > > make use of 20 systems with 80 CPUs each. If you remove that line from
> > > TREE05, won't each instance of TREE05 consume a full system, for a total
> > > of 33 systems? Yes, I could use "--kconfig CONFIG_NR_CPUS=8" on the
> > > command line, but that would affect all the scenarios, not just TREE05.
> > > Including (say) TINY01, where I believe that it would cause kvm.sh
> > > to complain about a Kconfig conflict.
> > >
> > > Hence me not being in favor of this change. ;-)
> > >
> > > Is there another way to make things work for both situations?
> > >
> > OK, I see. Well, I will just go with --kconfig CONFIG_NR_CPUS=foo if I
> > need more CPUs for TREE05.
> >
> > I will not resist; let's just drop this patch :)
>
> Thank you!
>
> The bug you are chasing happens when a given synchronize_rcu() interacts
> with RCU readers, correct?
>
The one below:
<snip>
/*
* RCU torture fake writer kthread. Repeatedly calls sync, with a random
* delay between calls.
*/
static int
rcu_torture_fakewriter(void *arg)
{
...
<snip>
> In rcutorture, only the rcu_torture_writer() call to synchronize_rcu()
> interacts with rcu_torture_reader(). So my guess is that running
> many small TREE05 guest OSes would reproduce this bug more quickly.
> So instead of this:
>
> --kconfig CONFIG_NR_CPUS=128
>
> Do this:
>
> --configs "16*TREE05"
>
> Or maybe even this:
>
> --configs "16*TREE05" --kconfig CONFIG_NR_CPUS=4
Thanks for the input.
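Just to be sure I follow, that would translate into a full invocation roughly
like the one below (only --configs and --kconfig come from your suggestion;
the --cpus and --memory values here are just placeholders):

<snip>
# 16 small TREE05 guests with 4 CPUs each, instead of one big instance.
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 --memory 20G \
	--configs "16*TREE05" --kconfig CONFIG_NR_CPUS=4
<snip>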
>
> Thoughts?
>
If you mean the splat below:
<snip>
[ 32.107748] =============================
[ 32.108512] WARNING: suspicious RCU usage
[ 32.109232] 6.12.0-rc4-dirty #66 Not tainted
[ 32.110058] -----------------------------
[ 32.110817] kernel/events/core.c:13962 RCU-list traversed in non-reader section!!
[ 32.111221] kworker/u34:2 (251) used greatest stack depth: 12112 bytes left
[ 32.112125]
[ 32.112125] other info that might help us debug this:
[ 32.112125]
[ 32.112130]
[ 32.112130] rcu_scheduler_active = 2, debug_locks = 1
[ 32.116039] 3 locks held by cpuhp/1/20:
[ 32.116758] #0: ffffffff93a6a750 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x50/0x220
[ 32.118410] #1: ffffffff93a6ce00 (cpuhp_state-down){+.+.}-{0:0}, at: cpuhp_thread_fun+0x50/0x220
[ 32.120091] #2: ffffffff93b7eb68 (pmus_lock){+.+.}-{3:3}, at: perf_event_exit_cpu_context+0x32/0x2d0
[ 32.121723]
[ 32.121723] stack backtrace:
[ 32.122413] CPU: 1 UID: 0 PID: 20 Comm: cpuhp/1 Not tainted 6.12.0-rc4-dirty #66
[ 32.123666] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 32.125302] Call Trace:
[ 32.125769] <TASK>
[ 32.126148] dump_stack_lvl+0x83/0xa0
[ 32.126823] lockdep_rcu_suspicious+0x113/0x180
[ 32.127652] perf_event_exit_cpu_context+0x2c4/0x2d0
[ 32.128593] ? __pfx_perf_event_exit_cpu+0x10/0x10
[ 32.129489] perf_event_exit_cpu+0x9/0x10
[ 32.130243] cpuhp_invoke_callback+0x187/0x6e0
[ 32.131065] ? cpuhp_thread_fun+0x50/0x220
[ 32.131800] cpuhp_thread_fun+0x185/0x220
[ 32.132560] ? __pfx_smpboot_thread_fn+0x10/0x10
[ 32.133394] smpboot_thread_fn+0xd8/0x1d0
[ 32.134050] kthread+0xd0/0x100
[ 32.134592] ? __pfx_kthread+0x10/0x10
[ 32.135270] ret_from_fork+0x2f/0x50
[ 32.135896] ? __pfx_kthread+0x10/0x10
[ 32.136610] ret_from_fork_asm+0x1a/0x30
[ 32.137356] </TASK>
[ 32.140997] smpboot: CPU 1 is now offline
<snip>
I reproduced that using:
<snip>
+rcutorture.nfakewriters=128
+rcutorture.gp_sync=1
+rcupdate.rcu_expedited=0
+rcupdate.rcu_normal=1
+rcutree.rcu_normal_wake_from_gp=1
<snip>
The test script:

for (( i=0; i<$LOOPS; i++ )); do
	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 --configs \
		'100*TREE05' --memory 20G --bootargs 'rcutorture.fwd_progress=1'
	echo "Done $i"
done
i.e. with a much larger number of fake writers (rcutorture.nfakewriters=128).
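For reference, a sketch of how those parameters can be combined with the
script above, assuming they are passed via --bootargs (they could equally be
added to the TREE05.boot file instead):

<snip>
# Same kvm.sh invocation as above, with the extra rcutorture/rcupdate/rcutree
# boot parameters appended to --bootargs.
BOOTARGS="rcutorture.fwd_progress=1 rcutorture.nfakewriters=128"
BOOTARGS="$BOOTARGS rcutorture.gp_sync=1 rcupdate.rcu_expedited=0"
BOOTARGS="$BOOTARGS rcupdate.rcu_normal=1 rcutree.rcu_normal_wake_from_gp=1"

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 --memory 20G \
	--configs '100*TREE05' --bootargs "$BOOTARGS"
<snip>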
If you mean the one that was recently reported, I have not been able to
reproduce it at all :)
--
Uladzislau Rezki