Message-ID: <CAEXW_YS3hK8Y5TKCPvnNC9fsbmmMvcjx2f-G4uCXX=F2WNz-HQ@mail.gmail.com>
Date: Fri, 28 Jul 2023 21:25:35 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: paulmck@...nel.org
Cc: Guenter Roeck <linux@...ck-us.net>, Pavel Machek <pavel@...x.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, patches@...ts.linux.dev,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, shuah@...nel.org, patches@...nelci.org,
lkft-triage@...ts.linaro.org, jonathanh@...dia.com,
f.fainelli@...il.com, sudipm.mukherjee@...il.com,
srw@...dewatkins.net, rwarsow@....de, conor@...nel.org,
rcu@...r.kernel.org
Subject: Re: [PATCH 6.4 000/227] 6.4.7-rc1 review
On Fri, Jul 28, 2023 at 6:58 PM Paul E. McKenney <paulmck@...nel.org> wrote:
>
> > On Fri, Jul 28, 2023 at 05:17:59PM -0400, Joel Fernandes wrote:
> > On Jul 27, 2023, at 7:18 PM, Joel Fernandes <joel@...lfernandes.org> wrote:
> > On Jul 27, 2023, at 4:33 PM, Paul E. McKenney <paulmck@...nel.org> wrote:
> > On Thu, Jul 27, 2023 at 10:39:17AM -0700, Guenter Roeck wrote:
> > On 7/27/23 09:07, Paul E. McKenney wrote:
> >
> > [...]
> >
> > No. However, (unrelated) in linux-next, rcu tests sometimes result
> > in apparent hangs or long runtime.
> >
> > [ 0.778841] Mount-cache hash table entries: 512 (order: 0, 4096 bytes, linear)
> > [ 0.779011] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes, linear)
> > [ 0.797998] Running RCU synchronous self tests
> > [ 0.798209] Running RCU synchronous self tests
> > [ 0.912368] smpboot: CPU0: AMD Opteron 63xx class CPU (family: 0x15, model: 0x2, stepping: 0x0)
> > [ 0.923398] RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
> > [ 0.925419] Running RCU-tasks wait API self tests
> >
> > (hangs until aborted). This is primarily with Opteron CPUs, but also
> > with others such as Haswell,
[...]
> > Building x86_64:q35:Icelake-Server:defconfig:preempt:smp4:net,ne2k_pci:efi:mem2G:virtio:cd ... running ......... passed
[...]
> > I freely confess that I am having a hard time imagining what would
> > be CPU dependent in that code.  Timing, maybe?  Whatever the reason,
> > I am not seeing these failures in my testing.
> >
> > So which of the following Kconfig options is defined in your .config?
> > CONFIG_TASKS_RCU, CONFIG_TASKS_RUDE_RCU, and CONFIG_TASKS_TRACE_RCU.
> >
> > If you have more than one of them, could you please apply this patch
> > and show me the corresponding console output from the resulting hang?
> >
> > FWIW, I am not able to reproduce this issue either. If a .config from
> > the problem system can be shared, I can try it out and see whether it
> > reproduces on my side.
> >
> > I do see this now on 5.15 stable:
> >
> > TASKS03 ------- 3089 GPs (0.858056/s)
> > QEMU killed
> > TASKS03 no success message, 64 successful version messages
> > !!! PID 3309783 hung at 3781 vs. 3600 seconds
> >
> > I have not looked too closely yet. The full test artifacts are here:
> >
> > Artifacts of linux-5.15.y 5.15.123 [Jenkins]:
> > http://box.joelfernandes.org:9080/job/rcutorture_stable/job/linux-5.15.y/lastFailedBuild/artifact/tools/testing/selftests/rcutorture/res/2023.07.28-04.00.44/
> >
> > Thanks,
> >
> > - Joel
> >
> > (Apologies if the email is html, I am sending from phone).
>
> Heh. I have a script that runs lynx. Which isn't perfect, but usually
> makes things at least somewhat legible.
Sorry, I was too optimistic about the iPhone's capabilities when it
came to mailing list emails.
Here's what I said:
--------------
I do see this now on 5.15 stable:
TASKS03 ------- 3089 GPs (0.858056/s)
QEMU killed
TASKS03 no success message, 64 successful version messages
!!! PID 3309783 hung at 3781 vs. 3600 seconds
Link to full logs/artifacts:
http://box.joelfernandes.org:9080/job/rcutorture_stable/job/linux-5.15.y/lastFailedBuild/artifact/tools/testing/selftests/rcutorture/res/2023.07.28-04.00.44/
----------------
> This looks like the prototypical hard hang with interrupts disabled,
> which could be anywhere in the kernel, including RCU. I am not seeing
> this, but the usual cause when I have seen it in the past was deadlock
> of irq-disabled locks. In one spectacular case, it was a timekeeping
> failure that messed up a CPU-hotplug operation.
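
(To spell out, mostly for my own notes, why such a hang leaves nothing
on the console: the classic shape is an ABBA deadlock with interrupts
masked, something like the entirely made-up example below. The lock
names and the two code paths are purely illustrative, not from any real
report.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);

/* CPU 0 path: takes A then B with interrupts off. */
void cpu0_path(void)
{
        unsigned long flags;

        spin_lock_irqsave(&lock_a, flags);
        spin_lock(&lock_b);             /* spins forever if CPU 1 holds B */
        /* ... */
        spin_unlock(&lock_b);
        spin_unlock_irqrestore(&lock_a, flags);
}

/* CPU 1 path: takes B then A, also with interrupts off. */
void cpu1_path(void)
{
        unsigned long flags;

        spin_lock_irqsave(&lock_b, flags);
        spin_lock(&lock_a);             /* spins forever if CPU 0 holds A */
        /* ... */
        spin_unlock(&lock_a);
        spin_unlock_irqrestore(&lock_b, flags);
}

Once both CPUs are spinning with interrupts masked, nothing short of an
NMI can get a backtrace out, so the console simply goes quiet.)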
>
> If this is reproducible, one trick would be to have a script look at
> the console.log file, and have it do something (NMI? sysrq? something
> else?) to qemu if output ceased for too long.
>
> One way to do this without messing with the rcutorture scripting is to
> grab the qemu-cmd file from this run, and then invoke that file from your
> own script, possibly with suitable modifications to qemu's parameters.
Would it be better to have such monitoring as part of rcutorture
testing itself? Alternatively, there is the NMI hardlockup detector,
which I believe should also detect such cases and dump stacks.
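
In the meantime, the external watchdog you describe could be something
as simple as the rough, untested sketch below. The console.log name,
the monitor socket path, and both timeouts are made up, and it assumes
the qemu-cmd invocation is amended with something like
"-monitor unix:/tmp/rcutorture-mon,server,nowait":

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/un.h>

/* Ask qemu, via its human-monitor socket, to inject an NMI. */
static void inject_nmi(const char *monpath)
{
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
                return;
        strncpy(sa.sun_path, monpath, sizeof(sa.sun_path) - 1);
        if (!connect(fd, (struct sockaddr *)&sa, sizeof(sa)))
                write(fd, "nmi\n", 4);  /* HMP "nmi" command */
        close(fd);
}

int main(void)
{
        const char *log = "console.log";        /* made-up file name */
        const char *mon = "/tmp/rcutorture-mon";        /* made-up socket */
        off_t last_size = -1;
        time_t last_change = time(NULL);
        struct stat st;

        for (;;) {
                sleep(10);
                if (stat(log, &st))
                        continue;
                if (st.st_size != last_size) {
                        last_size = st.st_size;
                        last_change = time(NULL);
                } else if (time(NULL) - last_change > 120) {
                        fprintf(stderr, "console quiet too long, injecting NMI\n");
                        inject_nmi(mon);
                        last_change = time(NULL);       /* don't spam NMIs */
                }
        }
        return 0;
}

Not pretty, but it would at least get a backtrace out of a wedged guest
without touching the rcutorture scripts.
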
thanks,
- Joel
>
> Thoughts?
>
> Thanx, Paul
>
> > Cheers,
> > - Joel
> >
> > Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > commit 709a917710dc01798e01750ea628ece4bfc42b7b
> > Author: Paul E. McKenney <paulmck@...nel.org>
> > Date:   Thu Jul 27 13:13:46 2023 -0700
> >
> >     rcu-tasks: Add printk()s to localize boot-time self-test hang
> >
> >     Currently, rcu_tasks_initiate_self_tests() prints a message and then
> >     initiates self tests on up to three different RCU Tasks flavors.  If one
> >     of the flavors has a grace-period hang, it is not easy to work out which
> >     of the three hung.  This commit therefore prints a message prior to each
> >     individual test.
> >
> >     Reported-by: Guenter Roeck <linux@...ck-us.net>
> >     Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> >
> > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > index 56c470a489c8..427433c90935 100644
> > --- a/kernel/rcu/tasks.h
> > +++ b/kernel/rcu/tasks.h
> > @@ -1981,20 +1981,22 @@ static void test_rcu_tasks_callback(struct rcu_head *rhp)
> >
> >  static void rcu_tasks_initiate_self_tests(void)
> >  {
> > -        pr_info("Running RCU-tasks wait API self tests\n");
> >  #ifdef CONFIG_TASKS_RCU
> > +        pr_info("Running RCU Tasks wait API self tests\n");
> >          tests[0].runstart = jiffies;
> >          synchronize_rcu_tasks();
> >          call_rcu_tasks(&tests[0].rh, test_rcu_tasks_callback);
> >  #endif
> >
> >  #ifdef CONFIG_TASKS_RUDE_RCU
> > +        pr_info("Running RCU Tasks Rude wait API self tests\n");
> >          tests[1].runstart = jiffies;
> >          synchronize_rcu_tasks_rude();
> >          call_rcu_tasks_rude(&tests[1].rh, test_rcu_tasks_callback);
> >  #endif
> >
> >  #ifdef CONFIG_TASKS_TRACE_RCU
> > +        pr_info("Running RCU Tasks Trace wait API self tests\n");
> >          tests[2].runstart = jiffies;
> >          synchronize_rcu_tasks_trace();
> >          call_rcu_tasks_trace(&tests[2].rh, test_rcu_tasks_callback);