Message-ID: <20230809201413.GA3374446@google.com>
Date:   Wed, 9 Aug 2023 20:14:13 +0000
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Guenter Roeck <linux@...ck-us.net>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, patches@...ts.linux.dev,
        linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
        akpm@...ux-foundation.org, shuah@...nel.org, patches@...nelci.org,
        lkft-triage@...ts.linaro.org, pavel@...x.de, jonathanh@...dia.com,
        f.fainelli@...il.com, sudipm.mukherjee@...il.com,
        srw@...dewatkins.net, rwarsow@....de, conor@...nel.org,
        paulmck@...nel.org
Subject: Re: [PATCH 5.15 00/92] 5.15.126-rc1 review

On Wed, Aug 09, 2023 at 12:25:48PM -0700, Guenter Roeck wrote:
> On Wed, Aug 09, 2023 at 02:35:59PM -0400, Joel Fernandes wrote:
> > On Wed, Aug 9, 2023 at 12:18 PM Guenter Roeck <linux@...ck-us.net> wrote:
> > >
> > > On 8/9/23 06:53, Joel Fernandes wrote:
> > > > On Wed, Aug 09, 2023 at 12:40:36PM +0200, Greg Kroah-Hartman wrote:
> > > >> This is the start of the stable review cycle for the 5.15.126 release.
> > > >> There are 92 patches in this series, all will be posted as a response
> > > >> to this one.  If anyone has any issues with these being applied, please
> > > >> let me know.
> > > >>
> > > >> Responses should be made by Fri, 11 Aug 2023 10:36:10 +0000.
> > > >> Anything received after that time might be too late.
> > > >>
> > > >> The whole patch series can be found in one patch at:
> > > >>      https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.126-rc1.gz
> > > >> or in the git tree and branch at:
> > > >>      git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y
> > > >> and the diffstat can be found below.
> > > >
> > > > Not necessarily new with 5.15 stable, but 3 of the 19 rcutorture
> > > > scenarios hang with this -rc: TREE04, TREE07, TASKS03.
> > > >
> > > > 5.15 has a known stop-machine issue where it hangs after 1.5 hours of
> > > > CPU hotplug rcutorture testing. tglx and I are continuing to debug
> > > > this. The issue shows up only on 5.15 stable kernels, not on mainline.
> > > >
> > >
> > > Do you by any chance have a crash pattern that we could possibly use to
> > > find the crash in ChromeOS crash logs? No idea if that would help, but it
> > > could provide some additional data points.
> > 
> > The pattern shows as a hard hang: the system is unresponsive and all CPUs
> > are stuck in stop_machine. Sometimes it recovers on its own from the hang
> > and then RCU immediately gives stall warnings. It takes 1.5 hours to
> > reproduce and sometimes never happens for several hours.
> > 
> > It appears related to CPU hotplug since gdb showed me most of the CPUs
> > are spinning in multi_cpu_stop() / stop machine after the hang.
> > 
> 
> Hmm, we do see lots of soft lockups with multi_cpu_stop() in the backtrace,
> though with v5.4.y rather than v5.15.y. The actual hang is in
> stop_machine_yield().

Interesting. It looks similar as far as the stack dump in gdb goes; here are
the stacks I dumped for the hang I referred to:
https://paste.debian.net/1288308/

But in dmesg, nothing is printed for about 20-30 minutes before it recovers,
and then I get RCU stall warnings. It looks like this:

[  682.721962] kvm-clock: cpu 7, msr 199981c1, secondary cpu clock
[  682.736830] kvm-guest: stealtime: cpu 7, msr 1f5db140
[  684.445875] smpboot: Booting Node 0 Processor 5 APIC 0x5
[  684.467831] kvm-clock: cpu 5, msr 19998141, secondary cpu clock
[  684.555766] kvm-guest: stealtime: cpu 5, msr 1f55b140
[  687.356637] smpboot: Booting Node 0 Processor 4 APIC 0x4
[  687.377214] kvm-clock: cpu 4, msr 19998101, secondary cpu clock
[ 2885.473742] kvm-guest: stealtime: cpu 4, msr 1f51b140
[ 2886.456408] rcu: INFO: rcu_sched self-detected stall on CPU
[ 2886.457590] rcu_torture_fwd_prog_nr: Duration 15423 cver 170 gps 337
[ 2886.464934] rcu: 0-...!: (2 ticks this GP) idle=7eb/0/0x1 softirq=118271/118271 fqs=0 last_accelerate: e3cd/71c0 dyntick_enabled: 1
[ 2886.490837] (t=2199034 jiffies g=185489 q=4)
[ 2886.497297] rcu: rcu_sched kthread timer wakeup didn't happen for 2199031 jiffies! g185489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
[ 2886.514201] rcu: Possible timer handling issue on cpu=0 timer-softirq=441616
[ 2886.524593] rcu: rcu_sched kthread starved for 2199034 jiffies! g185489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
[ 2886.540067] rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
[ 2886.551967] rcu: RCU grace-period kthread stack dump:
[ 2886.558644] task:rcu_sched       state:I stack:14896 pid:   15 ppid:     2 flags:0x00004000
[ 2886.569640] Call Trace:
[ 2886.572940]  <TASK>
[ 2886.575902]  __schedule+0x284/0x6e0
[ 2886.580969]  schedule+0x53/0xa0
[ 2886.585231]  schedule_timeout+0x8f/0x130

In that huge gap, I connected gdb and dumped the stacks shown in the link above.
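
For reference, attaching to the hung guest looks something like the below.
This assumes the guest was started with a QEMU gdb stub (qemu's -s flag, or
kvm.sh's gdb support if your tree has it) and that the matching vmlinux with
debug info is at hand; adjust paths and ports for your setup:

  # Attach to QEMU's gdb stub (default port 1234) and dump backtraces
  # for all vCPUs of the hung guest:
  gdb -ex 'target remote :1234' \
      -ex 'set pagination off' \
      -ex 'thread apply all bt' \
      path/to/vmlinux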

On 5.15 stable, you can usually repro it in about an hour and a half by
running something like:
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 48 --duration 60 --configs TREE04
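
If it reproduces, the stall should also show up in the per-scenario console
log that kvm.sh writes; the res/ layout below is from memory, so the exact
path may differ on your setup:

  # kvm.sh puts results in a date-stamped directory; grep the console
  # log of the scenario for the stall messages:
  grep -i 'self-detected stall\|kthread starved' \
      tools/testing/selftests/rcutorture/res/*/TREE04/console.log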

Let me know if you have seen anything like this. I am currently trying to
make the kernel panic when the hang happens so I can get better traces.
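
One way to do that, assuming kvm.sh's --bootargs option and these knobs are
available on the 5.15 tree (worth double-checking), is to make the guest
panic as soon as a stall or soft lockup is detected:

  # Turn the RCU stall / soft lockup warnings into panics so the run
  # dies with a backtrace right at the point of the hang:
  tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 48 --duration 60 \
      --configs TREE04 \
      --bootargs "softlockup_panic=1 sysctl.kernel.panic_on_rcu_stall=1"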

thanks,

 - Joel
