Date:   Tue, 24 Nov 2020 19:43:08 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
Cc:     Marco Elver <elver@...gle.com>, Will Deacon <will@...nel.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Anders Roxell <anders.roxell@...aro.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alexander Potapenko <glider@...gle.com>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Jann Horn <jannh@...gle.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        kasan-dev <kasan-dev@...glegroups.com>, rcu@...r.kernel.org,
        Peter Zijlstra <peterz@...radead.org>,
        Tejun Heo <tj@...nel.org>,
        Lai Jiangshan <jiangshanlai@...il.com>,
        linux-arm-kernel@...ts.infradead.org, boqun.feng@...il.com,
        tglx@...utronix.de
Subject: Re: linux-next: stall warnings and deadlock on Arm64 (was: [PATCH]
 kfence: Avoid stalling...)

On Tue, Nov 24, 2020 at 07:01:46AM -0800, Paul E. McKenney wrote:
> On Tue, Nov 24, 2020 at 03:03:10PM +0100, Marco Elver wrote:
> > [   91.184432] =============================
> > [   91.188301] WARNING: suspicious RCU usage
> > [   91.192316] 5.10.0-rc4-next-20201119-00002-g51c2bf0ac853 #25 Tainted: G        W        
> > [   91.197536] -----------------------------
> > [   91.201431] kernel/trace/trace_preemptirq.c:78 RCU not watching trace_hardirqs_off()!
> > [   91.206546] 
> > [   91.206546] other info that might help us debug this:
> > [   91.206546] 
> > [   91.211790] 
> > [   91.211790] rcu_scheduler_active = 2, debug_locks = 0
> > [   91.216454] RCU used illegally from extended quiescent state!
> > [   91.220890] no locks held by swapper/0/0.
> > [   91.224712] 
> > [   91.224712] stack backtrace:
> > [   91.228794] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W         5.10.0-rc4-next-20201119-00002-g51c2bf0ac853 #25
> > [   91.234877] Hardware name: linux,dummy-virt (DT)
> > [   91.239032] Call trace:
> > [   91.242587]  dump_backtrace+0x0/0x240
> > [   91.246500]  show_stack+0x34/0x88
> > [   91.250295]  dump_stack+0x140/0x1bc
> > [   91.254159]  lockdep_rcu_suspicious+0xe4/0xf8
> > [   91.258332]  trace_hardirqs_off+0x214/0x330
> > [   91.262462]  trace_graph_return+0x1ac/0x1d8
> > [   91.266564]  ftrace_return_to_handler+0xa4/0x170
> > [   91.270809]  return_to_handler+0x1c/0x38
> > [   91.274826]  default_idle_call+0x94/0x38c
> > [   91.278869]  do_idle+0x240/0x290
> > [   91.282633]  rest_init+0x1e8/0x2dc
> > [   91.286529]  arch_call_rest_init+0x1c/0x28
> > [   91.290585]  start_kernel+0x638/0x670

> This looks like tracing in the idle loop in a place where RCU is not
> watching.  Historically, this has been addressed by using _rcuidle()
> trace events, but the portion of the idle loop that RCU is watching has
> recently increased.  Last I checked, there were still a few holdouts (that
> would splat like this) in x86, though perhaps those have since been fixed.

Yup! I think this is a latent issue my debug hacks revealed (in addition
to a couple of other issues in the idle path), and still affects x86 and
others. It's only noticeable if you hack trace_hardirqs_{on,off}() to
check rcu_is_watching(), which I had at the tip of my tree.
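
For reference, the hack is essentially a check along the following lines in
trace_hardirqs_off() -- a sketch of the idea rather than the exact diff at
the tip of my tree:

	void trace_hardirqs_off(void)
	{
		/*
		 * Debug hack: splat (via lockdep_rcu_suspicious()) if the
		 * tracepoint fires while RCU is not watching, as in the
		 * report above.
		 */
		RCU_LOCKDEP_WARN(!rcu_is_watching(),
				 "RCU not watching trace_hardirqs_off()");

		/* ... the existing tracing/lockdep handling follows ... */
	}

trace_hardirqs_on() gets the same check.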

AFAICT, the issue is that arch_cpu_idle() can be dynamically traced with
ftrace, and hence the tracing code can unexpectedly run without RCU
watching. Since that's dynamic tracing, we can avoid it by marking
arch_cpu_idle() and friends as noinstr.
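
Concretely, the sort of change I have in mind is roughly the below (a
sketch only, ignoring the interrupt re-enabling the real arm64 function
also does; cpu_do_idle() and the other "friends" would need the same
treatment):

	/*
	 * noinstr forbids ftrace/kprobes from patching this function, so
	 * no tracing code can run from here while RCU is not watching.
	 */
	void noinstr arch_cpu_idle(void)
	{
		cpu_do_idle();
	}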

I'll see about getting this fixed before we upstream the debug hack.

Thanks,
Mark.
