Date:   Wed, 28 Nov 2018 23:24:26 +0100
From:   Arnd Bergmann <arnd@...db.de>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Anders Roxell <anders.roxell@...aro.org>,
        Ingo Molnar <mingo@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dmitry Vyukov <dvyukov@...gle.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] kernel/trace: fix watchdog soft lockup

On Wed, Nov 28, 2018 at 3:09 PM Steven Rostedt <rostedt@...dmis.org> wrote:
>
> On Wed, 28 Nov 2018 09:13:34 +0100
> Anders Roxell <anders.roxell@...aro.org> wrote:
>
> > When building an allmodconfig kernel for arm64 and booting it in qemu,
> > CONFIG_FTRACE_STARTUP_TEST gets enabled, and the tests take long enough
> > that the watchdog expires and prints out a message like this:
> > 'watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:1]'
> > Each time ftrace_replace_code() gets called, its loop runs for 41424
> > iterations.
> > Rework so that cond_resched() gets called in the ftrace_replace_code()
> > loop.
> >
> > Co-developed-by: Arnd Bergmann <arnd@...db.de>
> > Signed-off-by: Arnd Bergmann <arnd@...db.de>
> > Signed-off-by: Anders Roxell <anders.roxell@...aro.org>
> > ---
> >  kernel/trace/ftrace.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 5b4f73e4fd56..3f456921dedf 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -2426,6 +2426,10 @@ void __weak ftrace_replace_code(int enable)
> >
> >       do_for_each_ftrace_rec(pg, rec) {
> >
> > +             /* This loop can take minutes when sanitizers are enabled, so
> > +              * let's make sure we allow RCU processing.
> > +              */
> > +             cond_resched();
> >               if (rec->flags & FTRACE_FL_DISABLED)
> >                       continue;
> >
>
> NACK.  On some architectures this code runs under stop_machine(), so
> we can't call cond_resched() here: the function may be invoked with
> interrupts disabled.
>
> This is a weak function. If arm64 has special needs, just copy it in
> the arm64 code.
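
For reference, copying the weak function into the arm64 code, as Steven
suggests, would look roughly like the sketch below. Note that
do_for_each_ftrace_rec() and __ftrace_replace_code() are private to
kernel/trace/ftrace.c today, so those helpers would have to be exposed
first; this only illustrates the shape, assuming arm64 never runs the
function under stop_machine():

/* arch/arm64/kernel/ftrace.c (hypothetical): override the generic
 * __weak implementation so we can reschedule between records.
 */
void ftrace_replace_code(int enable)
{
	struct ftrace_page *pg;
	struct dyn_ftrace *rec;
	int failed;

	do_for_each_ftrace_rec(pg, rec) {
		/* The loop is long; give the scheduler and RCU a chance. */
		cond_resched();

		if (rec->flags & FTRACE_FL_DISABLED)
			continue;

		failed = __ftrace_replace_code(rec, enable);
		if (failed) {
			ftrace_bug(failed, rec);
			/* Stop processing */
			return;
		}
	} while_for_each_ftrace_rec();
}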

I think it's currently broken on all architectures that don't already
override it, the problem being that the function is simply too
expensive when all debug options are enabled.

In an ARM64 allmodconfig kernel, there are 41424 ftrace records
that we iterate through several times. In an earlier version of the
patch, the cond_resched() was only in the loop in
init_trace_selftests(), and I think that version is safe and should
/mostly/ solve the problem, so maybe Anders can submit it again.
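
That earlier version looked roughly like this (a sketch, not the exact
posted hunk; line numbers elided):

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ static __init int init_trace_selftests(void)
 	list_for_each_entry_safe(p, n, &postponed_selftests, list) {
+		/* Each postponed selftest can run for many seconds with
+		 * sanitizers enabled, so reschedule between tests.
+		 */
+		cond_resched();
 		ret = run_tracer_selftest(p->type);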

However, at least trace_selftest_ops() still takes half a minute
to complete in qemu, and that triggers the softlockup watchdog.
trace_selftest_ops() calls ftrace_replace_code() four or five times.

Here is an excerpt, with printk timestamps, from one of Anders' test runs:

[    8.350607] Running postponed tracer tests:
[    8.356045] Testing tracer function:
[   18.932077] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   27.454205] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   27.462594] PASSED
[   27.462954] Testing dynamic ftrace:
[   28.510903] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   28.746934] PASSED
[   28.747469] Testing dynamic ftrace ops #1:
[   32.488427] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   32.501864] (1 0 1 0 0)
[   32.502041] (1 1 2 0 0)
[   50.213914] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   50.219736] (2 1 3 0 1066085)
[   50.220077] (2 2 4 0 1066100)
[   60.580678] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   60.758019] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   60.910501] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   60.918354] PASSED
[   60.919672] Testing dynamic ftrace ops #2:
[   64.680222] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   64.843430] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   81.247068] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   81.250895] (1 0 1 1033119 0)
[   81.251186] (1 1 2 1033134 0)
[   81.343168] (2 1 3 1 3732)
[   81.344492] (2 2 4 118 3849)
[   89.837665] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   89.844371] PASSED
[   89.844719] Testing ftrace recursion:
[   90.890373] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   91.042146] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   91.048475] PASSED
[   91.048806] Testing ftrace recursion safe:
[   92.091174] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   92.242403] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   92.249119] PASSED
[   92.249470] Testing ftrace regs(no arch support):
[   93.293605] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   93.444942] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[   93.451738] PASSED
[   93.452300] Testing tracer nop: PASSED
[   93.453288] Testing tracer irqsoff:
[  104.486368] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[  112.918828] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[  112.925809] PASSED
[  112.926435] Testing tracer function_graph:
[  123.303248] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[  132.599763] ../kernel/trace/ftrace.c:2441, loop_counter: 41424
[  132.607614] PASSED
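
(The loop_counter lines come from local debug instrumentation, not from
a mainline printk; presumably something along these lines at the end of
ftrace_replace_code(), with loop_counter incremented once per record:

	pr_info("%s:%d, loop_counter: %lu\n",
		__FILE__, __LINE__, loop_counter);

That would match the "../kernel/trace/ftrace.c:2441" prefix in the
messages above.)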

In particular, the test_probe3 pass in trace_selftest_ops() takes
around 20 seconds, i.e. roughly 482 microseconds per loop iteration
in ftrace_replace_code() (20 s / 41424 records).
Do you think there is another bug that makes it slower than
expected, or is that a reasonable amount of time for it to take?

       Arnd
