Date:	Thu, 17 Jan 2013 13:39:17 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Jan Kara <jack@...e.cz>
Cc:	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	LKML <linux-kernel@...r.kernel.org>, jslaby@...e.cz
Subject: Re: [PATCH] printk: Avoid softlockups in console_unlock()

On Thu, 17 Jan 2013 22:04:42 +0100
Jan Kara <jack@...e.cz> wrote:

> ...
>
>   So I played a bit with this. To make things easier for me I added an
> artificial mdelay(len*10) (effectively simulating a console able to print
> 100 characters per second) just after call_console_drivers() so that I
> could trigger issues even on a machine easily available to me. Booting
> actually doesn't trigger any problems because there isn't enough happening
> in parallel on a common machine during boot, but
>   echo t >/proc/sysrq-trigger &
>   for i in /lib/modules/3.8.0-rc3-0-default/kernel/fs/*/*.ko; do 
>     name=`basename $i`; name=${name%.ko}; modprobe $name
>   done
> easily triggers the problem (as modprobe uses both RCU & IPIs to signal all
> CPUs).
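
A minimal sketch of that artificial delay, assuming the 3.8-era printing
loop in kernel/printk.c (the placement after call_console_drivers() is as
described above; level, text and len are the arguments that function took
at the time):

	/* inside the console_unlock() loop, right after the console output */
	call_console_drivers(level, text, len);
	/* testing only: 10ms per character simulates a ~100 chars/sec console */
	mdelay(len * 10);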
> 
>   Adding
> 	touch_nmi_watchdog();
> 	touch_all_softlockup_watchdogs();
> 	rcu_cpu_stall_reset();

I'm not sure that touch_all_softlockup_watchdogs() is needed? 
touch_nmi_watchdog() itself calls touch_softlockup_watchdog().

If the rcu_cpu_stall_reset() is needed here then presumably it is
needed elsewhere and we should put a call to rcu_cpu_stall_reset() into
(the increasingly misnamed) touch_nmi_watchdog().
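
A sketch of what folding it in might look like (hypothetical; the existing
body of touch_nmi_watchdog() in kernel/watchdog.c is elided in a comment):

	#include <linux/rcupdate.h>	/* rcu_cpu_stall_reset() */

	void touch_nmi_watchdog(void)
	{
		/* ... existing per-CPU watchdog_nmi_touch updates ... */
		rcu_cpu_stall_reset();		/* new: also defer RCU stall warnings */
		touch_softlockup_watchdog();	/* already covers the softlockup case */
	}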

>   into the printk loop did stop all the warnings and the machine eventually
> came alive again after it finished printing the sysrq-t output (while
> printing, the machine was responding to ping but ssh stopped working after
> a while - I'm not sure what was happening, but apparently some IO requests
> weren't completing and maybe networking started dropping packets because it
> had accumulated too much RCU work).
> 
> So your suggestion certainly seems plausible. I was just wondering:
>   a) The above three lines can be pretty expensive on big machines, as they
> iterate over all CPUs. So we should probably limit them to once per jiffy or
> something like that?

I guess so - is it hard to test the effects of such a change?  Maybe do
a few MB of printks with the output disabled with `dmesg -n' and see
what effect such a patch has?

If it does need ratelimiting, I'd worry about using jiffies for that. 
If the kernel is spending a long time with interrupts disabled, jiffies
might not be incrementing.  Using the CPU timestamp would be better
(e.g., sched_clock()).
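
A rough sketch of ratelimiting on sched_clock() instead of jiffies (the
100ms threshold and the static variable are assumptions, not anything from
the patch under discussion):

	/* in the console_unlock() loop; sched_clock() returns nanoseconds
	 * and, on most configurations, keeps advancing even when jiffies
	 * does not */
	static u64 last_touch;

	if (sched_clock() - last_touch > 100 * NSEC_PER_MSEC) {
		last_touch = sched_clock();
		touch_nmi_watchdog();
		rcu_cpu_stall_reset();
	}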

>   b) The above three lines can make softlockup detection pretty useless if
> there's enough printk traffic (it would be enough to printk() something
> every 10s or so, which can happen with netfilter logging packets, for
> example).

Yes, that is a concern.

> But if we touch the watchdogs only when we spend more than one jiffy in
> console_unlock(), we should hopefully touch them only in rare cases of
> heavy printk traffic.

yup.  Another option might be to do the touch_nmi_watchdog() only if
there is a "large" amount of data being emitted within
console_unlock().  Like your 1000-character threshold.
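
A sketch of that variant ('emitted' is a hypothetical local byte counter
inside console_unlock(), reset whenever the watchdogs are touched):

	emitted += len;
	if (emitted > 1000) {
		emitted = 0;
		touch_nmi_watchdog();
	}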

> PS: sysrq-t was ~200 KB on that freshly booted machine, so on a busy machine
> that would trigger softlockups with a 115200 serial console as well (I
> actually do remember seeing it in some customers' reports). So it's not
> just boot.

Yes, we have hit this before.  Large printks over slow console devices.
See the sad little touch_nmi_watchdog() in lib/show_mem.c, for
example.  I actually thought we'd fixed this in printk itself but it
seems not - the two touch_nmi_watchdog()s in there are for special
cases.

Actually, a bit of grepping for touch_nmi_watchdog() is interesting.  A
number of serial drivers are doing it, which makes me wonder why your
customers weren't saved by that - using the wrong driver, perhaps?  If
we can get this fixed centrally then a lot of those calls should be
removable.

The presence of all those touch_nmi_watchdog() calls around the place
has implications for the testing of your patch, btw ;)
