Date:	Fri, 12 Dec 2014 16:58:07 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Sasha Levin <sasha.levin@...cle.com>
Cc:	David Lang <david@...g.hm>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>, Chris Mason <clm@...com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	Ingo Molnar <mingo@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Dâniel Fraga <fragabr@...il.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: frequent lockups in 3.18rc4

On Fri, Dec 12, 2014 at 04:23:56PM -0500, Sasha Levin wrote:
> On 12/12/2014 03:34 PM, Paul E. McKenney wrote:
> > On Fri, Dec 12, 2014 at 11:58:50AM -0800, David Lang wrote:
> > > On Fri, 12 Dec 2014, Linus Torvalds wrote:
> > > 
> > > > I'm also not sure if the bug ever happens with preemption disabled.
> > > > Sasha, was that you who reported that you cannot reproduce it without
> > > > preemption? It strikes me that there's a race condition in
> > > > __cond_resched() wrt preemption, for example: we do
> > > > 
> > > >        __preempt_count_add(PREEMPT_ACTIVE);
> > > >        __schedule();
> > > >        __preempt_count_sub(PREEMPT_ACTIVE);
> > > > 
> > > > and in between the __schedule() and __preempt_count_sub(), if an
> > > > interrupt comes in and wakes up some important process, it won't
> > > > reschedule (because preemption is active), but then we enable
> > > > preemption again and don't check whether we should reschedule (again),
> > > > and we just go on our merry ways.
> > > > 
> > > > Now, I don't see how that could really matter for a long time -
> > > > returning to user space will check need_resched, and sleeping will
> > > > obviously force a reschedule anyway, so these kinds of races should at
> > > > most delay things by just a tiny amount,
> > > 
> > > If the machine has NOHZ and has a cpu bound userspace task, it could
> > > take quite a while before userspace would trigger a reschedule (at
> > > least if I've understood the comments on this thread properly)
> > Dave, Sasha, if you guys are running CONFIG_NO_HZ_FULL=y and
> > CONFIG_NO_HZ_FULL_ALL=y, please let me know.  I am currently assuming
> > that none of your CPUs are in NO_HZ_FULL mode.  If this assumption is
> > incorrect, there are some other pieces of RCU that I should be taking
> > a hard look at.
> 
> This is my no_hz related config:
> 
> $ grep NO_HZ .config
> CONFIG_NO_HZ_COMMON=y
> # CONFIG_NO_HZ_IDLE is not set
> CONFIG_NO_HZ_FULL=y
> CONFIG_NO_HZ_FULL_ALL=y
> CONFIG_NO_HZ_FULL_SYSIDLE=y
> CONFIG_NO_HZ_FULL_SYSIDLE_SMALL=8
> CONFIG_NO_HZ=y
> CONFIG_RCU_FAST_NO_HZ=y
> 
> And from dmesg:
> 
> [    0.000000] Preemptible hierarchical RCU implementation.
> [    0.000000]  RCU debugfs-based tracing is enabled.
> [    0.000000]  Hierarchical RCU autobalancing is disabled.
> [    0.000000]  RCU dyntick-idle grace-period acceleration is enabled.
> [    0.000000]  Additional per-CPU info printed with stalls.
> [    0.000000]  RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=28.
> [    0.000000]  RCU kthread priority: 1.
> [    0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=28
> [    0.000000] NR_IRQS:524544 nr_irqs:648 16
> [    0.000000] NO_HZ: Clearing 0 from nohz_full range for timekeeping
> [    0.000000] NO_HZ: Full dynticks CPUs: 1-27.
> [    0.000000]  Offload RCU callbacks from CPUs: 1-27.

Thank you, Sasha.  Looks like I have a few more places to take a hard
look at, then!

							Thanx, Paul

