Message-Id: <20180109152234.GU9671@linux.vnet.ibm.com>
Date: Tue, 9 Jan 2018 07:22:34 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: Can RCU stall lead to hard lockups?

On Tue, Jan 09, 2018 at 06:11:14AM -0800, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Jan 08, 2018 at 08:24:25PM -0800, Paul E. McKenney wrote:
> > > I don't know the RCU code at all, but it *looks* like the first CPU
> > > is taking quite a while to flush the printk buffer while holding a
> > > lock (the console is an IPMI serial console, which faithfully
> > > emulates a 115200 baud rate), and everyone else seems stuck waiting
> > > for that spinlock in rcu_check_callbacks().
> > >
> > > Does this sound possible?
> >
> > 115200 baud? Ouch!!! That -will- result in trouble from console
> > printing, and often also in RCU CPU stall warnings.
>
> It could even be slower than 115200, and we occasionally see RCU
> stall warnings caused by printk storms, for example, while the kernel
> is trying to dump a lot of info after an OOM.  That's an issue we
> probably want to improve on the printk side; however, those storms
> don't usually lead to the NMI hard-lockup detector kicking in and
> crashing the machine, which is the peculiarity here.
>
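For rough scale (illustrative numbers, not measurements from your
report): at 115200 baud with 8N1 framing, each character costs ten
bits, so the console drains at most ~11,520 characters per second.  A
stall dump on the order of 50 KB therefore monopolizes the console for
more than four seconds, and a few such bursts with interrupts disabled
can approach the hard-lockup detector's default threshold
(watchdog_thresh, ten seconds).
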
> Hmmm... show_state_filter(), the function which dumps all task
> backtraces, shares a similar problem, and it avoids it by explicitly
> calling touch_nmi_watchdog().  Maybe we can do something like the
> following in RCU too?
If this fixes things for you, I would welcome such a patch.

							Thanx, Paul

> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index db85ca3..3c4c4d3 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -561,8 +561,14 @@ static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
>          }
>          t = list_entry(rnp->gp_tasks->prev,
>                         struct task_struct, rcu_node_entry);
> -        list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry)
> +        list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) {
> +                /*
> +                 * We could be printing a lot of these messages while
> +                 * holding a spinlock.  Avoid triggering hard lockup.
> +                 */
> +                touch_nmi_watchdog();
>                  sched_show_task(t);
> +        }
>          raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
>  }
>
> @@ -1678,6 +1684,12 @@ static void print_cpu_stall_info(struct rcu_state *rsp, int cpu)
>          char *ticks_title;
>          unsigned long ticks_value;
>  
> +        /*
> +         * We could be printing a lot of these messages while holding a
> +         * spinlock.  Avoid triggering hard lockup.
> +         */
> +        touch_nmi_watchdog();
> +
>          if (rsp->gpnum == rdp->gpnum) {
>                  ticks_title = "ticks this GP";
>                  ticks_value = rdp->ticks_this_gp;
>
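
For anyone skimming the thread, the pattern above boils down to the
self-contained kernel-style sketch below.  The names demo_lock,
demo_dump_tasks, tasks, and nr are hypothetical, invented purely for
illustration; touch_nmi_watchdog(), sched_show_task(), and the raw
spinlock primitives are the real kernel APIs the patch relies on.

#include <linux/nmi.h>          /* touch_nmi_watchdog() */
#include <linux/sched.h>        /* struct task_struct */
#include <linux/sched/debug.h>  /* sched_show_task() */
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_lock);  /* hypothetical lock, for illustration */

/*
 * Print backtraces for a batch of tasks while holding a raw spinlock
 * with interrupts disabled.  On a slow (e.g., 115200-baud) console the
 * loop can run for many seconds, so pet the NMI watchdog before each
 * potentially slow print to keep the hard-lockup detector from firing.
 */
static void demo_dump_tasks(struct task_struct **tasks, int nr)
{
        unsigned long flags;
        int i;

        raw_spin_lock_irqsave(&demo_lock, flags);
        for (i = 0; i < nr; i++) {
                touch_nmi_watchdog();
                sched_show_task(tasks[i]);
        }
        raw_spin_unlock_irqrestore(&demo_lock, flags);
}

One nice property of touch_nmi_watchdog() is that it also calls
touch_softlockup_watchdog(), so the same call keeps both the hard- and
soft-lockup detectors quiet during a long print loop.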