Message-ID: <81dd7e5e-89be-2ff9-525e-7095e934baa5@linaro.org>
Date: Thu, 10 Aug 2017 11:45:09 +0200
From: Daniel Lezcano <daniel.lezcano@...aro.org>
To: paulmck@...ux.vnet.ibm.com
Cc: Pratyush Anand <panand@...hat.com>,
김동현 <austinkernel.kim@...il.com>,
john.stultz@...aro.org, Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org
Subject: Re: RCU stall when using function_graph
On 09/08/2017 19:22, Paul E. McKenney wrote:
> On Wed, Aug 09, 2017 at 05:51:33PM +0200, Daniel Lezcano wrote:
>> On 09/08/2017 16:40, Paul E. McKenney wrote:
>>> On Wed, Aug 09, 2017 at 03:28:05PM +0200, Daniel Lezcano wrote:
>>>> On 09/08/2017 14:58, Paul E. McKenney wrote:
>>>>> On Wed, Aug 09, 2017 at 02:43:49PM +0530, Pratyush Anand wrote:
>>>>>>
>>>>>>
>>>>>> On Sunday 06 August 2017 10:32 PM, Paul E. McKenney wrote:
>>>>>>> On Sat, Aug 05, 2017 at 02:24:21PM +0900, 김동현 wrote:
>>>>>>>> Dear All
>>>>>>>>
>>>>>>>> As for me, after configuring function_graph as below, the crash disappears.
>>>>>>>> "echo 0 > d/tracing/tracing_on"
>>>>>>>> "sleep 1"
>>>>>>>>
>>>>>>>> "echo function_graph > d/tracing/current_tracer"
>>>>>>>> "sleep 1"
>>>>>>>>
>>>>>>>> "echo smp_call_function_single > d/tracing/set_ftrace_filter"
>>>>>>
>>>>>> It will limit the trace output to only the filtered function
>>>>>> (smp_call_function_single).
>>>>>>
>>>>>>>> adb shell "sleep 1"
>>>>>>>>
>>>>>>>> "echo 1 > d/tracing/tracing_on"
>>>>>>>> adb shell "sleep 1"
>>>>>>>>
>>>>>>>> Right after function_graph is enabled, too many events are traced on each
>>>>>>>> IRQ, which in many cases eventually causes a stall.
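For reference, the equivalent sequence on a non-Android system, assuming the
usual debugfs mount point (the d/tracing prefix above is the adb debugfs
shorthand) and with the same sleeps between steps, would look something like:

	echo 0 > /sys/kernel/debug/tracing/tracing_on
	echo function_graph > /sys/kernel/debug/tracing/current_tracer
	echo smp_call_function_single > /sys/kernel/debug/tracing/set_ftrace_filter
	echo 1 > /sys/kernel/debug/tracing/tracing_on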
>>>>>>>
>>>>>>> That would do it!
>>>>>>>
>>>>>>> Hmmm...
>>>>>>>
>>>>>>> Steven, would it be helpful if RCU were to inform tracing (say) halfway
>>>>>>> through the RCU CPU stall interval, allowing the tracer to do something
>>>>>>> like cond_resched_rcu_qs()? I can imagine all sorts of reasons why this
>>>>>>> wouldn't work, for example, if all the tracing was with irqs disabled
>>>>>>> or some such, but figured I should ask.
>>>>>>>
>>>>>>> Does Guillermo's approach work for others?
>>>>>>
>>>>>> Limited output with a couple of filtered functions will definitely
>>>>>> not cause an RCU stall. But the question is whether we should
>>>>>> expect a full function_graph trace to work on every platform or not
>>>>>> (especially one that generates interrupts at a high rate)?
>>>>>
>>>>> It might well be that the user must disable RCU CPU stall warnings via
>>>>> the rcu_cpu_stall_suppress sysfs entry (or increase their timeout via the
>>>>> rcu_cpu_stall_timeout sysfs entry) before doing something that greatly
>>>>> increases overhead. Like enabling large quantities of tracing. ;-)
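(For reference, those are module parameters, so on a typical build they can
be tweaked at run time with something like:

	# suppress RCU CPU stall warnings entirely
	echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress

	# or raise the stall timeout to, say, 60 seconds
	echo 60 > /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout

or set at boot time with rcupdate.rcu_cpu_stall_suppress=1 or
rcupdate.rcu_cpu_stall_timeout=60; the exact paths may differ depending on
the kernel version and configuration.)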
>>>>>
>>>>> It -might- be possible to do this automatically, but reliable
>>>>> automation would require that tracing understand how often each
>>>>> function was called, which sounds to me to be a bit of a stretch.
>>>>>
>>>>> Thoughts?
>>>>
>>>> A random thought:
>>>>
>>>> Would it be possible to have a mid-timeout event that stores some
>>>> information, like the instruction pointer, so that when the timeout
>>>> happens we can check whether there was any progress? If there was,
>>>> then very likely system performance collapsed and we are simply not
>>>> fast enough.
>>>
>>> RCU already does take various actions for an impending stall, so something
>>> could be done. But in most slowdowns, the instruction pointer will be
>>> changing rapidly, just not as rapidly as it would normally. So exactly
>>> how would the forward-progress comparison be carried out?
>>>
>>> It would be easy to set up a notifier, so that if any notifier in the
>>> chain returned an error, stall warnings would be suppressed. It would
>>> be harder to figure out when to re-enable them, though I suppose that
>>> they could be suppressed only for the duration of the current grace
>>> period or some such.
>>>
>>> But what exactly would you use such a notifier for?
>>>
>>> Or am I misunderstanding your suggestion?
>>
>> Well, maybe the instruction pointer thing is not a good idea.
>>
>> I learnt from this experience that an overloaded kernel with a lot of
>> interrupts can hang the console and trigger an RCU stall.
>>
>> However, someone else may face the same situation, and even after reading
>> the RCU/stallwarn.txt documentation it will be hard to figure out the issue.
>>
>> A message saying that the grace period can't complete because the CPU is
>> too busy processing interrupts would have helped, but I understand it is
>> not easy to implement.
>>
>> Perhaps adding a new bullet to the documentation could help:
>>
>> "If the interrupt processing time is longer than the interval between
>> successive interrupts, the CPU will keep processing interrupts without
>> ever letting RCU's grace-period kthread run. This situation can happen
>> when the interrupt rate is high and the function_graph tracer is
>> enabled".
>
> How about this?
Yes, it is clear. Thanks for reformulating it.
> Any other debugging options that should be called out? I bet that
> the function_graph tracer isn't the only way to make this happen.
Nothing comes to mind, but it may be worth mentioning that the slowness of
the CPU is an aggravating factor. In particular, I was able to reproduce the
issue by setting the CPU to its minimum frequency. With the ondemand
governor, the frequency can be high (hence enough CPU power) at the moment
we enable function_graph, because another CPU is loaded (and both CPUs
share the same clock line). The system became stuck the moment the other
CPU went idle at the lowest frequency. That introduced randomness into the
issue and made it hard to figure out why the RCU stall was happening.
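One way to force that worst case deterministically (a rough sketch, assuming
the standard cpufreq sysfs layout and that the userspace governor is
available) is to pin the policy to its minimum frequency before enabling the
tracer:

	cd /sys/devices/system/cpu/cpu0/cpufreq
	echo userspace > scaling_governor
	cat cpuinfo_min_freq > scaling_setspeed

With the default governor, writing the value of cpuinfo_min_freq into
scaling_max_freq should have a similar effect.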
-- Daniel
> ------------------------------------------------------------------------
>
> commit 8b12d9919f59fe6855429e2eacc1c2e03ecdfe96
> Author: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Date: Wed Aug 9 10:16:29 2017 -0700
>
> documentation: Long-running irq handlers can stall RCU grace periods
>
> If a periodic interrupt's handler takes longer to execute than the period
> between successive interrupts, RCU's kthreads and softirq handlers can
> be prevented from executing, resulting in otherwise inexplicable RCU
> CPU stall warnings. This commit therefore calls out this possibility
> in Documentation/RCU/stallwarn.txt.
>
> Reported-by: Daniel Lezcano <daniel.lezcano@...aro.org>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
> diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
> index 96a3d81837e1..21b8913acbdf 100644
> --- a/Documentation/RCU/stallwarn.txt
> +++ b/Documentation/RCU/stallwarn.txt
> @@ -40,7 +40,9 @@ o Booting Linux using a console connection that is too slow to
> o Anything that prevents RCU's grace-period kthreads from running.
> This can result in the "All QSes seen" console-log message.
> This message will include information on when the kthread last
> - ran and how often it should be expected to run.
> + ran and how often it should be expected to run. It can also
> + result in the "rcu_.*kthread starved for" console-log message,
> + which will include additional debugging information.
>
> o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
> happen to preempt a low-priority task in the middle of an RCU
> @@ -60,6 +62,14 @@ o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
> CONFIG_PREEMPT_RCU case, you might see stall-warning
> messages.
>
> +o A periodic interrupt whose handler takes longer than the time
> + interval between successive pairs of interrupts. This can
> + prevent RCU's kthreads and softirq handlers from running.
> + Note that certain high-overhead debugging options, for example
> + the function_graph tracer, can result in interrupt handler taking
> + considerably longer than normal, which can in turn result in
> + RCU CPU stall warnings.
> +
> o A hardware or software issue shuts off the scheduler-clock
> interrupt on a CPU that is not in dyntick-idle mode. This
> problem really has happened, and seems to be most likely to
>
--
<http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog