Message-ID: <20200520162400.GA8800@lenoir>
Date: Wed, 20 May 2020 18:24:02 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Juri Lelli <juri.lelli@...hat.com>
Cc: fweisbec@...il.com, tglx@...utronix.de, mingo@...nel.org,
linux-rt-users@...r.kernel.org, peterz@...radead.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] tick/sched: update full_nohz status after SCHED dep
is cleared
Hi Juri,
On Wed, May 20, 2020 at 04:04:02PM +0200, Juri Lelli wrote:
> After tasks enter or leave a runqueue (wakeup/block) SCHED full_nohz
> dependency is checked (via sched_update_tick_dependency()). In case tick
> can be stopped on a CPU (see sched_can_stop_tick() for details), SCHED
> dependency for such CPU is cleared. However, this new information is not
> used right away to actually stop the tick.
>
> In CONFIG_PREEMPT systems booted with threadirqs option, sched clock
> tick is serviced by an actual task (ksoftirqd corresponding to the CPU
> where tick timer fired).
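(For reference, the dependency update described above boils down to
roughly the following; this is a paraphrase from memory, not the exact
mainline source:)

	static void sched_update_tick_dependency(struct rq *rq)
	{
		int cpu = cpu_of(rq);

		if (!tick_nohz_full_cpu(cpu))
			return;

		/*
		 * Called from enqueue/dequeue: this only flips the SCHED
		 * dependency bit, it does not try to stop the tick here.
		 */
		if (sched_can_stop_tick(rq))
			tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
		else
			tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
	}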
I must confess I haven't tested threaded IRQs, but I'm pretty sure the
timer tick itself is always serviced in hardirq context. The timer list
callbacks, on the other hand, are executed from the softirq. So if the
tick, running in hardirq, sees pending timers, it raises the timer
softirq, which in forced irq thread mode wakes up ksoftirqd from
irq_exit(). tick_nohz_irq_exit() then sees both ksoftirqd and the
current task on the runqueue, and together they can indeed prevent the
tick from being turned off. But then the root cause is the pending
timer list callbacks.
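To illustrate that last point, the scheduler-side check is roughly the
following (paraphrased from kernel/sched/core.c, with the RT/deadline
details stripped): once ksoftirqd is runnable next to sysjitter,
nr_running is 2 and the SCHED dependency stays set.

	bool sched_can_stop_tick(struct rq *rq)
	{
		/* RT and deadline specific checks omitted here */

		/*
		 * With ksoftirqd woken up next to the user task we have
		 * nr_running == 2, so the tick has to stay around for
		 * involuntary preemption between them.
		 */
		if (rq->nr_running > 1)
			return false;

		return true;
	}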
> So, in case a CPU was running a single task,
> servicing the timer involves exiting full nohz mode. Problem at this
> point is that we might lose chances to enter back into full nohz mode,
> since info about ksoftirqd thread going back to sleep is not used (as
> mentioned above).
It should re-enter nohz_full mode on the next tick, which is usually a
reasonable delay. If you need a guarantee that the tick is stopped before
resuming userspace, you need stronger machinery such as the task isolation
patchset.
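The decision itself is simply re-evaluated on interrupt exit, which is
why the next tick is enough. Roughly (paraphrased from
kernel/time/tick-sched.c, with a few early-return checks omitted):

	/* Called from irq_exit() -> tick_nohz_irq_exit() on nohz_full CPUs */
	static void tick_nohz_full_update_tick(struct tick_sched *ts)
	{
		int cpu = smp_processor_id();

		if (!tick_nohz_full_cpu(cpu))
			return;

		/*
		 * If all dependencies (SCHED included) are clear by the time
		 * the next tick exits, the tick gets stopped again, otherwise
		 * it is restarted or kept running.
		 */
		if (can_stop_full_tick(cpu, ts))
			tick_nohz_stop_sched_tick(ts, cpu);
		else if (ts->tick_stopped)
			tick_nohz_restart_sched_tick(ts, ktime_get());
	}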
Let's have a look at the trace below:
> ksoftirqd/19-125 [019] 170.700754: softirq_entry: vec=1 [action=TIMER]
> ksoftirqd/19-125 [019] 170.700755: softirq_exit: vec=1 [action=TIMER]
> ksoftirqd/19-125 [019] 170.700756: softirq_entry: vec=7 [action=SCHED]
> ksoftirqd/19-125 [019] 170.700757: softirq_exit: vec=7 [action=SCHED]
> ksoftirqd/19-125 [019] 170.700759: sched_switch: ksoftirqd/19:125 [120] S ==> sysjitter:2459 [120]
> sysjitter-2459 [019] 170.701740: local_timer_entry: vector=236
> sysjitter-2459 [019] 170.701742: softirq_raise: vec=1 [action=TIMER]
Here the tick sees pending timer callbacks, so it raises the timer softirq.
> sysjitter-2459 [019] 170.701743: softirq_raise: vec=7 [action=SCHED]
Oh and the scheduler tick activates the scheduler softirq as well.
> sysjitter-2459 [019] 170.701744: local_timer_exit: vector=236
> sysjitter-2459 [019] 170.701747: sched_wakeup: ksoftirqd/19:125 [120] success=1 CPU:019
> sysjitter-2459 [019] 170.701748: tick_stop: success=0 dependency=SCHED
> sysjitter-2459 [019] 170.701749: irq_work_entry: vector=246
> sysjitter-2459 [019] 170.701750: irq_work_exit: vector=246
> sysjitter-2459 [019] 170.701751: tick_stop: success=0 dependency=SCHED
> sysjitter-2459 [019] 170.701753: sched_switch: sysjitter:2459 [120] R ==> ksoftirqd/19:125 [120]
> ksoftirqd/19-125 [019] 170.701754: softirq_entry: vec=1 [action=TIMER]
> ksoftirqd/19-125 [019] 170.701756: softirq_exit: vec=1 [action=TIMER]
> ksoftirqd/19-125 [019] 170.701756: softirq_entry: vec=7 [action=SCHED]
> ksoftirqd/19-125 [019] 170.701758: softirq_exit: vec=7 [action=SCHED]
> ksoftirqd/19-125 [019] 170.701759: sched_switch: ksoftirqd/19:125 [120] S ==> sysjitter:2459 [120]
> sysjitter-2459 [019] 170.702740: local_timer_entry: vector=236
> sysjitter-2459 [019] 170.702742: softirq_raise: vec=1 [action=TIMER]
A new tick, but there is still a pending timer callback, so ksoftirqd
needs to be woken up again.
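And the hardirq side of the tick only raises TIMER_SOFTIRQ when the
timer wheel actually has expired work, so these repeated ksoftirqd
wakeups really point at a timer that keeps firing on that CPU. Roughly
(paraphrased from kernel/time/timer.c):

	/* Called from the tick, in hardirq context (update_process_times()) */
	void run_local_timers(void)
	{
		struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);

		hrtimer_run_queues();

		/* Raise the softirq only if a timer list timer has expired */
		if (time_before(jiffies, base->clk)) {
			if (!IS_ENABLED(CONFIG_NO_HZ_COMMON))
				return;
			/* CPU is awake, so check the deferrable base too */
			base++;
			if (time_before(jiffies, base->clk))
				return;
		}
		raise_softirq(TIMER_SOFTIRQ);
	}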
I think you should trace the timers and check which one is involved here.
Thanks.