Message-ID: <20141112135455.GA6895@lerouge>
Date: Wed, 12 Nov 2014 14:54:58 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Christoph Lameter <cl@...ux.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Gilad Ben-Yossef <gilad@...yossef.com>,
Tejun Heo <tj@...nel.org>,
John Stultz <john.stultz@...aro.org>,
Mike Frysinger <vapier@...too.org>,
Minchan Kim <minchan.kim@...il.com>,
Hakan Akkan <hakanakkan@...il.com>,
Max Krasnyansky <maxk@....qualcomm.com>,
Hugh Dickins <hughd@...gle.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Kevin Hilman <khilman@...aro.org>
Subject: Re: Future of NOHZ full/isolation development (was Re: [NOHZ] Remove
scheduler_tick_max_deferment)
On Wed, Nov 12, 2014 at 11:41:09AM +0530, Viresh Kumar wrote:
> On 11 November 2014 22:45, Frederic Weisbecker <fweisbec@...il.com> wrote:
>
> > Here is a summarized list:
> >
> > * Unbound workqueues affinity (to housekeeper)
> > * Unbound timers affinity (to housekeeper)
> > * 1 Hz residual scheduler tick offlining to housekeeper
> > * Fix some scheduler accounting that doesn't even work with 1 Hz: cpu load
> >   accounting, rt_scale, load balancing, etc...
> > * Lighten the syscall path and get rid of cputime accounting + RCU hooks
> > for people who want isolation + fast syscalls and faults.
> > * Work on non-affinable workqueues
> > * Work on non-affinable timers
> > * ...
>
> + spurious interrupts with NOHZ_FULL on all architectures, which break isolation
> but don't get caught by traces. They can be observed with this:
>
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 481fa54..91d490d 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -1244,7 +1244,8 @@ void hrtimer_interrupt(struct clock_event_device *dev)
>  {
>  	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
>  	ktime_t expires_next, now, entry_time, delta;
> -	int i, retries = 0;
> +	int i, retries = 0, count = 0;
> +	static int total_spurious;
>  
>  	BUG_ON(!cpu_base->hres_active);
>  	cpu_base->nr_events++;
> @@ -1304,10 +1305,14 @@ void hrtimer_interrupt(struct clock_event_device *dev)
>  				break;
>  			}
>  
> +			count++;
>  			__run_hrtimer(timer, &basenow);
>  		}
>  	}
>  
> +	if (!count)
> +		pr_err("____%s: Totalspurious: %d\n", __func__, ++total_spurious);
> +
I'd rather leave that to tracepoints, like trace_hrtimer_spurious().
Or better yet: add trace_hrtimer_interrupt(), which we can compare against
trace_hrtimer_expire_entry/exit() to check whether any hrtimer callback has run
in the interrupt. This way we avoid workarounds like the above count.
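
A rough, untested sketch of what such an event could look like (the name
trace_hrtimer_interrupt and its single field are just assumptions here;
trace_hrtimer_expire_entry/exit() already exist in include/trace/events/timer.h):

/* include/trace/events/timer.h -- hypothetical new event, illustration only */
TRACE_EVENT(hrtimer_interrupt,

	TP_PROTO(ktime_t now),

	TP_ARGS(now),

	TP_STRUCT__entry(
		__field(s64, now)
	),

	TP_fast_assign(
		__entry->now = now.tv64;
	),

	TP_printk("now=%llu", (unsigned long long)__entry->now)
);

/* kernel/time/hrtimer.c: emit it once per hard interrupt, e.g. right after */
/* the base update in hrtimer_interrupt():                                  */
	now = hrtimer_update_base(cpu_base);
	trace_hrtimer_interrupt(now);

An hrtimer_interrupt event with no expire_entry/exit pair before the next one
would then flag the interrupt as spurious, with no counter in the fast path.
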
> /*
> * Store the new expiry value so the migration code can verify
> * against it.