Message-ID: <alpine.DEB.2.20.1609080839050.5647@nanos>
Date: Thu, 8 Sep 2016 09:10:51 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Binoy Jayan <binoy.jayan@...aro.org>
cc: "Steven Rostedt (Red Hat)" <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Daniel Wagner <daniel.wagner@...-carit.de>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org,
Masami <masami.hiramatsu@...aro.org>
Subject: Re: [PATCH v6 4/4] tracing: Histogram for delayed hrtimer offsets
On Wed, 7 Sep 2016, Binoy Jayan wrote:
> diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
> index 5e00f80..9146842 100644
> --- a/include/linux/hrtimer.h
> +++ b/include/linux/hrtimer.h
> @@ -104,6 +104,9 @@ struct hrtimer {
> 	struct hrtimer_clock_base	*base;
> 	u8				state;
> 	u8				is_rel;
> +#ifdef CONFIG_DELAYED_TIMER_OFFSETS_HIST
> +	ktime_t				praecox;
> +#endif
Throwing a new struct member at some random place optimizes the struct
footprint, right?
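To make that concrete, a simplified standalone sketch (not the actual struct hrtimer layout; the neighbouring member is made up and the offsets assume a typical 64-bit build):

#include <stdint.h>
#include <stdio.h>

/* int64_t stands in for ktime_t, uint8_t for u8; start_pid is only an
 * illustrative neighbour that could otherwise pack into the hole. */
struct member_in_the_middle {
	void		*base;		/* offset  0 */
	uint8_t		state;		/* offset  8 */
	uint8_t		is_rel;		/* offset  9 */
	int64_t		praecox;	/* offset 16, 6 byte hole before it */
	int32_t		start_pid;	/* offset 24 */
};					/* sizeof == 32 */

struct member_at_the_end {
	void		*base;		/* offset  0 */
	uint8_t		state;		/* offset  8 */
	uint8_t		is_rel;		/* offset  9 */
	int32_t		start_pid;	/* offset 12, packs into the gap */
	int64_t		praecox;	/* offset 16 */
};					/* sizeof == 24 */

int main(void)
{
	printf("middle: %zu bytes, end: %zu bytes\n",
	       sizeof(struct member_in_the_middle),
	       sizeof(struct member_at_the_end));
	return 0;
}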
And of course documenting new struct members is optional, correct? I'm
really looking forward to the explanation of that variable name.
> +#ifdef CONFIG_DELAYED_TIMER_OFFSETS_HIST
> +TRACE_EVENT(latency_hrtimer_interrupt,
> +
> +	TP_PROTO(long long toffset, struct task_struct *task),
> +
> +	TP_ARGS(toffset, task),
> +
> +	TP_STRUCT__entry(
> +		__field(long long, toffset)
> +		__array(char, tcomm, TASK_COMM_LEN)
> +		__field(int, tprio)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->toffset = toffset;
> +		memcpy(__entry->tcomm, task != NULL ? task->comm : "<none>",
> +			task != NULL ? TASK_COMM_LEN : 7);
> +		__entry->tprio = task != NULL ? task->prio : -1;
> +	),
What's the value of storing prio? None, if the task is using the DL
scheduler.
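For reference, and paraphrasing the scheduler's priority bookkeeping from
memory: a SCHED_DEADLINE task runs with p->prio == MAX_DL_PRIO - 1 == -1,
which is exactly the sentinel the assignment above stores when there is no
task, so the two cases cannot even be told apart in the trace:

	/* Sketch of what the TP_fast_assign() above ends up recording:
	 * a deadline task and "no task" both show up as tprio == -1. */
	__entry->tprio = task != NULL ? task->prio : -1;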
> +#ifdef CONFIG_DELAYED_TIMER_OFFSETS_HIST
> +static inline void latency_hrtimer_timing_start(struct hrtimer *timer,
> +					struct hrtimer_clock_base *new_base,
> +					ktime_t tim)
> +{
> +	if (unlikely(trace_latency_hrtimer_interrupt_enabled())) {
> +		ktime_t now = new_base->get_time();
> +
> +		if (ktime_to_ns(tim) < ktime_to_ns(now))
> +			timer->praecox = now;
> +		else
> +			timer->praecox = ktime_set(0, 0);
What's the whole point of this? You're adding an extra get_time() call into
that path. What for? Comments in the code are overrated, right?
> +	}
> +}
> +
> +static inline void latency_hrtimer_timing_stop(struct hrtimer *timer,
> +					ktime_t basenow)
> +{
> +	long latency;
> +	struct task_struct *task;
> +
> +	if (likely(!trace_latency_hrtimer_interrupt_enabled()))
> +		return;
> +
> +	latency = ktime_to_ns(ktime_sub(basenow,
> +			      ktime_to_ns(timer->praecox) ?
> +			      timer->praecox : hrtimer_get_expires(timer)));
> +	task = timer->function == hrtimer_wakeup ?
> +			container_of(timer, struct hrtimer_sleeper,
> +				     timer)->task : NULL;
This is a completely horrible hack. You're tying the task evaluation to a
single instance of hrtimer users. What's the justification for this, and why
do you need the task at all?
> +	if (latency > 0)
> +		trace_latency_hrtimer_interrupt((u64) latency, task);
And how would latency ever become < 0? The hrtimer interrupt guarantees
never to expire timers prematurely.
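Paraphrasing the expiry loop in __hrtimer_run_queues() from memory (names
may differ slightly between kernel versions), a timer is only handed to
__run_hrtimer() once basenow has reached its (soft) expiry time, i.e. the
expiry is never premature:

	while ((node = timerqueue_getnext(&base->active))) {
		timer = container_of(node, struct hrtimer, node);

		/* The queue is expiry ordered: stop at the first timer
		 * that has not expired yet, so expires <= basenow holds
		 * for everything that gets run. */
		if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
			break;

		__run_hrtimer(cpu_base, base, timer, &basenow);
	}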
Neither the changelog nor the code contains any information about how this
is meant to be used or what the actual value of the stored information is.
There is no way that this ad hoc statistics hackery, which we carry in RT
for a well-known reason, is going to go upstream without proper
justification and well thought out, documented functionality.
Thanks,
tglx