Message-ID: <alpine.DEB.2.20.1611210853290.3610@nanos>
Date: Mon, 21 Nov 2016 12:46:24 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: joelaf <joelaf@...gle.com>
cc: linux-kernel@...r.kernel.org, John Stultz <john.stultz@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [RFC] timekeeping: Use cached readouts for monotonic and raw
clocks in suspend
On Sun, 20 Nov 2016, joelaf wrote:
> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> index 37dec7e..41afa1e 100644
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -55,6 +55,12 @@ static struct timekeeper shadow_timekeeper;
> */
> struct tk_fast {
> seqcount_t seq;
> +
> + /*
> + * first dimension is based on lower seq bit,
> + * second dimension is for offset type (real, boot, tai)
> + */
> + ktime_t offsets[2][3];
s/3/TK_OFFSET_MAX/ ?
> struct tk_read_base base[2];
The struct is cache-line optimized, which this change wrecks. If anything,
the offsets can go at the end of the struct, but definitely not at the
beginning: clock monotonic is the case we optimize for.
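For illustration, a sketch that folds in both suggestions (dimension the
array by the enum, keep the hot members first). It assumes the tk_offsets
enum from kernel/time/timekeeping.h, where the sentinel is spelled
TK_OFFS_MAX; the offsets member itself comes from the patch, not mainline:

struct tk_fast {
	seqcount_t		seq;
	/* hot path: base[] feeds the plain clock monotonic read */
	struct tk_read_base	base[2];
	/* cold data appended at the end, one slot per latch side */
	ktime_t			offsets[2][TK_OFFS_MAX];
};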
> /**
> @@ -392,16 +404,23 @@ static void update_fast_timekeeper(struct tk_read_base *tkr, struct tk_fast *tkf
> * of the following timestamps. Callers need to be aware of that and
> * deal with it.
> */
> -static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
> +static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf, int offset)
> {
> struct tk_read_base *tkr;
> unsigned int seq;
> u64 now;
> + ktime_t *off;
>
> do {
> seq = raw_read_seqcount_latch(&tkf->seq);
> tkr = tkf->base + (seq & 0x01);
> - now = ktime_to_ns(tkr->base);
> +
> + if (offset >= 0) {
This surely wants unlikely() around the condition (sketch after the hunk).
> + off = tkf->offsets[seq & 0x01];
> + now = ktime_to_ns(ktime_add(tkr->base, off[offset]));
> + } else {
> + now = ktime_to_ns(tkr->base);
> + }
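For reference, this is what the conditional would look like with the hint
applied, assuming the plain monotonic read (offset < 0) really is the hot
path here:

	if (unlikely(offset >= 0)) {
		/* cold path: fold in the requested real/boot/tai offset */
		off = tkf->offsets[seq & 0x01];
		now = ktime_to_ns(ktime_add(tkr->base, off[offset]));
	} else {
		/* hot path: plain clock monotonic */
		now = ktime_to_ns(tkr->base);
	}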
Thanks,
tglx