Message-ID: <CALAqxLUv9Rd_eA7u9gZNFvTNrT1Z67RpHLozdeNC0W2yZm=Heg@mail.gmail.com>
Date: Mon, 17 Sep 2018 13:12:04 -0700
From: John Stultz <john.stultz@...aro.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Matt Rickard <matt@...trans.com.au>,
Stephen Boyd <sboyd@...nel.org>,
Florian Weimer <fweimer@...hat.com>,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
devel@...uxdriverproject.org,
Linux Virtualization <virtualization@...ts.linux-foundation.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Arnd Bergmann <arnd@...db.de>, Juergen Gross <jgross@...e.com>
Subject: Re: [patch 09/11] x86/vdso: Simplify the invalid vclock case
On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto@...nel.org> wrote:
> On Fri, Sep 14, 2018 at 5:50 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
>> The code flow for the vclocks is convoluted: the vclocks can be
>> invalidated separately from the vsyscall_gtod_data sequence, so that
>> fact has to be stored in a separate variable. That's inefficient.
>>
>
>> notrace static int do_hres(clockid_t clk, struct timespec *ts)
>> {
>>         struct vgtod_ts *base = &gtod->basetime[clk];
>>         unsigned int seq;
>> -       int mode;
>> -       u64 ns;
>> +       u64 cycles, ns;
>>
>>         do {
>>                 seq = gtod_read_begin(gtod);
>> -               mode = gtod->vclock_mode;
>>                 ts->tv_sec = base->sec;
>>                 ns = base->nsec;
>> -               ns += vgetsns(&mode);
>> +               cycles = vgetcyc(gtod->vclock_mode);
>> +               if (unlikely((s64)cycles < 0))
>> +                       return vdso_fallback_gettime(clk, ts);
>
> I was contemplating this, and I would suggest one of two optimizations:
>
> 1. Have all the helpers return a struct containing a u64 and a bool,
> and use that bool. The compiler should do okay.
>
> 2. Be sneaky. Later in the series, you do:
>
>         if (unlikely((s64)cycles < 0))
>                 return vdso_fallback_gettime(clk, ts);
> -       ns += (cycles - gtod->cycle_last) * gtod->mult;
> +       if (cycles > last)
> +               ns += (cycles - last) * gtod->mult;
>
> How about:
>
> if (unlikely((s64)cycles <= last)) {
>         if ((s64)cycles < 0) [or cycles == -1 or whatever]
>                 return vdso_fallback_gettime(clk, ts);
> } else {
>         ns += (cycles - last) * gtod->mult;
> }
>
> which should actually make this whole mess essentially free.
>
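For (1), I'd guess the helpers end up returning something like the
below. This is just a rough, untested sketch -- the struct name is made
up, and the real helpers would still sit next to the existing vread_*
code:

struct vgetcyc_ret {
        u64     cycles;
        bool    valid;
};

static notrace struct vgetcyc_ret vgetcyc(int mode)
{
        struct vgetcyc_ret r = { .valid = true };

        switch (mode) {
        case VCLOCK_TSC:
                r.cycles = (u64)rdtsc_ordered();
                break;
        /* ... other vclock modes ... */
        default:
                /* VCLOCK_NONE or an invalidated vclock */
                r.valid = false;
                break;
        }
        return r;
}

and then do_hres() would test the bool instead of the sign bit:

        struct vgetcyc_ret r = vgetcyc(gtod->vclock_mode);

        if (unlikely(!r.valid))
                return vdso_fallback_gettime(clk, ts);
        ns += (r.cycles - gtod->cycle_last) * gtod->mult;
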
> Also, I'm not entirely convinced that this "last" thing is needed at
> all. John, what's the scenario under which we need it?
So my memory is probably a bit foggy, but I recall that as we
accelerated gettimeofday, we found that even on systems that claimed
to have synced TSCs, they were actually just slightly out of sync.
Enough that, right after cycles_last had been updated, a read on
another CPU could come in just behind cycles_last, resulting in a
negative interval that caused lots of havoc.

So the sanity check is needed to avoid that case.
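In other words, since cycles and cycle_last are both u64, a read that
comes in even a few cycles behind cycle_last doesn't produce a small
negative delta -- the subtraction wraps and ns jumps a long way into
the future. That's what the check later in the series guards against,
roughly:

        u64 last = gtod->cycle_last;

        /*
         * cycles can legitimately be a hair behind cycle_last when
         * the TSCs aren't perfectly synced; treat that as a zero
         * interval instead of letting the u64 subtraction wrap.
         */
        if (cycles > last)
                ns += (cycles - last) * gtod->mult;
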
thanks
-john