Message-ID: <54caed52-2859-df94-1e3b-223397d42602@arm.com>
Date: Fri, 4 Jan 2019 16:49:48 +0000
From: Marc Zyngier <marc.zyngier@....com>
To: Pavel Tatashin <pasha.tatashin@...een.com>
Cc: catalin.marinas@....com, Will Deacon <will.deacon@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
rppt@...ux.vnet.ibm.com, Michal Hocko <mhocko@...e.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
andrew.murray@....com, james.morse@....com, sboyd@...nel.org,
linux-arm-kernel@...ts.infradead.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 3/3] arm64: Early boot time stamps
On 04/01/2019 16:23, Pavel Tatashin wrote:
Hi Pavel,
>>> We could limit the arm64 approach to chips where cntvct_el0 is
>>> known to work: i.e. the frequency is known and the clock is stable,
>>> meaning it cannot go backward. Perhaps we would start the early
>>> clock a little later, but at least it would be available for the
>>> sane chips. The only question is where during boot this is known.
>>
>> How do you propose we do that? Defective timers can be a property of
>> the implementation, of the integration, or both. In any case, it
>> requires firmware support (DT, ACPI). All that is only available quite
>> late, and moving it earlier is not easily doable.
>
> OK, but could we at least whitelist something early, with the
> expectation that future chips won't be bogus?
Just as I wish we had universal world peace. Timer integration is
probably the most broken thing in the whole ARM ecosystem (clock
domains, Gray code and general incompetence do get in the way). And as I
said above, detecting a broken implementation usually relies on some
firmware indication, which is only available at a later time (and I'm
trying really hard to keep the errata handling in the timer code).
>>> Another approach is to modify sched_clock() in
>>> kernel/time/sched_clock.c to never return backward value during boot.
>>>
>>> 1. Rename current implementation of sched_clock() to sched_clock_raw()
>>> 2. New sched_clock() would look like this:
>>>
>>> u64 sched_clock(void)
>>> {
>>> 	if (static_branch_unlikely(&early_unstable_clock))
>>> 		return sched_clock_unstable();
>>> 	else
>>> 		return sched_clock_raw();
>>> }
>>>
>>> 3. sched_clock_unstable() would look like this:
>>>
>>> u64 sched_clock_unstable(void)
>>> {
>>> 	static u64 old_clock;
>>> 	u64 new_clock, old_clock_read;
>>>
>>> again:
>>> 	new_clock = sched_clock_raw();
>>> 	old_clock_read = READ_ONCE(old_clock);
>>>
>>> 	/* It is ok if time does not progress, but don't allow it to go backward */
>>> 	if (new_clock < old_clock_read)
>>> 		return old_clock_read;
>>>
>>> 	/* update the old_clock value */
>>> 	if (cmpxchg64(&old_clock, old_clock_read, new_clock) != old_clock_read)
>>> 		goto again;
>>>
>>> 	return new_clock;
>>> }
>>
>> You now have an "unstable" clock that is only allowed to move forward,
>> until you switch to the real one. And at handover time, anything can
>> happen.
>>
>> It is one thing to allow for the time stamping to be imprecise. But
>> imposing the same behaviour on other parts of the kernel that have so
>> far relied on a strictly monotonic sched_clock feels like a bad idea.
>
> sched_clock() will still be strictly monotonic. During switch over we
> will guarantee to continue from where the early clock left off.
Not quite. There is at least one broken integration that results in
large, spurious jumps ahead. If one of these jumps happens during the
"unstable" phase, we'll only return old_clock. At some point, we switch
early_unstable_clock to be false, as we've now properly initialized the
timer and found the appropriate workaround. We'll now return a much
smaller value. sched_clock continuity doesn't seem to apply here, as
you're not registering a new sched_clock (or at least that's not how I
understand your code above).
>> What I'm proposing is that we allow architectures to override the hard
>> tie between local_clock/sched_clock and kernel log time stamping, with
>> the default being of course what we have today. This gives a clean
>> separation between the two when the architecture needs to delay the
>> availability of sched_clock until implementation requirements are
discovered. It also keeps sched_clock simple and efficient.
>>
>> To illustrate what I'm trying to argue for, I've pushed out a couple
>> of proof of concept patches here[1]. I've briefly tested them in a
>> guest, and things seem to work OK.
>
> What I am worried about is that decoupling time stamps from
> sched_clock() will cause uptime and other commands that show boot time
> not to correlate with timestamps in dmesg with these changes. For them
> to correlate we would still have to have a switch back to
> local_clock() in timestamp_clock() after we are done with early boot,
> which brings us back to using a temporarily unstable clock that I
> proposed above but without adding an architectural hook for it. Again,
> we would need to solve the problem of time continuity during switch
> over, which is not a hard problem to solve, as we do it already in
> sched_clock.c, and every time the clocksource changes.
>
> During the early boot time stamps project for x86 we were extra careful to
> make sure that they stay the same.
I can see two ways to achieve this requirement:
- we allow timestamp_clock to fall back to sched_clock once it becomes
non-zero. It has the drawback of resetting the time stamping in the
middle of the boot, which isn't great.
- we allow sched_clock to inherit the timestamp_clock value instead of
starting at zero like it does now. Not sure if that breaks anything, but
that's worth trying (it should be a matter of setting new_epoch to zero
in sched_clock_register).
Thanks,
M.
--
Jazz is not dead. It just smells funny...