Message-ID: <ZugkDxmBGTJwjFXb@google.com>
Date: Mon, 16 Sep 2024 13:26:55 +0100
From: Vincent Donnefort <vdonnefort@...gle.com>
To: John Stultz <jstultz@...gle.com>
Cc: rostedt@...dmis.org, mhiramat@...nel.org,
linux-trace-kernel@...r.kernel.org, maz@...nel.org,
oliver.upton@...ux.dev, kvmarm@...ts.linux.dev, will@...nel.org,
qperret@...gle.com, kernel-team@...roid.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 06/13] KVM: arm64: Add clock support in the nVHE hyp
[...]
> > +static struct clock_data {
> > + struct {
> > + u32 mult;
> > + u32 shift;
> > + u64 epoch_ns;
> > + u64 epoch_cyc;
> > + } data[2];
> > + u64 cur;
> > +} trace_clock_data;
> > +
> > +/* Does not guarantee no reader on the modified bank. */
> > +void trace_clock_update(u32 mult, u32 shift, u64 epoch_ns, u64 epoch_cyc)
> > +{
> > + struct clock_data *clock = &trace_clock_data;
> > + u64 bank = clock->cur ^ 1;
> > +
> > + clock->data[bank].mult = mult;
> > + clock->data[bank].shift = shift;
> > + clock->data[bank].epoch_ns = epoch_ns;
> > + clock->data[bank].epoch_cyc = epoch_cyc;
> > +
> > + smp_store_release(&clock->cur, bank);
> > +}
>
> Can't see from the context in this patch how it's called, but with
> timekeeping there can be multiple updaters (settimeofday, timer tick,
> etc).
> So would it be smart to have some serialization here to ensure you
> don't get parallel updates?
Yeah, it is serialized later in the series by the trace_rb_lock spinlock.
>
> > +
> > +/* Using host provided data. Do not use for anything else than debugging. */
> > +u64 trace_clock(void)
> > +{
> > + struct clock_data *clock = &trace_clock_data;
> > + u64 bank = smp_load_acquire(&clock->cur);
> > + u64 cyc, ns;
> > +
> > + cyc = __arch_counter_get_cntpct() - clock->data[bank].epoch_cyc;
> > +
> > + ns = cyc * clock->data[bank].mult;
> > + ns >>= clock->data[bank].shift;
> > +
> > + return (u64)ns + clock->data[bank].epoch_ns;
> > +}
>
> You might want some overflow protection on the mult? See the
> max_cycles value we use in timekeeping_cycles_to_ns()
In the RFC, I was doing a 128-bit mult. Now that I have __hyp_clock_work() in
the kernel keeping the epoch up to date, I do not expect this to ever
overflow. But I could combine both approaches and fall back to the slower
128-bit mult in case the 64-bit one would overflow.
>
> -john