Message-ID: <87sg3prsbt.ffs@nanos.tec.linutronix.de>
Date: Sat, 17 Apr 2021 14:47:18 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: linux-kernel@...r.kernel.org, john.stultz@...aro.org,
sboyd@...nel.org, corbet@....net, Mark.Rutland@....com,
maz@...nel.org, kernel-team@...com, neeraju@...eaurora.org,
ak@...ux.intel.com, "Paul E. McKenney" <paulmck@...nel.org>,
Chris Mason <clm@...com>
Subject: Re: [PATCH v8 clocksource 3/5] clocksource: Check per-CPU clock synchronization when marked unstable
On Tue, Apr 13 2021 at 21:36, Paul E. McKenney wrote:
Bah, hit send too quick.
> + cpumask_clear(&cpus_ahead);
> + cpumask_clear(&cpus_behind);
> + preempt_disable();
Daft.
> + testcpu = smp_processor_id();
> + pr_warn("Checking clocksource %s synchronization from CPU %d.\n", cs->name, testcpu);
> + for_each_online_cpu(cpu) {
> + if (cpu == testcpu)
> + continue;
> + csnow_begin = cs->read(cs);
> + smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
> + csnow_end = cs->read(cs);
As this must run with interrupts enabled, that's a pretty rough
approximation, like measuring wind speed with a wet thumb. The IPI round
trip plus whatever interrupts hit in between all end up inside the
measured window.
Wouldn't it be smarter to let the remote CPU do the watchdog dance and
take that result? I.e. split out more of the watchdog code so that the
nanoseconds delta between the clocksource and the watchdog can be
computed on the remote CPU itself.
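Something along the lines of the sketch below (untested;
clocksource_verify_remote() and the cvd struct are made up for
illustration, and serialization against the regular watchdog run is
ignored). The point is that both reads happen back to back on the remote
CPU in IRQ context, so the IPI latency drops out of the measurement:

	struct clocksource_verify_data {
		struct clocksource *cs;
		s64 cs_nsec;		/* Filled in by the remote CPU */
	};

	static void clocksource_verify_remote(void *data)
	{
		struct clocksource_verify_data *cvd = data;
		struct clocksource *cs = cvd->cs;
		u64 csnow, wdnow, cs_delta, wd_delta;

		/* Back to back reads in IRQ context on the remote CPU */
		csnow = cs->read(cs);
		wdnow = watchdog->read(watchdog);

		/* Deltas since the last regular watchdog check */
		cs_delta = clocksource_delta(csnow, cs->cs_last, cs->mask);
		wd_delta = clocksource_delta(wdnow, cs->wd_last, watchdog->mask);

		/* Drift of this clocksource versus the watchdog, in ns */
		cvd->cs_nsec = clocksource_cyc2ns(cs_delta, cs->mult, cs->shift) -
			       clocksource_cyc2ns(wd_delta, watchdog->mult,
						  watchdog->shift);
	}

and the per-CPU loop boils down to:

	smp_call_function_single(cpu, clocksource_verify_remote, &cvd, 1);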
> + delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
> + if (delta < 0)
> + cpumask_set_cpu(cpu, &cpus_behind);
> + delta = (csnow_end - csnow_mid) & cs->mask;
> + if (delta < 0)
> + cpumask_set_cpu(cpu, &cpus_ahead);
> + delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
> + cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
> + if (firsttime || cs_nsec > cs_nsec_max)
> + cs_nsec_max = cs_nsec;
> + if (firsttime || cs_nsec < cs_nsec_min)
> + cs_nsec_min = cs_nsec;
> + firsttime = 0;
s64 cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;
and then the firsttime muck is not needed at all.
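Seeded like that, the comparisons just work on the first iteration too.
A minimal sketch of the resulting loop body:

	s64 cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;

	for_each_online_cpu(cpu) {
		...
		cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
		if (cs_nsec > cs_nsec_max)
			cs_nsec_max = cs_nsec;
		if (cs_nsec < cs_nsec_min)
			cs_nsec_min = cs_nsec;
	}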
Thanks,
tglx