Message-ID: <87blam4iqe.ffs@nanos.tec.linutronix.de>
Date: Sat, 10 Apr 2021 11:00:25 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: paulmck@...nel.org, linux-kernel@...r.kernel.org
Cc: john.stultz@...aro.org, sboyd@...nel.org, corbet@....net,
Mark.Rutland@....com, maz@...nel.org, kernel-team@...com,
neeraju@...eaurora.org, ak@...ux.intel.com,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH v7 clocksource 3/5] clocksource: Check per-CPU clock synchronization when marked unstable
On Fri, Apr 02 2021 at 15:49, paulmck wrote:
>
> +static void clocksource_verify_percpu_wq(struct work_struct *unused)
> +{
> + int cpu;
> + struct clocksource *cs;
> + int64_t cs_nsec;
> + u64 csnow_begin;
> + u64 csnow_end;
> + u64 delta;
Please use reverse fir tree ordering and stick variables of the same
type together:
u64 csnow_begin, csnow_end, delta;
struct clocksource *cs;
s64 cs_nsec;
int cpu;
> +
> + cs = smp_load_acquire(&clocksource_verify_work_cs); // pairs with release
Please don't use tail comments. They are a horrible distraction.
> + if (WARN_ON_ONCE(!cs))
> + return;
> + pr_warn("Checking clocksource %s synchronization from CPU %d.\n",
> + cs->name, smp_processor_id());
> + cpumask_clear(&cpus_ahead);
> + cpumask_clear(&cpus_behind);
> + csnow_begin = cs->read(cs);
So this is invoked via work, and the actual clocksource change is done
via work too. Once the clocksource is no longer actively used for
timekeeping it can go away. What guarantees that this runs prior to
the clocksource change and that 'cs' stays valid throughout this function?
> + queue_work(system_highpri_wq, &clocksource_verify_work);
This does not guarantee anything. So why does this need an extra work
function which is scheduled separately?
Thanks,
tglx