Message-ID: <20210411002020.GV4510@paulmck-ThinkPad-P17-Gen-1>
Date: Sat, 10 Apr 2021 17:20:20 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, john.stultz@...aro.org,
sboyd@...nel.org, corbet@....net, Mark.Rutland@....com,
maz@...nel.org, kernel-team@...com, neeraju@...eaurora.org,
ak@...ux.intel.com
Subject: Re: [PATCH v7 clocksource 3/5] clocksource: Check per-CPU clock
synchronization when marked unstable
On Sat, Apr 10, 2021 at 11:00:25AM +0200, Thomas Gleixner wrote:
> On Fri, Apr 02 2021 at 15:49, paulmck wrote:
> >
> > +static void clocksource_verify_percpu_wq(struct work_struct *unused)
> > +{
> > + int cpu;
> > + struct clocksource *cs;
> > + int64_t cs_nsec;
> > + u64 csnow_begin;
> > + u64 csnow_end;
> > + u64 delta;
>
> Please use reverse fir tree ordering and stick variables of the same
> type together:
>
> u64 csnow_begin, csnow_end, delta;
> struct clocksource *cs;
> s64 cs_nsec;
> int cpu;
Will do.
> > +
> > + cs = smp_load_acquire(&clocksource_verify_work_cs); // pairs with release
>
> Please don't use tail comments. They are a horrible distraction.
I will remove it.
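Perhaps something like this instead, assuming the store side uses
smp_store_release() (the exact comment wording is just a sketch):

	/* Pairs with the smp_store_release() that sets clocksource_verify_work_cs. */
	cs = smp_load_acquire(&clocksource_verify_work_cs);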
> > + if (WARN_ON_ONCE(!cs))
> > + return;
> > + pr_warn("Checking clocksource %s synchronization from CPU %d.\n",
> > + cs->name, smp_processor_id());
> > + cpumask_clear(&cpus_ahead);
> > + cpumask_clear(&cpus_behind);
> > + csnow_begin = cs->read(cs);
>
> So this is invoked via work and the actual clocksource change is done
> via work too. Once the clocksource is no longer actively used for
> timekeeping it can go away. What's guaranteeing that this runs prior to
> the clocksource change and 'cs' is valid throughout this function?
From what I can see, cs->read() doesn't care whether or not the
clocksource has been marked unstable. So it should be OK to call
cs->read() before, during, or after the call to __clocksource_unstable().
Also, this is only done on clocksources marked CLOCK_SOURCE_VERIFY_PERCPU,
so any clocksource that did not like cs->read() being invoked during
or after the call to __clocksource_unstable() should leave off the
CLOCK_SOURCE_VERIFY_PERCPU bit.
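In other words, a per-CPU-capable clocksource opts in explicitly,
something like the sketch below (the driver name, rating, and the other
flags are made up purely for illustration):

	static u64 example_cs_read(struct clocksource *cs)
	{
		/*
		 * Stand-in for the hardware counter read; it must remain
		 * safe to call even after the clocksource has been marked
		 * unstable.
		 */
		return example_read_hw_counter();
	}

	static struct clocksource example_clocksource = {
		.name	= "example",
		.rating	= 300,
		.read	= example_cs_read,
		.mask	= CLOCKSOURCE_MASK(64),
		.flags	= CLOCK_SOURCE_IS_CONTINUOUS |
			  CLOCK_SOURCE_VERIFY_PERCPU,	/* request the per-CPU check */
	};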
Or did I take a wrong turn somewhere in the pointers to functions?
> > + queue_work(system_highpri_wq, &clocksource_verify_work);
>
> This does not guarantee anything. So why does this need an extra work
> function which is scheduled separately?
Because I was concerned about doing smp_call_function() while holding
watchdog_lock, which is also acquired elsewhere using spin_lock_irqsave().
And it still looks like spin_lock_irqsave() on x86 spins with irqs
disabled, which could result in deadlock.  The smp_call_function_single()
would wait for the target CPU to enable interrupts, which would not
happen while that CPU was spinning on watchdog_lock, and watchdog_lock
would not be released until smp_call_function_single() returned to
its caller.
Or is there something that I am missing that prevents this deadlock
from occurring?
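To spell out the scenario I am worried about, here is a rough sketch
(the function names and the exact call chain are illustrative, not the
real code):

	static DEFINE_SPINLOCK(watchdog_lock);

	/* CPU 0: marks the clocksource unstable while holding watchdog_lock. */
	static void cpu0_path(struct clocksource *cs)
	{
		unsigned long flags;

		spin_lock_irqsave(&watchdog_lock, flags);
		/*
		 * Waits for CPU 1 to run the IPI handler.  But CPU 1 is
		 * spinning on watchdog_lock with interrupts disabled (see
		 * below), so the IPI is never serviced, this call never
		 * returns, and watchdog_lock is never released.
		 */
		smp_call_function_single(1, clocksource_verify_one_cpu, cs, 1);
		spin_unlock_irqrestore(&watchdog_lock, flags);
	}

	/* CPU 1: any other path that takes watchdog_lock. */
	static void cpu1_path(void)
	{
		unsigned long flags;

		/* Disables interrupts, then spins waiting for CPU 0. */
		spin_lock_irqsave(&watchdog_lock, flags);
		/* Never reached. */
		spin_unlock_irqrestore(&watchdog_lock, flags);
	}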
Thanx, Paul