Message-ID: <20210412042157.GA1889369@paulmck-ThinkPad-P17-Gen-1>
Date: Sun, 11 Apr 2021 21:21:57 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, john.stultz@...aro.org,
sboyd@...nel.org, corbet@....net, Mark.Rutland@....com,
maz@...nel.org, kernel-team@...com, neeraju@...eaurora.org,
ak@...ux.intel.com
Subject: Re: [PATCH v7 clocksource 3/5] clocksource: Check per-CPU clock
	synchronization when marked unstable

On Sun, Apr 11, 2021 at 09:46:12AM -0700, Paul E. McKenney wrote:
> On Sun, Apr 11, 2021 at 12:33:44PM +0200, Thomas Gleixner wrote:
> > On Sat, Apr 10 2021 at 17:20, Paul E. McKenney wrote:
> > > On Sat, Apr 10, 2021 at 11:00:25AM +0200, Thomas Gleixner wrote:
> > >> > +	if (WARN_ON_ONCE(!cs))
> > >> > +		return;
> > >> > +	pr_warn("Checking clocksource %s synchronization from CPU %d.\n",
> > >> > +		cs->name, smp_processor_id());
> > >> > +	cpumask_clear(&cpus_ahead);
> > >> > +	cpumask_clear(&cpus_behind);
> > >> > +	csnow_begin = cs->read(cs);
> > >>
> > >> So this is invoked via work and the actual clocksource change is done
> > >> via work too. Once the clocksource is no longer actively used for
> > >> timekeeping it can go away. What's guaranteeing that this runs prior to
> > >> the clocksource change and 'cs' is valid throughout this function?
> > >
> > > From what I can see, cs->read() doesn't care whether or not the
> > > clocksource has been marked unstable. So it should be OK to call
> > > cs->read() before, during, or after the call to __clocksource_unstable().
> > >
> > > Also, this is only done on clocksources marked CLOCK_SOURCE_VERIFY_PERCPU,
> > > so any clocksource that did not like cs->read() being invoked during
> > > or after the call to __clocksource_unstable() should leave off the
> > > CLOCK_SOURCE_VERIFY_PERCPU bit.
> > >
> > > Or did I take a wrong turn somewhere in the pointers to functions?
> >
> > Right. cs->read() does not care, but what guarantees that cs is valid
> > and not freed yet? It's not an issue with TSC and KVMCLOCK, but
> > conceptually the following is possible:
> >
> >     watchdog()
> >        queue_work(synccheck);
> >        queue_work(clocksource_change);
> >
> >     work:
> >        synccheck()                    clocksource_change()
> >          preemption                     ...
> >                                         ...
> >                                         some_other_code():
> >                                            unregister_clocksource(cs)
> >                                            free(cs)
> >          cs->read()  <- UAF
>
> Got it, with the ingenic_tcu_init() function being a case in point.
> It invokes clocksource_unregister() shortly followed by clk_put(), which,
> if I found the correct clk_put(), can kfree() it.
>
> Thank you!
>
> > >> > +	queue_work(system_highpri_wq, &clocksource_verify_work);
> > >>
> > >> This does not guarantee anything. So why does this need an extra work
> > >> function which is scheduled separately?
> > >
> > > Because I was concerned about doing smp_call_function() while holding
> > > watchdog_lock, which is also acquired elsewhere using spin_lock_irqsave().
> > > And it still looks like on x86 that spin_lock_irqsave() spins with irqs
> > > disabled, which could result in deadlock. The smp_call_function_single()
> > > would wait for the target CPU to enable interrupts, which would not
> > > happen until after the smp_call_function_single() returned due to its
> > > caller holding watchdog_lock.
> > >
> > > Or is there something that I am missing that prevents this deadlock
> > > from occurring?
> >
> > The unstable mechanism is:
> >
> >     watchdog()
> >       __clocksource_unstable()
> >         schedule_work(&watchdog_work);
> >
> >     watchdog_work()
> >       kthread_run(clocksource_watchdog_thread);
> >
> >     cs_watchdog_thread()
> >       mutex_lock(&clocksource_mutex);
> >       if (__clocksource_watchdog_kthread())
> >         clocksource_select();
> >       mutex_unlock(&clocksource_mutex);
> >
> > So what prevents you from doing that right in watchdog_work() or even in
> > cs_watchdog_thread() properly ordered against the actual clocksource
> > switch?
> >
> > Hmm?
>
> My own confusion, apparently. :-/
>
> So what I need to do is inline clocksource_verify_percpu_wq()
> into clocksource_verify_percpu() and then move the call to
> clocksource_verify_percpu() to __clocksource_watchdog_kthread(), right
> before the existing call to list_del_init().  Will do!

Except that this triggers the WARN_ON_ONCE() in smp_call_function_single()
due to interrupts being disabled across that list_del_init().
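
To make the failure concrete, the call chain with the check moved as
described above looks roughly like this (reconstructed from the
description; exact placement within __clocksource_watchdog_kthread()
is approximate):

    __clocksource_watchdog_kthread()
      spin_lock_irqsave(&watchdog_lock, flags);   <- interrupts now disabled
        ...
        clocksource_verify_percpu(cs)
          smp_call_function_single(...)           <- WARN_ON_ONCE(irqs_disabled())
        list_del_init(&cs->wd_list);
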
Possibilities include:
1.	Figure out why interrupts must be disabled only sometimes while
	holding watchdog_lock, in the hope that they need not be across
	the entire critical section for __clocksource_watchdog_kthread().
	As in:

		local_irq_restore(flags);
		clocksource_verify_percpu(cs);
		local_irq_save(flags);

	Trying this first with lockdep enabled.  Might be spectacular.
	(See the sketch just after this list.)

2.	Invoke clocksource_verify_percpu() from its original
	location in clocksource_watchdog(), just before the call to
	__clocksource_unstable().  This relies on the fact that
	clocksource_watchdog() acquires watchdog_lock without
	disabling interrupts.

3.	Restrict CLOCK_SOURCE_VERIFY_PERCPU to clocksource structures
	that are statically allocated, thus avoiding the use-after-free
	problem.  Rely on KASAN to enforce this restriction.

4.	Add reference counting or some such to clock sources.

5.	Your ideas here.
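
For #1, a minimal sketch of where the interrupt toggling would land,
assuming the current shape of __clocksource_watchdog_kthread() and that
'flags' is the cookie from its spin_lock_irqsave() (placement and
surrounding code paraphrased, not a definitive patch):

	spin_lock_irqsave(&watchdog_lock, flags);
	list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
		if (cs->flags & CLOCK_SOURCE_UNSTABLE) {
			if (cs->flags & CLOCK_SOURCE_VERIFY_PERCPU) {
				/* Reenable interrupts so that the IPIs in
				 * smp_call_function_single() are legal. */
				local_irq_restore(flags);
				clocksource_verify_percpu(cs);
				local_irq_save(flags);
			}
			list_del_init(&cs->wd_list);
			/* ... existing unstable handling ... */
		}
	}
	spin_unlock_irqrestore(&watchdog_lock, flags);

Whether toggling interrupts while still holding watchdog_lock is safe is
exactly what the lockdep run mentioned above is meant to answer.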
I will give this more thought, but #2 is looking pretty good at this point.
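
For #2, the sketch would be roughly the following, with the surrounding
skew check paraphrased from clocksource_watchdog() (threshold name and
exact context approximate):

	if (abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
		/*
		 * cs is still on the watchdog list here, so it cannot
		 * yet be unregistered and freed out from under us, and
		 * watchdog_lock is held without disabling interrupts,
		 * so smp_call_function_single() is permissible.
		 */
		if (cs->flags & CLOCK_SOURCE_VERIFY_PERCPU)
			clocksource_verify_percpu(cs);
		__clocksource_unstable(cs);
		continue;
	}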
							Thanx, Paul