Message-ID: <878s5p2jqv.ffs@nanos.tec.linutronix.de>
Date: Sun, 11 Apr 2021 12:33:44 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: paulmck@...nel.org
Cc: linux-kernel@...r.kernel.org, john.stultz@...aro.org,
sboyd@...nel.org, corbet@....net, Mark.Rutland@....com,
maz@...nel.org, kernel-team@...com, neeraju@...eaurora.org,
ak@...ux.intel.com
Subject: Re: [PATCH v7 clocksource 3/5] clocksource: Check per-CPU clock synchronization when marked unstable
On Sat, Apr 10 2021 at 17:20, Paul E. McKenney wrote:
> On Sat, Apr 10, 2021 at 11:00:25AM +0200, Thomas Gleixner wrote:
>> > +	if (WARN_ON_ONCE(!cs))
>> > +		return;
>> > +	pr_warn("Checking clocksource %s synchronization from CPU %d.\n",
>> > +		cs->name, smp_processor_id());
>> > +	cpumask_clear(&cpus_ahead);
>> > +	cpumask_clear(&cpus_behind);
>> > +	csnow_begin = cs->read(cs);
>>
>> So this is invoked via work and the actual clocksource change is done
>> via work too. Once the clocksource is no longer actively used for
>> timekeeping it can go away. What guarantees that this runs prior to
>> the clocksource change and that 'cs' is valid throughout this function?
>
> From what I can see, cs->read() doesn't care whether or not the
> clocksource has been marked unstable. So it should be OK to call
> cs->read() before, during, or after the call to __clocksource_unstable().
>
> Also, this is only done on clocksources marked CLOCK_SOURCE_VERIFY_PERCPU,
> so any clocksource that did not like cs->read() being invoked during
> or after the call to __clocksource_unstable() should leave off the
> CLOCK_SOURCE_VERIFY_PERCPU bit.
>
> Or did I take a wrong turn somewhere in the pointers to functions?
Right. cs->read() does not care, but what guarantees that cs is valid
and not freed yet? It's not an issue with TSC and KVMCLOCK, but
conceptually the following is possible:
    watchdog()
      queue_work(synccheck);
      queue_work(clocksource_change);

    work:
      synccheck()                     clocksource_change()
        preemption                    ...
                                      ...
                                      some_other_code():
                                        unregister_clocksource(cs)
                                        free(cs)
        cs->read()  <- UAF
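
Spelled out as code, roughly (a made-up sketch, not your actual patch;
clocksource_verify_work_fn() and verify_cs are hypothetical names):

    /* Hypothetical sketch of the above interleaving. */
    static struct clocksource *verify_cs;

    static void clocksource_verify_work_fn(struct work_struct *work)
    {
	    struct clocksource *cs = verify_cs;

	    /*
	     * Nothing pins @cs here. If the clocksource change work ran
	     * first, @cs may already be unregistered and freed by the
	     * time this work item finally executes.
	     */
	    cs->read(cs);		/* <- use after free */
    }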
>> > +	queue_work(system_highpri_wq, &clocksource_verify_work);
>>
>> This does not guarantee anything. So why does this need an extra work
>> function which is scheduled separately?
>
> Because I was concerned about doing smp_call_function() while holding
> watchdog_lock, which is also acquired elsewhere using spin_lock_irqsave().
> And it still looks like spin_lock_irqsave() on x86 spins with irqs
> disabled, which could result in deadlock: smp_call_function_single()
> would wait for the target CPU to enable interrupts, which the target
> CPU would not do until it acquired watchdog_lock, which the caller of
> smp_call_function_single() is still holding.
>
> Or is there something that I am missing that prevents this deadlock
> from occurring?
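
No, you are not missing anything; holding watchdog_lock with interrupts
disabled across a synchronous cross-CPU call can deadlock. As a sketch
(clocksource_verify_one_cpu() is a made-up name for the callback):

    /* CPU 0: holds watchdog_lock, interrupts disabled */
    spin_lock_irqsave(&watchdog_lock, flags);
    /* wait == 1: spins until CPU 1 has run the callback */
    smp_call_function_single(1, clocksource_verify_one_cpu, cs, 1);

    /* CPU 1: concurrently spins for watchdog_lock with interrupts
     * disabled, so the IPI from CPU 0 is never serviced -> deadlock
     */
    spin_lock_irqsave(&watchdog_lock, flags);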
The unstable mechanism is:
    watchdog()
      __clocksource_unstable()
        schedule_work(&watchdog_work);

    watchdog_work()
      kthread_run(clocksource_watchdog_thread);

    cs_watchdog_thread()
      mutex_lock(&clocksource_mutex);
      if (__clocksource_watchdog_kthread())
        clocksource_select();
      mutex_unlock(&clocksource_mutex);
So what prevents you from doing that right in watchdog_work() or even in
cs_watchdog_thread() properly ordered against the actual clocksource
switch?
Hmm?
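
I.e. a sketch along the lines of the chain above
(clocksource_verify_percpu() is a made-up name for your per-CPU check):

    /* Sketch: run the check from the watchdog kthread, under
     * clocksource_mutex, ordered before the clocksource switch. */
    static int cs_watchdog_thread(void *data)
    {
	    mutex_lock(&clocksource_mutex);
	    /*
	     * clocksource_mutex prevents a concurrent unregister and
	     * free, so the clocksource stays valid for the duration of
	     * the check, and the check is ordered against
	     * clocksource_select().
	     */
	    if (__clocksource_watchdog_kthread())
		    clocksource_select();
	    mutex_unlock(&clocksource_mutex);
	    return 0;
    }

where clocksource_verify_percpu(cs) would be invoked from
__clocksource_watchdog_kthread() for each clocksource that was marked
unstable and has CLOCK_SOURCE_VERIFY_PERCPU set.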
Thanks,
tglx