Message-ID: <901c9b9f-d393-4b84-9b6e-fe79e316ee2d@redhat.com>
Date: Fri, 24 Jan 2025 15:36:21 -0500
From: Waiman Long <llong@...hat.com>
To: paulmck@...nel.org
Cc: John Stultz <jstultz@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
Stephen Boyd <sboyd@...nel.org>, Feng Tang <feng.tang@...el.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Clark Williams <clrkwllms@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH 2/2] clocksource: Defer preempt_disable() after
clocksource_verify_choose_cpus()
On 1/24/25 3:25 PM, Paul E. McKenney wrote:
> On Fri, Jan 24, 2025 at 01:59:23PM -0500, Waiman Long wrote:
>> The following bug report happened in a PREEMPT_RT kernel.
>>
>> [ 30.957705] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
>> [ 30.957711] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2012, name: kwatchdog
>> [ 30.962673] preempt_count: 1, expected: 0
>> [ 30.962676] RCU nest depth: 0, expected: 0
>> [ 30.962680] 3 locks held by kwatchdog/2012:
>> [ 30.962684] #0: ffffffff8af2da60 (clocksource_mutex){+.+.}-{3:3}, at: clocksource_watchdog_kthread+0x13/0x50
>> [ 30.967703] #1: ffffffff8aa8d4d0 (cpu_hotplug_lock){++++}-{0:0}, at: clocksource_verify_percpu.part.0+0x5c/0x330
>> [ 30.972774] #2: ffff9fe02f5f33e0 ((batched_entropy_u32.lock)){+.+.}-{2:2}, at: get_random_u32+0x4f/0x110
>> [ 30.977827] Preemption disabled at:
>> [ 30.977830] [<ffffffff88c1fe56>] clocksource_verify_percpu.part.0+0x66/0x330
>> [ 30.982837] CPU: 33 PID: 2012 Comm: kwatchdog Not tainted 5.14.0-503.23.1.el9_5.x86_64+rt-debug #1
>> [ 30.982843] Hardware name: HPE ProLiant DL385 Gen10 Plus/ProLiant DL385 Gen10 Plus, BIOS A42 04/29/2021
>> [ 30.982846] Call Trace:
>> [ 30.982850] <TASK>
>> [ 30.983821] dump_stack_lvl+0x57/0x81
>> [ 30.983821] __might_resched.cold+0xf4/0x12f
>> [ 30.983824] rt_spin_lock+0x4c/0x100
>> [ 30.988833] get_random_u32+0x4f/0x110
>> [ 30.988833] clocksource_verify_choose_cpus+0xab/0x1a0
>> [ 30.988833] clocksource_verify_percpu.part.0+0x6b/0x330
>> [ 30.993894] __clocksource_watchdog_kthread+0x193/0x1a0
>> [ 30.993898] clocksource_watchdog_kthread+0x18/0x50
>> [ 30.993898] kthread+0x114/0x140
>> [ 30.993898] ret_from_fork+0x2c/0x50
>> [ 31.002864] </TASK>
>>
>> This happens because clocksource_verify_choose_cpus() calls
>> get_random_u32() with preemption disabled. If crng_ready() is true by
>> the time get_random_u32() is called, the batched_entropy_u32 local lock
>> will be acquired. On a PREEMPT_RT kernel, that local lock is an rtmutex
>> and cannot be acquired with preemption disabled.
>>
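In case it helps review, the failing sequence reduces to roughly the
following (illustrative sketch only, not the actual clocksource code;
the function name is made up):

	#include <linux/preempt.h>
	#include <linux/random.h>

	/* Hypothetical reduction of the buggy flow, for illustration. */
	static void sketch_of_the_bug(void)
	{
		preempt_disable();
		/*
		 * Once crng_ready() is true, get_random_u32() takes the
		 * batched_entropy_u32 local lock. On PREEMPT_RT a local
		 * lock is backed by an rtmutex, which may sleep, hence
		 * the "sleeping function called from invalid context"
		 * splat shown above.
		 */
		(void)get_random_u32();
		preempt_enable();
	}
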
>> To avoid this problem, get_random_u32() must not be called with
>> preemption disabled. However, smp_processor_id() does have to be
>> called with preemption disabled, and the current CPU must still be
>> excluded from the cpus_chosen mask to be tested.
>>
>> Extract the current-CPU removal code from
>> clocksource_verify_choose_cpus() and defer the preempt_disable()
>> call until after clocksource_verify_choose_cpus() returns and
>> before the current CPU is removed. Also use raw_smp_processor_id()
>> in clocksource_verify_choose_cpus().
>>
>> Fixes: 7560c02bdffb ("clocksource: Check per-CPU clock synchronization when marked unstable")
>> Signed-off-by: Waiman Long <longman@...hat.com>
> Good catch!
>
> But we don't need cryptographically secure random numbers (or blistering
> speed) here. Would substituting something like torture_random() % nr_cpu_ids
> for get_random_u32_below() work?
>
> Thanx, Paul
That should work too. I can update the patch to use a weaker,
non-cryptographic random number generator if others have no objection.
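
Something like the following is what I would try first (untested
sketch; note that torture_random() and struct torture_random_state
live in kernel/torture.c and are only built with CONFIG_TORTURE_TEST,
so the real patch may need a dependency or a simpler PRNG; the helper
name below is made up):

	#include <linux/cpumask.h>
	#include <linux/torture.h>

	/* Per-caller PRNG state for CPU selection; name is hypothetical. */
	static DEFINE_TORTURE_RANDOM(csv_rand);

	static unsigned int choose_random_cpu(void)
	{
		/* Cheap, non-cryptographic pick in [0, nr_cpu_ids). */
		return torture_random(&csv_rand) % nr_cpu_ids;
	}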
Cheers,
Longman
>
>> ---
>> kernel/time/clocksource.c | 19 +++++++++++--------
>> 1 file changed, 11 insertions(+), 8 deletions(-)
>>
>> diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
>> index 77d9566d3aa6..8ffec49971ac 100644
>> --- a/kernel/time/clocksource.c
>> +++ b/kernel/time/clocksource.c
>> @@ -313,7 +313,6 @@ static void clocksource_verify_choose_cpus(void)
>> if (n < 0) {
>> /* Check all of the CPUs. */
>> cpumask_copy(&cpus_chosen, cpu_online_mask);
>> - cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
>> return;
>> }
>>
>> @@ -322,9 +321,13 @@ static void clocksource_verify_choose_cpus(void)
>> if (n == 0 || num_online_cpus() <= 1)
>> return;
>>
>> - /* Make sure to select at least one CPU other than the current CPU. */
>> + /*
>> + * Make sure to select at least one CPU other than the current CPU.
>> + * raw_smp_processor_id() is used opportunistically here as
>> + * preemption isn't disabled.
>> + */
>> cpu = cpumask_first(cpu_online_mask);
>> - if (cpu == smp_processor_id())
>> + if (cpu == raw_smp_processor_id())
>> cpu = cpumask_next(cpu, cpu_online_mask);
>> if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
>> return;
>> @@ -349,9 +352,6 @@ static void clocksource_verify_choose_cpus(void)
>> if (!WARN_ON_ONCE(cpu >= nr_cpu_ids))
>> cpumask_set_cpu(cpu, &cpus_chosen);
>> }
>> -
>> - /* Don't verify ourselves. */
>> - cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
>> }
>>
>> static void clocksource_verify_one_cpu(void *csin)
>> @@ -373,15 +373,18 @@ void clocksource_verify_percpu(struct clocksource *cs)
>> cpumask_clear(&cpus_ahead);
>> cpumask_clear(&cpus_behind);
>> cpus_read_lock();
>> - preempt_disable();
>> clocksource_verify_choose_cpus();
>> + preempt_disable();
>> +
>> + /* Don't verify ourselves. */
>> + testcpu = smp_processor_id();
>> + cpumask_clear_cpu(testcpu, &cpus_chosen);
>> if (cpumask_empty(&cpus_chosen)) {
>> preempt_enable();
>> cpus_read_unlock();
>> pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
>> return;
>> }
>> - testcpu = smp_processor_id();
>> pr_info("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n",
>> cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
>> for_each_cpu(cpu, &cpus_chosen) {
>> --
>> 2.47.1
>>