Message-ID: <5abcb42b-91be-4043-a138-5d97cbcb5378@redhat.com>
Date: Wed, 29 Jan 2025 12:03:34 -0500
From: Waiman Long <llong@...hat.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: John Stultz <jstultz@...gle.com>, Thomas Gleixner <tglx@...utronix.de>,
 Stephen Boyd <sboyd@...nel.org>, Feng Tang <feng.tang@...el.com>,
 "Paul E. McKenney" <paulmck@...nel.org>,
 Clark Williams <clrkwllms@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
 linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH v2 2/2] clocksource: Use get_random_bytes() in
 clocksource_verify_choose_cpus()

On 1/29/25 11:34 AM, Sebastian Andrzej Siewior wrote:
> On 2025-01-24 20:54:42 [-0500], Waiman Long wrote:
>> The following bug report happened in a PREEMPT_RT kernel.
>>
>> [   30.957705] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
>> [   30.957711] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2012, name: kwatchdog
>> [   30.962673] preempt_count: 1, expected: 0
>> [   30.962676] RCU nest depth: 0, expected: 0
>> [   30.962680] 3 locks held by kwatchdog/2012:
>> [   30.962684]  #0: ffffffff8af2da60 (clocksource_mutex){+.+.}-{3:3}, at: clocksource_watchdog_kthread+0x13/0x50
>> [   30.967703]  #1: ffffffff8aa8d4d0 (cpu_hotplug_lock){++++}-{0:0}, at: clocksource_verify_percpu.part.0+0x5c/0x330
>> [   30.972774]  #2: ffff9fe02f5f33e0 ((batched_entropy_u32.lock)){+.+.}-{2:2}, at: get_random_u32+0x4f/0x110
>> [   30.977827] Preemption disabled at:
>> [   30.977830] [<ffffffff88c1fe56>] clocksource_verify_percpu.part.0+0x66/0x330
>> [   30.982837] CPU: 33 PID: 2012 Comm: kwatchdog Not tainted 5.14.0-503.23.1.el9_5.x86_64+rt-debug #1
>> [   30.982843] Hardware name: HPE ProLiant DL385 Gen10 Plus/ProLiant DL385 Gen10 Plus, BIOS A42 04/29/2021
>> [   30.982846] Call Trace:
>> [   30.982850]  <TASK>
>> [   30.983821]  dump_stack_lvl+0x57/0x81
>> [   30.983821]  __might_resched.cold+0xf4/0x12f
>> [   30.983824]  rt_spin_lock+0x4c/0x100
>> [   30.988833]  get_random_u32+0x4f/0x110
>> [   30.988833]  clocksource_verify_choose_cpus+0xab/0x1a0
>> [   30.988833]  clocksource_verify_percpu.part.0+0x6b/0x330
>> [   30.993894]  __clocksource_watchdog_kthread+0x193/0x1a0
>> [   30.993898]  clocksource_watchdog_kthread+0x18/0x50
>> [   30.993898]  kthread+0x114/0x140
>> [   30.993898]  ret_from_fork+0x2c/0x50
>> [   31.002864]  </TASK>
>>
>> This happens because get_random_u32() is called in
>> clocksource_verify_choose_cpus() with preemption disabled.
>> If crng_ready() is true by the time get_random_u32() is called, the
>> batched_entropy_u32 local lock will be acquired. In a PREEMPT_RT kernel,
>> that lock is an rtmutex, which cannot be acquired with preemption disabled.
>>
>> Fix this problem by using the less random get_random_bytes() function,
>> which does not take any lock. In fact, it has the same randomness as
>> get_random_u32_below() when crng_ready() is false.
> So how does get_random_bytes() not take any locks? It takes locks in my
> tree. Do you two have a lockless tree?

You are right. I forgot to check the crng_make_state() call in
_get_random_bytes(), which does take a lock.
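
For context, the pattern the splat above complains about looks roughly like
this (a hedged sketch only, not the actual kernel/time/clocksource.c code;
choose_one_cpu() is a made-up helper for illustration):

#include <linux/preempt.h>
#include <linux/random.h>

/*
 * Sketch: calling the batched random helpers inside a preempt-disabled
 * region. On PREEMPT_RT the batched-entropy local lock is a sleeping
 * lock, hence the "sleeping function called from invalid context" splat.
 */
static u32 choose_one_cpu(u32 nr_cpus)
{
        u32 cpu;

        preempt_disable();
        /* May acquire batched_entropy_u32.lock - invalid on RT here. */
        cpu = get_random_u32_below(nr_cpus);
        /* Swapping in get_random_bytes() does not help either:
         * _get_random_bytes() -> crng_make_state() also takes locks. */
        preempt_enable();

        return cpu;
}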


>
> In case your tree is not lockless yet, couldn't we perform the loop
> verify_n_cpus+1 times without preemption disabled? Then disable
> preemption after returning from clocksource_verify_choose_cpus() and
> either remove the current CPU from the list if it is there, or remove a
> random one, so that we get back to a set of verify_n_cpus CPUs.
>
> Alternatively (and this might be easier), use migrate_disable() instead
> of preempt_disable() and only use preempt_disable() within the
> for_each_cpu() loop if the delta is important (which I assume it is).
>
> Either way, this would avoid having to run with preemption disabled
> within clocksource_verify_choose_cpus() while keeping the guarantees
> you need.
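
For reference, Sebastian's second suggestion (quoted above) would look
roughly like the following. This is only a sketch, as if it sat in
kernel/time/clocksource.c: it assumes clocksource_verify_choose_cpus()
keeps its current void shape and fills the cpus_chosen mask, and
clocksource_verify_percpu_alt() is a made-up name, not the real function.

static void clocksource_verify_percpu_alt(struct clocksource *cs)
{
        int cpu;

        migrate_disable();                      /* pin to this CPU, stay preemptible */
        clocksource_verify_choose_cpus();       /* sleeping locks are fine here */

        for_each_cpu(cpu, &cpus_chosen) {
                preempt_disable();              /* keep each measurement tight */
                /* ... cs->read(cs) delta check against CPU @cpu ... */
                preempt_enable();
        }

        migrate_enable();
}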

I guess we will have to break clocksource_verify_choose_cpus() into two
separate parts, one run without preemption disabled and the other with
preemption disabled. I don't think it is a good idea to just use
migrate_disable(), as we may incur too much latency, which could affect
the test result.
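
One possible shape of that split (again only a sketch, not the actual v3
patch; it assumes the random selection is moved entirely out of the
preempt-disabled region, that cpus_chosen keeps its current role, and that
dropping the current CPU is the only fixup needed, per Sebastian's
verify_n_cpus+1 idea):

        /* Inside something like clocksource_verify_percpu(): */
        int cpu;

        /* Part 1: choose the candidate CPUs with preemption still
         * enabled, so get_random_u32_below() may safely take its
         * (sleeping, on PREEMPT_RT) local lock. */
        clocksource_verify_choose_cpus();

        /* Part 2: with preemption now disabled, do only the work that
         * needs the current CPU to be stable, then run the loop. */
        preempt_disable();
        cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
        for_each_cpu(cpu, &cpus_chosen) {
                /* ... per-CPU clocksource read and delta check ... */
        }
        preempt_enable();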

I will send out a v3 patch to fix that.

Thanks,
Longman

