Message-ID: <1e5910b1-ea54-4b7a-a68b-a02634a517dd@linux.dev>
Date: Thu, 14 Nov 2024 10:43:26 +0800
From: Kunwu Chan <kunwu.chan@...ux.dev>
To: Thomas Gleixner <tglx@...utronix.de>, Kunwu Chan <kunwu.chan@...ux.dev>,
 ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
 martin.lau@...ux.dev, eddyz87@...il.com, song@...nel.org,
 yonghong.song@...ux.dev, john.fastabend@...il.com, kpsingh@...nel.org,
 sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org, bigeasy@...utronix.de,
 clrkwllms@...nel.org, rostedt@...dmis.org
Cc: bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
 linux-rt-devel@...ts.linux.dev,
 syzbot+b506de56cbbb63148c33@...kaller.appspotmail.com
Subject: Re: [PATCH] bpf: Convert lpm_trie::lock to 'raw_spinlock_t'

Thanks all for the reply.

On 2024/11/12 23:08, Thomas Gleixner wrote:
> On Fri, Nov 08 2024 at 14:32, Kunwu Chan wrote:
>> When PREEMPT_RT is enabled, 'spinlock_t' becomes preemptible,
>> and the bpf program holds a raw_spinlock under an interrupt handler,
>> which results in an invalid lock acquire context.
> This explanation is just wrong.
>
> The problem has nothing to do with an interrupt handler. Interrupt
> handlers on RT kernels are force threaded.
>
>>   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
>>   _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
>>   trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
>>   bpf_prog_2c29ac5cdc6b1842+0x43/0x47
>>   bpf_dispatcher_nop_func include/linux/bpf.h:1290 [inline]
>>   __bpf_prog_run include/linux/filter.h:701 [inline]
>>   bpf_prog_run include/linux/filter.h:708 [inline]
>>   __bpf_trace_run kernel/trace/bpf_trace.c:2340 [inline]
>>   bpf_trace_run1+0x2ca/0x520 kernel/trace/bpf_trace.c:2380
>>   trace_workqueue_activate_work+0x186/0x1f0 include/trace/events/workqueue.h:59
>>   __queue_work+0xc7b/0xf50 kernel/workqueue.c:2338
> The problematic lock nesting is the work queue pool lock, which is a raw
> spinlock.
>
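[Editor's note: a minimal, hedged sketch of the nesting described in the splat above, not verbatim kernel code. 'pool_lock' and 'trie_lock' are stand-ins for the workqueue pool->lock (a raw_spinlock_t) and lpm_trie's trie->lock (a spinlock_t); the function names are illustrative only.]

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(pool_lock);   /* stands in for pool->lock */
static DEFINE_SPINLOCK(trie_lock);       /* stands in for trie->lock */

static void bpf_prog_via_tracepoint(void)
{
	unsigned long flags;

	/* On PREEMPT_RT, spinlock_t is a sleeping (rtmutex-based) lock. */
	spin_lock_irqsave(&trie_lock, flags);      /* as in trie_delete_elem() */
	spin_unlock_irqrestore(&trie_lock, flags);
}

static void queue_work_path(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&pool_lock, flags);  /* as in __queue_work() */
	bpf_prog_via_tracepoint();                 /* invalid on RT: sleeping
						    * lock taken under a raw
						    * spinlock */
	raw_spin_unlock_irqrestore(&pool_lock, flags);
}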
>> @@ -330,7 +330,7 @@ static long trie_update_elem(struct bpf_map *map,
>>   	if (key->prefixlen > trie->max_prefixlen)
>>   		return -EINVAL;
>>   
>> -	spin_lock_irqsave(&trie->lock, irq_flags);
>> +	raw_spin_lock_irqsave(&trie->lock, irq_flags);
>>   
>>   	/* Allocate and fill a new node */
> Making this a raw spinlock moves the problem from the BPF trie code into
> the memory allocator. On RT the memory allocator cannot be invoked under
> a raw spinlock.
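[Editor's note: a hedged sketch of the pattern Thomas refers to, assuming the shape of trie_update_elem() in kernel/bpf/lpm_trie.c where the new node is allocated while trie->lock is held. 'demo_trie' and 'demo_update_elem' are illustrative names, not kernel symbols.]

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_trie {
	raw_spinlock_t lock;	/* what the proposed patch turns trie->lock into */
	size_t node_size;
};

static int demo_update_elem(struct demo_trie *trie)
{
	unsigned long flags;
	void *node;

	raw_spin_lock_irqsave(&trie->lock, flags);

	/*
	 * Even with an atomic GFP flag, on PREEMPT_RT the allocator may
	 * take sleeping locks internally, so calling it while a raw
	 * spinlock is held is still an invalid context.
	 */
	node = kzalloc(trie->node_size, GFP_ATOMIC);

	raw_spin_unlock_irqrestore(&trie->lock, flags);

	if (!node)
		return -ENOMEM;
	kfree(node);
	return 0;
}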
I'm a newbie in this field, but when I changed it to a raw 
spinlock, the problem syzbot reported disappeared.
If we shouldn't change it like this, what should we do to deal with the 
problem? If you have any good ideas, please let me know.
> Thanks,
>
>          tglx
>
-- 
Thanks,
   Kunwu.Chan

