Message-ID: <87v7wsmqv4.ffs@tglx>
Date: Tue, 12 Nov 2024 16:08:47 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: Kunwu Chan <kunwu.chan@...ux.dev>, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, martin.lau@...ux.dev, eddyz87@...il.com,
song@...nel.org, yonghong.song@...ux.dev, john.fastabend@...il.com,
kpsingh@...nel.org, sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org,
bigeasy@...utronix.de, clrkwllms@...nel.org, rostedt@...dmis.org
Cc: bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev, Kunwu Chan <chentao@...inos.cn>,
syzbot+b506de56cbbb63148c33@...kaller.appspotmail.com
Subject: Re: [PATCH] bpf: Convert lpm_trie::lock to 'raw_spinlock_t'
On Fri, Nov 08 2024 at 14:32, Kunwu Chan wrote:
> When PREEMPT_RT is enabled, 'spinlock_t' becomes preemptible
> and a bpf program holds a raw_spinlock under an interrupt handler,
> which results in an invalid lock acquire context.
This explanation is just wrong.
The problem has nothing to do with an interrupt handler. Interrupt
handlers on RT kernels are force-threaded.
> __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
> _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
> trie_delete_elem+0x96/0x6a0 kernel/bpf/lpm_trie.c:462
> bpf_prog_2c29ac5cdc6b1842+0x43/0x47
> bpf_dispatcher_nop_func include/linux/bpf.h:1290 [inline]
> __bpf_prog_run include/linux/filter.h:701 [inline]
> bpf_prog_run include/linux/filter.h:708 [inline]
> __bpf_trace_run kernel/trace/bpf_trace.c:2340 [inline]
> bpf_trace_run1+0x2ca/0x520 kernel/trace/bpf_trace.c:2380
> trace_workqueue_activate_work+0x186/0x1f0 include/trace/events/workqueue.h:59
> __queue_work+0xc7b/0xf50 kernel/workqueue.c:2338
The problematic lock nesting is against the workqueue pool lock, which
is a raw spinlock and is held across the tracepoint.
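Condensed, the call chain from the splat looks like this (a sketch with
details elided, not the actual kernel code):

  /* kernel/workqueue.c -- pool->lock is a raw_spinlock_t */
  static void __queue_work(int cpu, struct workqueue_struct *wq,
                           struct work_struct *work)
  {
          ...
          raw_spin_lock(&pool->lock);
          trace_workqueue_activate_work(work);  /* tracepoint fires here */
          /* -> bpf_trace_run1() -> BPF prog -> trie_delete_elem() */
          ...
          raw_spin_unlock(&pool->lock);
  }

  /* kernel/bpf/lpm_trie.c -- trie->lock is a spinlock_t */
  static long trie_delete_elem(struct bpf_map *map, void *_key)
  {
          ...
          spin_lock_irqsave(&trie->lock, irq_flags);
          /*
           * On RT spinlock_t is a sleeping lock, so acquiring it while
           * pool->lock (a raw spinlock) is held is invalid.
           */
          ...
  }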
> @@ -330,7 +330,7 @@ static long trie_update_elem(struct bpf_map *map,
> if (key->prefixlen > trie->max_prefixlen)
> return -EINVAL;
>
> - spin_lock_irqsave(&trie->lock, irq_flags);
> + raw_spin_lock_irqsave(&trie->lock, irq_flags);
>
> /* Allocate and fill a new node */
Making this a raw spinlock just moves the problem from the BPF trie
code into the memory allocator: trie_update_elem() allocates the new
node with trie->lock held, and on RT the memory allocator cannot be
invoked under a raw spinlock.
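I.e. with the patch applied trie_update_elem() ends up doing roughly
this (sketch; node allocation goes through lpm_trie_node_alloc(), which
does an atomic kmalloc):

  raw_spin_lock_irqsave(&trie->lock, irq_flags);
  ...
  /* Allocate and fill a new node */
  new_node = lpm_trie_node_alloc(trie, value);  /* atomic kmalloc */
  ...
  raw_spin_unlock_irqrestore(&trie->lock, irq_flags);

On RT the allocator takes regular (sleeping) spinlocks internally, so
invoking it inside the raw_spin_lock'ed section is the same bug one
layer down.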
Thanks,
tglx