Message-ID: <1935291.tdWV9SEqCh@7950hx>
Date: Sat, 11 Oct 2025 21:17:24 +0800
From: Menglong Dong <menglong.dong@...ux.dev>
To: Sahil Chandna <chandna.linuxkernel@...il.com>
Cc: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
martin.lau@...ux.dev, song@...nel.org, john.fastabend@...il.com,
haoluo@...gle.com, jolsa@...nel.org, bpf@...r.kernel.org,
netdev@...r.kernel.org, david.hunter.linux@...il.com,
skhan@...uxfoundation.org, khalid@...nel.org, chandna.linuxkernel@...il.com,
syzbot+1f1fbecb9413cdbfbef8@...kaller.appspotmail.com
Subject: Re: [PATCH v2] bpf: test_run: Use migrate_enable()/disable() universally

On 2025/10/10 15:59, Sahil Chandna wrote:
> The timer context can safely use migrate_disable()/migrate_enable()
> universally instead of conditionally disabling either preemption or
> migration. Previously, some callers initialized the timer in
> NO_PREEMPT mode, which disabled preemption and forced execution in
> atomic context. On PREEMPT_RT configurations, spin_lock_bh() is a
> sleeping lock, so taking it in that atomic context triggered the
> following warning:
>
> BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 6107, name: syz.0.17
> preempt_count: 1, expected: 0
> RCU nest depth: 1, expected: 1
> Preemption disabled at:
> [<ffffffff891fce58>] bpf_test_timer_enter+0xf8/0x140 net/bpf/test_run.c:42
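
(Side note for readers less familiar with PREEMPT_RT: spin_lock_bh()
is backed there by a sleeping rtmutex, so the failing pattern reduces
to roughly the sketch below; the lock name is made up and this is not
code from the patch.)

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock, illustration only */

static void no_preempt_mode_bug(void)
{
	preempt_disable();		/* what NO_PREEMPT mode did */
	spin_lock_bh(&demo_lock);	/* sleeps on PREEMPT_RT -> splat */
	spin_unlock_bh(&demo_lock);
	preempt_enable();
}
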
>
> Reported-by: syzbot+1f1fbecb9413cdbfbef8@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=1f1fbecb9413cdbfbef8
> Tested-by: syzbot+1f1fbecb9413cdbfbef8@...kaller.appspotmail.com
> Signed-off-by: Sahil Chandna <chandna.linuxkernel@...il.com>
>
> ---
> Link to v1: https://lore.kernel.org/all/20251006054320.159321-1-chandna.linuxkernel@gmail.com/
>
> Changes since v1:
> - Dropped `enum { NO_PREEMPT, NO_MIGRATE } mode` from `struct bpf_test_timer`.
> - Removed all conditional preempt/migrate disable logic.
> - Unified timer handling to use `migrate_disable()` / `migrate_enable()` universally.
>
> Testing:
> - Reproduced the syzbot bug locally using the provided reproducer.
> - Observed `BUG: sleeping function called from invalid context` on v1.
> - Confirmed the bug disappears after applying this patch.
> - Validated normal functionality of the `bpf_prog_test_run_*` helpers
>   with the C reproducer.
> ---
> net/bpf/test_run.c | 20 ++++++--------------
> 1 file changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index dfb03ee0bb62..b23bc93e738e 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -29,7 +29,6 @@
> #include <trace/events/bpf_test_run.h>
>
> struct bpf_test_timer {
> - enum { NO_PREEMPT, NO_MIGRATE } mode;
> u32 i;
> u64 time_start, time_spent;
> };
> @@ -38,10 +37,7 @@ static void bpf_test_timer_enter(struct bpf_test_timer *t)
> __acquires(rcu)
> {
> rcu_read_lock();
> - if (t->mode == NO_PREEMPT)
> - preempt_disable();
> - else
> - migrate_disable();
> + migrate_disable();

Maybe we can use rcu_read_lock_dont_migrate()/rcu_read_unlock_migrate()
here instead, which have better performance :)
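
Something like this (untested sketch on top of this patch; going by my
understanding of the helpers in include/linux/rcupdate.h, which fold
migrate_disable()/migrate_enable() into the RCU read lock and skip them
when rcu_read_lock() already disables preemption):

static void bpf_test_timer_enter(struct bpf_test_timer *t)
	__acquires(rcu)
{
	/* rcu_read_lock() + migrate_disable() in one call; the
	 * migrate_disable() part is skipped on kernels where RCU
	 * read-side critical sections are already non-preemptible.
	 */
	rcu_read_lock_dont_migrate();
	t->time_start = ktime_get_ns();
}

static void bpf_test_timer_leave(struct bpf_test_timer *t)
	__releases(rcu)
{
	t->time_start = 0;
	/* rcu_read_unlock() + migrate_enable(), same pairing. */
	rcu_read_unlock_migrate();
}
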
Thanks!
Menglong Dong
>
> t->time_start = ktime_get_ns();
> }
> @@ -50,11 +46,7 @@ static void bpf_test_timer_leave(struct bpf_test_timer *t)
> __releases(rcu)
> {
> t->time_start = 0;
> -
> - if (t->mode == NO_PREEMPT)
> - preempt_enable();
> - else
> - migrate_enable();
> + migrate_enable();
> rcu_read_unlock();
> }
>
> @@ -374,7 +366,7 @@ static int bpf_test_run_xdp_live(struct bpf_prog *prog, struct xdp_buff *ctx,
>
> {
> struct xdp_test_data xdp = { .batch_size = batch_size };
> - struct bpf_test_timer t = { .mode = NO_MIGRATE };
> + struct bpf_test_timer t = {};
> int ret;
>
> if (!repeat)
> @@ -404,7 +396,7 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat,
> struct bpf_prog_array_item item = {.prog = prog};
> struct bpf_run_ctx *old_ctx;
> struct bpf_cg_run_ctx run_ctx;
> - struct bpf_test_timer t = { NO_MIGRATE };
> + struct bpf_test_timer t = {};
> enum bpf_cgroup_storage_type stype;
> int ret;
>
> @@ -1377,7 +1369,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = {};
> u32 size = kattr->test.data_size_in;
> struct bpf_flow_dissector ctx = {};
> u32 repeat = kattr->test.repeat;
> @@ -1445,7 +1437,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
> int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
> union bpf_attr __user *uattr)
> {
> - struct bpf_test_timer t = { NO_PREEMPT };
> + struct bpf_test_timer t = {};
> struct bpf_prog_array *progs = NULL;
> struct bpf_sk_lookup_kern ctx = {};
> u32 repeat = kattr->test.repeat;
>