Message-ID: <20251216031018.1615363-1-donglaipang@126.com>
Date: Tue, 16 Dec 2025 11:10:18 +0800
From: donglaipang@....com
To: syzbot+2b3391f44313b3983e91@...kaller.appspotmail.com
Cc: andrii@...nel.org,
ast@...nel.org,
bpf@...r.kernel.org,
daniel@...earbox.net,
davem@...emloft.net,
eddyz87@...il.com,
haoluo@...gle.com,
hawk@...nel.org,
john.fastabend@...il.com,
jolsa@...nel.org,
kpsingh@...nel.org,
kuba@...nel.org,
linux-kernel@...r.kernel.org,
martin.lau@...ux.dev,
netdev@...r.kernel.org,
sdf@...ichev.me,
song@...nel.org,
syzkaller-bugs@...glegroups.com,
yonghong.song@...ux.dev,
DLpang <donglaipang@....com>
Subject: [PATCH] bpf: Fix NULL deref in __list_del_clearprev for flush_node
From: DLpang <donglaipang@....com>
#syz test
Hi,

This patch fixes a NULL pointer dereference in the BPF subsystem that occurs
when __list_del_clearprev() is called on a flush_node list_head whose prev
pointer has already been cleared.

The fix has two parts:
1. Initialize the flush_node list_head during per-CPU bulk queue allocation
   with INIT_LIST_HEAD(&bq->flush_node).
2. Add a defensive check before calling __list_del_clearprev(), verifying
   that the node is actually on the flush list via if (bq->flush_node.prev).

According to the __list_del_clearprev() documentation in include/linux/list.h:
"The code that uses this needs to check the node 'prev' pointer instead of
calling list_empty()."

This patch fixes the following syzbot-reported issue:
https://syzkaller.appspot.com/bug?extid=2b3391f44313b3983e91
Reported-by: syzbot+2b3391f44313b3983e91@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=2b3391f44313b3983e91
Signed-off-by: DLpang <donglaipang@....com>
---
kernel/bpf/cpumap.c | 4 +++-
kernel/bpf/devmap.c | 3 ++-
net/xdp/xsk.c | 3 ++-
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 703e5df1f4ef..248336df591a 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -450,6 +450,7 @@ __cpu_map_entry_alloc(struct bpf_map *map, struct bpf_cpumap_val *value,
for_each_possible_cpu(i) {
bq = per_cpu_ptr(rcpu->bulkq, i);
+ INIT_LIST_HEAD(&bq->flush_node);
bq->obj = rcpu;
}
@@ -737,7 +738,8 @@ static void bq_flush_to_queue(struct xdp_bulk_queue *bq)
bq->count = 0;
spin_unlock(&q->producer_lock);
- __list_del_clearprev(&bq->flush_node);
+ if (bq->flush_node.prev)
+ __list_del_clearprev(&bq->flush_node);
/* Feedback loop via tracepoints */
trace_xdp_cpumap_enqueue(rcpu->map_id, processed, drops, to_cpu);
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 2625601de76e..7a7347e709cc 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -428,7 +428,8 @@ void __dev_flush(struct list_head *flush_list)
bq_xmit_all(bq, XDP_XMIT_FLUSH);
bq->dev_rx = NULL;
bq->xdp_prog = NULL;
- __list_del_clearprev(&bq->flush_node);
+ if (bq->flush_node.prev)
+ __list_del_clearprev(&bq->flush_node);
}
}
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index f093c3453f64..052b8583542d 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -406,7 +406,8 @@ void __xsk_map_flush(struct list_head *flush_list)
list_for_each_entry_safe(xs, tmp, flush_list, flush_node) {
xsk_flush(xs);
- __list_del_clearprev(&xs->flush_node);
+ if (xs->flush_node.prev)
+ __list_del_clearprev(&xs->flush_node);
}
}
--
2.43.0