Message-ID: <a276b92b-43b4-41ab-9861-4f7284298fe8@linutronix.de>
Date: Thu, 5 Sep 2024 17:22:27 +0200
From: Florian Kauer <florian.kauer@...utronix.de>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
<martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>,
Song Liu <song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
David Ahern <dsahern@...nel.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] bpf: devmap: provide rxq after redirect
On 9/5/24 17:15, Toke Høiland-Jørgensen wrote:
> Florian Kauer <florian.kauer@...utronix.de> writes:
>
>> rxq contains a pointer to the device from which
>> the redirect happened. Currently, the BPF program
>> that is executed after a redirect via BPF_MAP_TYPE_DEVMAP*
>> does not have it set.
>>
>> This is particularly bad since accessing ingress_ifindex, e.g.
>>
>> SEC("xdp")
>> int prog(struct xdp_md *pkt)
>> {
>> return bpf_redirect_map(&dev_redirect_map, 0, 0);
>> }
>>
>> SEC("xdp/devmap")
>> int prog_after_redirect(struct xdp_md *pkt)
>> {
>> bpf_printk("ifindex %i", pkt->ingress_ifindex);
>> return XDP_PASS;
>> }
>>
>> depends on access to rxq, so a NULL pointer gets dereferenced:
>>
>> <1>[ 574.475170] BUG: kernel NULL pointer dereference, address: 0000000000000000
>> <1>[ 574.475188] #PF: supervisor read access in kernel mode
>> <1>[ 574.475194] #PF: error_code(0x0000) - not-present page
>> <6>[ 574.475199] PGD 0 P4D 0
>> <4>[ 574.475207] Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
>> <4>[ 574.475217] CPU: 4 UID: 0 PID: 217 Comm: kworker/4:1 Not tainted 6.11.0-rc5-reduced-00859-g780801200300 #23
>> <4>[ 574.475226] Hardware name: Intel(R) Client Systems NUC13ANHi7/NUC13ANBi7, BIOS ANRPL357.0026.2023.0314.1458 03/14/2023
>> <4>[ 574.475231] Workqueue: mld mld_ifc_work
>> <4>[ 574.475247] RIP: 0010:bpf_prog_5e13354d9cf5018a_prog_after_redirect+0x17/0x3c
>> <4>[ 574.475257] Code: cc cc cc cc cc cc cc 80 00 00 00 cc cc cc cc cc cc cc cc f3 0f 1e fa 0f 1f 44 00 00 66 90 55 48 89 e5 f3 0f 1e fa 48 8b 57 20 <48> 8b 52 00 8b 92 e0 00 00 00 48 bf f8 a6 d5 c4 5d a0 ff ff be 0b
>> <4>[ 574.475263] RSP: 0018:ffffa62440280c98 EFLAGS: 00010206
>> <4>[ 574.475269] RAX: ffffa62440280cd8 RBX: 0000000000000001 RCX: 0000000000000000
>> <4>[ 574.475274] RDX: 0000000000000000 RSI: ffffa62440549048 RDI: ffffa62440280ce0
>> <4>[ 574.475278] RBP: ffffa62440280c98 R08: 0000000000000002 R09: 0000000000000001
>> <4>[ 574.475281] R10: ffffa05dc8b98000 R11: ffffa05f577fca40 R12: ffffa05dcab24000
>> <4>[ 574.475285] R13: ffffa62440280ce0 R14: ffffa62440549048 R15: ffffa62440549000
>> <4>[ 574.475289] FS: 0000000000000000(0000) GS:ffffa05f4f700000(0000) knlGS:0000000000000000
>> <4>[ 574.475294] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> <4>[ 574.475298] CR2: 0000000000000000 CR3: 000000025522e000 CR4: 0000000000f50ef0
>> <4>[ 574.475303] PKRU: 55555554
>> <4>[ 574.475306] Call Trace:
>> <4>[ 574.475313] <IRQ>
>> <4>[ 574.475318] ? __die+0x23/0x70
>> <4>[ 574.475329] ? page_fault_oops+0x180/0x4c0
>> <4>[ 574.475339] ? skb_pp_cow_data+0x34c/0x490
>> <4>[ 574.475346] ? kmem_cache_free+0x257/0x280
>> <4>[ 574.475357] ? exc_page_fault+0x67/0x150
>> <4>[ 574.475368] ? asm_exc_page_fault+0x26/0x30
>> <4>[ 574.475381] ? bpf_prog_5e13354d9cf5018a_prog_after_redirect+0x17/0x3c
>> <4>[ 574.475386] bq_xmit_all+0x158/0x420
>> <4>[ 574.475397] __dev_flush+0x30/0x90
>> <4>[ 574.475407] veth_poll+0x216/0x250 [veth]
>> <4>[ 574.475421] __napi_poll+0x28/0x1c0
>> <4>[ 574.475430] net_rx_action+0x32d/0x3a0
>> <4>[ 574.475441] handle_softirqs+0xcb/0x2c0
>> <4>[ 574.475451] do_softirq+0x40/0x60
>> <4>[ 574.475458] </IRQ>
>> <4>[ 574.475461] <TASK>
>> <4>[ 574.475464] __local_bh_enable_ip+0x66/0x70
>> <4>[ 574.475471] __dev_queue_xmit+0x268/0xe40
>> <4>[ 574.475480] ? selinux_ip_postroute+0x213/0x420
>> <4>[ 574.475491] ? alloc_skb_with_frags+0x4a/0x1d0
>> <4>[ 574.475502] ip6_finish_output2+0x2be/0x640
>> <4>[ 574.475512] ? nf_hook_slow+0x42/0xf0
>> <4>[ 574.475521] ip6_finish_output+0x194/0x300
>> <4>[ 574.475529] ? __pfx_ip6_finish_output+0x10/0x10
>> <4>[ 574.475538] mld_sendpack+0x17c/0x240
>> <4>[ 574.475548] mld_ifc_work+0x192/0x410
>> <4>[ 574.475557] process_one_work+0x15d/0x380
>> <4>[ 574.475566] worker_thread+0x29d/0x3a0
>> <4>[ 574.475573] ? __pfx_worker_thread+0x10/0x10
>> <4>[ 574.475580] ? __pfx_worker_thread+0x10/0x10
>> <4>[ 574.475587] kthread+0xcd/0x100
>> <4>[ 574.475597] ? __pfx_kthread+0x10/0x10
>> <4>[ 574.475606] ret_from_fork+0x31/0x50
>> <4>[ 574.475615] ? __pfx_kthread+0x10/0x10
>> <4>[ 574.475623] ret_from_fork_asm+0x1a/0x30
>> <4>[ 574.475635] </TASK>
>> <4>[ 574.475637] Modules linked in: veth br_netfilter bridge stp llc iwlmvm x86_pkg_temp_thermal iwlwifi efivarfs nvme nvme_core
>> <4>[ 574.475662] CR2: 0000000000000000
>> <4>[ 574.475668] ---[ end trace 0000000000000000 ]---
>
> Yikes! I wonder how that has gone unnoticed this long. Could you please
> add a selftest for this so it doesn't happen again?
Yes, I will try to construct one.
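Roughly, I am thinking of something along these lines for the BPF side
(untested sketch; the map/program names and the global variable are
placeholders, and the userspace part over a veth pair is only described
below):

/* Minimal BPF object for a possible selftest: redirect through a
 * devmap and record what the second program sees in ingress_ifindex.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct bpf_devmap_val);
} dev_redirect_map SEC(".maps");

__u32 seen_ifindex; /* read by userspace via the skeleton */

SEC("xdp")
int redirect_prog(struct xdp_md *pkt)
{
	return bpf_redirect_map(&dev_redirect_map, 0, 0);
}

SEC("xdp/devmap")
int after_redirect_prog(struct xdp_md *pkt)
{
	/* Oopses on a NULL rxq without the fix; with it, userspace
	 * can assert this matches the ingress interface.
	 */
	seen_ifindex = pkt->ingress_ifindex;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Userspace would then set up a veth pair, store a struct bpf_devmap_val
with .ifindex of the peer and .bpf_prog.fd of after_redirect_prog at
key 0, attach redirect_prog with bpf_xdp_attach(), send a packet across
the pair, and check that seen_ifindex equals the ingress ifindex.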
>
>> Therefore, provide it to the program by setting rxq properly.
>>
>> Fixes: fbee97feed9b ("bpf: Add support to attach bpf program to a devmap entry")
>
> I think the fixes tag is wrong; the original commit was fine because the
> xdp_buff was still the one initialised by the driver. So this should be:
>
> Fixes: cb261b594b41 ("bpf: Run devmap xdp_prog on flush instead of bulk enqueue")
Oh, I agree. At first glance it looked like the old dev_map_run_prog
was also only setting txq, but I didn't notice that it was just
overwriting txq in the original xdp_buff, where rxq was still the one
initialised by the driver.
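For reference, paraphrasing the two code paths as I understand them now
(sketch, not the literal kernel source):

/* Before cb261b594b41, dev_map_run_prog() ran the devmap program on
 * the driver's original xdp_buff, so rxq was still valid and only
 * txq had to be filled in:
 */
	struct xdp_txq_info txq = { .dev = dev };

	xdp->txq = &txq;	/* rxq untouched, set up by the driver */

/* Since that commit, bq_xmit_all() runs the program at flush time on
 * an xdp_buff rebuilt from the queued xdp_frame, and only txq gets
 * set, leaving rxq NULL. The patch provides it from the saved
 * redirecting device (bq->dev_rx), along these lines:
 */
	struct xdp_txq_info txq = { .dev = tx_dev };
	struct xdp_rxq_info rxq = { .dev = rx_dev };

	xdp.txq = &txq;
	xdp.rxq = &rxq;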
>
> Other than that, though, the patch LGTM:
>
> Reviewed-by: Toke Høiland-Jørgensen <toke@...hat.com>
>
Thanks,
Florian