Message-ID: <5d139efa-c78e-4323-b79d-bbf566ac19b8@iogearbox.net>
Date: Wed, 24 Sep 2025 12:41:00 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
 netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, kuba@...nel.org, davem@...emloft.net,
 razor@...ckwall.org, pabeni@...hat.com, willemb@...gle.com, sdf@...ichev.me,
 john.fastabend@...il.com, martin.lau@...nel.org, jordan@...fe.io,
 maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
 David Wei <dw@...idwei.uk>
Subject: Re: [PATCH net-next 19/20] netkit: Add xsk support for af_xdp
 applications

On 9/23/25 1:42 PM, Toke Høiland-Jørgensen wrote:
> Daniel Borkmann <daniel@...earbox.net> writes:
> 
>> Enable support for AF_XDP applications to operate on a netkit device.
>> The goal is that AF_XDP applications can natively consume AF_XDP
>> from network namespaces. The use-case from Cilium side is to support
>> Kubernetes KubeVirt VMs through QEMU's AF_XDP backend. KubeVirt is a
>> virtual machine management add-on for Kubernetes which aims to provide
>> a common ground for virtualization. KubeVirt spawns the VMs inside
>> Kubernetes Pods which reside in their own network namespace just like
>> regular Pods.
>>
>> Raw QEMU AF_XDP backend example, with eth0 being a physical device with
>> 16 queues and netkit bound to the last queue (for multi-queue, an RSS
>> context can be used if supported by the driver):
>>
>>    # ethtool -X eth0 start 0 equal 15
>>    # ethtool -X eth0 start 15 equal 1 context new
>>    # ethtool --config-ntuple eth0 flow-type ether \
>>              src 00:00:00:00:00:00 \
>>              src-mask ff:ff:ff:ff:ff:ff \
>>              dst $mac dst-mask 00:00:00:00:00:00 \
>>              proto 0 proto-mask 0xffff action 15
>>    # ip netns add foo
>>    # ip link add numrxqueues 2 nk type netkit single
>>    # ynl-bind eth0 15 nk
>>    # ip link set nk netns foo
>>    # ip netns exec foo ip link set lo up
>>    # ip netns exec foo ip link set nk up
>>    # ip netns exec foo qemu-system-x86_64 \
>>            -kernel $kernel \
>>            -drive file=${image_name},index=0,media=disk,format=raw \
>>            -append "root=/dev/sda rw console=ttyS0" \
>>            -cpu host \
>>            -m $memory \
>>            -enable-kvm \
>>            -device virtio-net-pci,netdev=net0,mac=$mac \
>>            -netdev af-xdp,ifname=nk,id=net0,mode=native,queues=1,start-queue=1,inhibit=on,map-path=$dir/xsks_map \
>>            -nographic
> 
> So AFAICT, this example relies on the control plane installing an XDP
> program on the physical NIC which will redirect into the right socket;
> and since in this example, qemu will install the XSK socket at index 1
> in the xsk map, that XDP program will also need to be aware of the queue
> index mapping. I can see from your qemu commit[0] that there's support
> on the qemu side for specifying an offset into the map to avoid having
> to do this translation in the XDP program, but at the very least that
> makes this example incomplete, no?
> 
> However, even with a complete example, this breaks isolation in the
> sense that the entire XSK map is visible inside the pod, so a
> misbehaving qemu could interfere with traffic on other queues (by
> clearing the map, say). Which seems less than ideal?

For getting to a first starting point to connect all the pieces with
KubeVirt, bind-mounting the xsk map from Cilium into the VM launcher Pod
(which acts as a regular K8s Pod) is not perfect, but it's also not a big
issue: the map is out of reach of the application sitting inside the VM
(and some of the control plane aspects are baked into the launcher Pod
already), so the isolation barrier is still the VM. Eventually my goal is
to have an xdp/xsk redirect extension where we don't need the xsk map at
all and can just derive the target xsk from the rxq the traffic was
received on.
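
To make the pattern under discussion concrete, here is a minimal sketch of
such a redirect program. This is an illustrative assumption, not the actual
Cilium program: a pinned BPF_MAP_TYPE_XSKMAP named xsks_map plus an XDP
program on the physical device, with a hypothetical translation from
physical rx queue to map slot (queue 15 to slot 1, matching start-queue=1
in the qemu example above).

  /* xsk_redirect.bpf.c -- hypothetical sketch, not part of this series. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_XSKMAP);
          __uint(max_entries, 64);        /* arbitrary size for the sketch */
          __type(key, __u32);
          __type(value, __u32);
  } xsks_map SEC(".maps");

  /* Hypothetical queue-to-slot translation: physical rx queue 15 ends up
   * in map slot 1, matching start-queue=1 on the qemu side. Both values
   * are assumptions for this sketch. */
  const volatile __u32 base_queue = 15;
  const volatile __u32 slot_offset = 1;

  SEC("xdp")
  int xsk_redirect(struct xdp_md *ctx)
  {
          __u32 qid = ctx->rx_queue_index;

          if (qid < base_queue)
                  return XDP_PASS;

          /* Redirect into the AF_XDP socket registered at the translated
           * slot; fall back to the regular stack if none is bound there. */
          return bpf_redirect_map(&xsks_map, qid - base_queue + slot_offset,
                                  XDP_PASS);
  }

  char _license[] SEC("license") = "GPL";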

> Taking a step back, for AF_XDP we already support decoupling the
> application-side access to the redirected packets from the interface,
> through the use of sockets. Meaning that your use case here could just
> as well be served by the control plane setting up AF_XDP socket(s) on
> the physical NIC and passing those into qemu, in which case we don't
> need this whole queue proxying dance at all.

Cilium should not act as a proxy handing out xsk sockets. Existing
applications expect a netdev from the kernel side and should not need to
be rewritten just to implement one CNI's protocol. Also, the memory should
not be accounted against Cilium but rather against the application Pod
which is actually consuming af_xdp. Further, on up/downgrades we expect
the data plane to be completely decoupled from the control plane; if
Cilium owned the sockets, that would be disruptive, which is a no-go.
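
For illustration only, a minimal libxdp-based sketch of what such an
existing application does against a plain netdev is below; the device name
"nk" and queue index 1 are taken from the example above, while frame count,
flags and the trimmed error handling are assumptions for the sketch.

  /* xsk_consumer.c -- hypothetical minimal AF_XDP consumer using libxdp. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <xdp/xsk.h>

  #define NUM_FRAMES 4096

  int main(void)
  {
          size_t size = NUM_FRAMES * XSK_UMEM__DEFAULT_FRAME_SIZE;
          struct xsk_ring_prod fq, tx;
          struct xsk_ring_cons cq, rx;
          struct xsk_umem *umem;
          struct xsk_socket *xsk;
          void *bufs;
          int ret;

          /* UMEM area shared between the application and the kernel. */
          bufs = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (bufs == MAP_FAILED)
                  return 1;

          ret = xsk_umem__create(&umem, bufs, size, &fq, &cq, NULL);
          if (ret)
                  return 1;

          struct xsk_socket_config cfg = {
                  .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
                  .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
                  /* The redirect program lives on the physical device, so
                   * don't let libxdp load its default one (the equivalent
                   * of inhibit=on in the qemu example). */
                  .libxdp_flags = XSK_LIBXDP_FLAGS__INHIBIT_PROG_LOAD,
          };

          /* Bind to queue 1 of the netkit device "nk" in the Pod's netns. */
          ret = xsk_socket__create(&xsk, "nk", 1, umem, &rx, &tx, &cfg);
          if (ret) {
                  fprintf(stderr, "xsk_socket__create: %s\n", strerror(-ret));
                  return 1;
          }

          /* ... populate the fill ring, poll rx, produce on tx ... */

          xsk_socket__delete(xsk);
          xsk_umem__delete(umem);
          munmap(bufs, size);
          return 0;
  }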

> So, erm, what am I missing that makes this worth it (for AF_XDP; I can
> see how it is useful for other things)? :)

Yep, there are other use cases we've seen from Cilium users as well,
e.g. running DPDK applications on top of af_xdp in regular K8s Pods.
