Message-ID: <e9c6903c-e440-46b3-860e-8782bfe4efb2@gmail.com>
Date: Mon, 22 Sep 2025 11:17:58 +0800
From: zf <zf15750701@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, kuba@...nel.org, davem@...emloft.net,
razor@...ckwall.org, pabeni@...hat.com, willemb@...gle.com, sdf@...ichev.me,
john.fastabend@...il.com, martin.lau@...nel.org, jordan@...fe.io,
maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
David Wei <dw@...idwei.uk>, yangzhenze@...edance.com,
Dongdong Wang <wangdongdong.6@...edance.com>
Subject: Re: [PATCH net-next 18/20] netkit: Add io_uring zero-copy support for
TCP
On 2025/9/20 05:31, Daniel Borkmann wrote:
> From: David Wei <dw@...idwei.uk>
>
> This adds the last missing bit to netkit for supporting io_uring with
> zero-copy mode [0]. Up until this point it was not possible to consume
> the latter out of containers or Kubernetes Pods where applications are
> in their own network namespace.
>
> Thus, as a last missing bit, implement ndo_queue_get_dma_dev() in netkit
> to return the physical device of the real rxq for DMA. This allows memory
> providers like io_uring zero-copy or devmem to bind to the physically
> mapped rxq in netkit.
>
> io_uring example with eth0 being a physical device with 16 queues, where
> netkit is bound to the last queue; iou-zcrx is the binary built from the
> selftests' iou-zcrx.c.
> Flow steering to that queue is based on the service VIP:port of the
> server utilizing io_uring:
>
> # ethtool -X eth0 start 0 equal 15
> # ethtool -X eth0 start 15 equal 1 context new
> # ethtool --config-ntuple eth0 flow-type tcp4 dst-ip 1.2.3.4 dst-port 5000 action 15
> # ip netns add foo
> # ip link add numrxqueues 2 type netkit
> # ynl-bind eth0 15 nk0
> # ip link set nk0 netns foo
> # ip link set nk1 up
> # ip netns exec foo ip link set lo up
> # ip netns exec foo ip link set nk0 up
> # ip netns exec foo ip addr add 1.2.3.4/32 dev nk0
> [ ... setup routing etc to get external traffic into the netns ... ]
> # ip netns exec foo ./iou-zcrx -s -p 5000 -i nk0 -q 1
>
> Remote io_uring client:
>
> # ./iou-zcrx -c -h 1.2.3.4 -p 5000 -l 12840 -z 65536
>
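As an aside for anyone reproducing the example above: the steering rule
and whether traffic actually lands on queue 15 can presumably be checked
with something like the following (a sanity-check sketch, not part of
the patch; per-queue counter names vary by driver, so the grep pattern
is only illustrative):

  # ethtool --show-ntuple eth0
  # ethtool -S eth0 | grep -i rx15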
> We have tested the above against a dual-port Nvidia ConnectX-6 (mlx5)
> 100G NIC as well as Broadcom BCM957504 (bnxt_en) 100G NIC, both
> supporting TCP header/data split. For Cilium, the plan is to open
> up support for io_uring in zero-copy mode for regular Kubernetes Pods
> when Cilium is configured with netkit datapath mode.
>
From what we have learned, mlx5 supports TCP header/data split starting
with ConnectX-7, relying on hardware RX GRO. Can ConnectX-6 do TCP
header/data split as well? Could you share the mlx5 driver and firmware
versions of your ConnectX-6 setup so that I can test it? If ConnectX-6
is supported, that would be even better for us. Thanks.
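
For reference, I would compare the driver/firmware versions and whether
the NIC reports header/data split roughly like this (a sketch; the
"TCP data split" line in the ring parameters only shows up with a recent
enough ethtool and kernel):

  # ethtool -i eth0
  # ethtool -g eth0 | grep -i 'tcp data split'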
> Signed-off-by: David Wei <dw@...idwei.uk>
> Co-developed-by: Daniel Borkmann <daniel@...earbox.net>
> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
> Link: https://kernel-recipes.org/en/2024/schedule/efficient-zero-copy-networking-using-io_uring [0]
> ---
> drivers/net/netkit.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
> index 27ff84833f28..5129b27a7c3c 100644
> --- a/drivers/net/netkit.c
> +++ b/drivers/net/netkit.c
> @@ -274,6 +274,21 @@ static const struct ethtool_ops netkit_ethtool_ops = {
> .get_channels = netkit_get_channels,
> };
>
> +static struct device *netkit_queue_get_dma_dev(struct net_device *dev, int idx)
> +{
> + struct netdev_rx_queue *rxq, *peer_rxq;
> + unsigned int peer_idx;
> +
> + rxq = __netif_get_rx_queue(dev, idx);
> + if (!rxq->peer)
> + return NULL;
> +
> + peer_rxq = rxq->peer;
> + peer_idx = get_netdev_rx_queue_index(peer_rxq);
> +
> + return netdev_queue_get_dma_dev(peer_rxq->dev, peer_idx);
> +}
> +
> static int netkit_queue_create(struct net_device *dev)
> {
> struct netkit *nk = netkit_priv(dev);
> @@ -299,7 +314,8 @@ static int netkit_queue_create(struct net_device *dev)
> }
>
> static const struct netdev_queue_mgmt_ops netkit_queue_mgmt_ops = {
> - .ndo_queue_create = netkit_queue_create,
> + .ndo_queue_get_dma_dev = netkit_queue_get_dma_dev,
> + .ndo_queue_create = netkit_queue_create,
> };
>
> static struct net_device *netkit_alloc(struct nlattr *tb[],