Message-ID: <57fb7b81-2ba9-4899-a110-10b4e3e637c9@blackwall.org>
Date: Wed, 22 Oct 2025 16:12:33 +0300
From: Nikolay Aleksandrov <razor@...ckwall.org>
To: Daniel Borkmann <daniel@...earbox.net>, netdev@...r.kernel.org
Cc: bpf@...r.kernel.org, kuba@...nel.org, davem@...emloft.net,
 pabeni@...hat.com, willemb@...gle.com, sdf@...ichev.me,
 john.fastabend@...il.com, martin.lau@...nel.org, jordan@...fe.io,
 maciej.fijalkowski@...el.com, magnus.karlsson@...el.com, dw@...idwei.uk,
 toke@...hat.com, yangzhenze@...edance.com, wangdongdong.6@...edance.com
Subject: Re: [PATCH net-next v3 14/15] netkit: Add io_uring zero-copy support
 for TCP

On 10/20/25 19:23, Daniel Borkmann wrote:
> From: David Wei <dw@...idwei.uk>
> 
> This adds the last missing bit to netkit for supporting io_uring with
> zero-copy mode [0]. Up until this point it was not possible to consume
> io_uring zero-copy receive from containers or Kubernetes Pods where
> applications run in their own network namespace.
> 
> Thus, as a last missing bit, implement ndo_queue_get_dma_dev() in netkit
> to return the physical device of the real rxq for DMA. This allows memory
> providers like io_uring zero-copy or devmem to bind to the physically
> mapped rxq in netkit.
> 
> io_uring example with eth0 being a physical device with 16 queues, where
> netkit is bound to the last queue; iou-zcrx is the binary built from
> iou-zcrx.c in selftests. Flow steering to that queue is based on the
> service VIP:port of the server utilizing io_uring:
> 
>   # ethtool -X eth0 start 0 equal 15
>   # ethtool -X eth0 start 15 equal 1 context new
>   # ethtool --config-ntuple eth0 flow-type tcp4 dst-ip 1.2.3.4 dst-port 5000 action 15
>   # ip netns add foo
>   # ip link add type netkit peer numrxqueues 2
>   # ./pyynl/cli.py --spec ~/netlink/specs/netdev.yaml \
>                    --do bind-queue \
>                    --json "{\"src-ifindex\": $(ifindex eth0), \"src-queue-id\": 15, \
>                             \"dst-ifindex\": $(ifindex nk0), \"queue-type\": \"rx\"}"
>   {'dst-queue-id': 1}
>   # ip link set nk0 netns foo
>   # ip link set nk1 up
>   # ip netns exec foo ip link set lo up
>   # ip netns exec foo ip link set nk0 up
>   # ip netns exec foo ip addr add 1.2.3.4/32 dev nk0
>   [ ... setup routing etc to get external traffic into the netns ... ]
>   # ip netns exec foo ./iou-zcrx -s -p 5000 -i nk0 -q 1
> 
> Remote io_uring client:
> 
>   # ./iou-zcrx -c -h 1.2.3.4 -p 5000 -l 12840 -z 65536
> 
> We have tested the above against a Broadcom BCM957504 (bnxt_en)
> 100G NIC, which supports TCP header/data split.
> 
> Similarly, this also works for devmem, which we tested using ncdevmem:
> 
>   # ip netns exec foo ./ncdevmem -s 1.2.3.4 -l -p 5000 -f nk0 -t 1 -q 1
> 
> And on the remote client:
> 
>   # ./ncdevmem -s 1.2.3.4 -p 5000 -f eth0
> 
> For Cilium, the plan is to open up support for the various memory providers
> for regular Kubernetes Pods when Cilium is configured with netkit datapath
> mode.
> 
> Signed-off-by: David Wei <dw@...idwei.uk>
> Co-developed-by: Daniel Borkmann <daniel@...earbox.net>
> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
> Link: https://kernel-recipes.org/en/2024/schedule/efficient-zero-copy-networking-using-io_uring [0]
> ---
>  drivers/net/netkit.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
> index 75b57496b72e..a281b39a1047 100644
> --- a/drivers/net/netkit.c
> +++ b/drivers/net/netkit.c
> @@ -282,6 +282,21 @@ static const struct ethtool_ops netkit_ethtool_ops = {
>  	.get_channels		= netkit_get_channels,
>  };
>  
> +static struct device *netkit_queue_get_dma_dev(struct net_device *dev, int idx)
> +{
> +	struct netdev_rx_queue *rxq, *peer_rxq;
> +	unsigned int peer_idx;
> +
> +	rxq = __netif_get_rx_queue(dev, idx);
> +	if (!rxq->peer)
> +		return NULL;
> +
> +	peer_rxq = rxq->peer;
> +	peer_idx = get_netdev_rx_queue_index(peer_rxq);
> +
> +	return netdev_queue_get_dma_dev(peer_rxq->dev, peer_idx);
> +}
> +
>  static int netkit_queue_create(struct net_device *dev)
>  {
>  	struct netkit *nk = netkit_priv(dev);
> @@ -307,7 +322,8 @@ static int netkit_queue_create(struct net_device *dev)
>  }
>  
>  static const struct netdev_queue_mgmt_ops netkit_queue_mgmt_ops = {
> -	.ndo_queue_create = netkit_queue_create,
> +	.ndo_queue_get_dma_dev		= netkit_queue_get_dma_dev,
> +	.ndo_queue_create		= netkit_queue_create,
>  };
>  
>  static struct net_device *netkit_alloc(struct nlattr *tb[],

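One note for anyone following the series: the forwarding above only pays off
because the core resolves a queue's DMA device through the queue management
ops before falling back to the device behind the netdev. A minimal sketch of
such a helper, assuming dev->queue_mgmt_ops carries the new callback and
dev->dev.parent is the fallback (illustrative only, not a verbatim copy of
the in-tree helper):

  #include <linux/netdevice.h>
  #include <net/netdev_queues.h>

  /* Sketch (assumption, not the exact in-tree code): resolve the DMA device
   * for rx queue @idx. Virtual devices like netkit override
   * ndo_queue_get_dma_dev() to forward to the physical peer rxq; everything
   * else falls back to the parent device behind the netdev.
   */
  static struct device *queue_get_dma_dev_sketch(struct net_device *dev, int idx)
  {
          const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
          struct device *dma_dev;

          if (ops && ops->ndo_queue_get_dma_dev)
                  dma_dev = ops->ndo_queue_get_dma_dev(dev, idx);
          else
                  dma_dev = dev->dev.parent;

          /* Memory providers need a device they can DMA-map against. */
          return dma_dev && dma_dev->dma_mask ? dma_dev : NULL;
  }

With something along those lines in the core, the io_uring zcrx and devmem
binds against nk0's queue 1 above end up DMA-mapping against the bnxt_en PCI
device rather than failing on a purely virtual netdev.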
Reviewed-by: Nikolay Aleksandrov <razor@...ckwall.org>

