Message-ID: <20250609095824.414cffa1@kernel.org>
Date: Mon, 9 Jun 2025 09:58:24 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Bui Quang Minh <minhquangbui99@...il.com>
Cc: Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org, "Michael S.
 Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>, Xuan Zhuo
 <xuanzhuo@...ux.alibaba.com>, Eugenio Pérez
 <eperezma@...hat.com>, Andrew Lunn <andrew+netdev@...n.ch>, "David S.
 Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Alexei
 Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
 Jesper Dangaard Brouer <hawk@...nel.org>, John Fastabend
 <john.fastabend@...il.com>, virtualization@...ts.linux.dev,
 linux-kernel@...r.kernel.org, bpf@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH net] virtio-net: drop the multi-buffer XDP packet in
 zerocopy

On Fri, 6 Jun 2025 22:48:53 +0700 Bui Quang Minh wrote:
> >> But currently, if a multi-buffer packet arrives, it will not go through
> >> the XDP program, so it doesn't increase the stats but still goes to the
> >> network stack. So I think that's not correct behavior.  
> > Sounds fair, but at a glance the normal XDP path seems to be trying to
> > linearize the frame. Can we not try to flatten the frame here?
> > If it's simply too long for the chunk size, that's a frame length
> > error, right?  
> 
> Here we are in the zerocopy path, so the buffers that the frame fills 
> are allocated from the XDP socket's umem. And if the frame spans 
> multiple buffers, then the total frame size is larger than the chunk 
> size.

Is that always the case? Can the multi-buf not be due to header-data
split of the incoming frame? (I'm not familiar with the virtio spec)
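
To make the constraint concrete, here is a rough sketch of the drop
being discussed (a hedged illustration, not the actual patch;
virtnet_xsk_check_mb() and the num_buf plumbing are made-up names):

#include <linux/errno.h>
#include <linux/types.h>
#include <net/xdp.h>
#include <net/xdp_sock_drv.h>

/* Hypothetical helper, loosely modeled on virtio-net's mergeable-buffer
 * RX: num_buf is how many fixed-size umem chunks the device used for
 * this frame (virtio_net_hdr_mrg_rxbuf::num_buffers).
 */
static int virtnet_xsk_check_mb(struct xdp_buff *xdp, u16 num_buf)
{
	if (num_buf > 1) {
		/* The frame spans multiple umem chunks, so it can never
		 * be handed to the XDP program as a single buffer, and
		 * it cannot be linearized without copying.  Drop it
		 * instead of letting it bypass the program.  (The rest
		 * of the frame's chunks must be freed too; only the
		 * first is shown here for brevity.)
		 */
		xsk_buff_free(xdp);
		return -EINVAL;	/* caller counts this as an rx drop */
	}
	return 0;	/* single chunk: safe to run the XDP program */
}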

> Furthermore, we are in the zerocopy path, so we cannot linearize by 
> allocating a buffer large enough to cover the whole frame and then 
> copying the frame data into it. That's not zerocopy anymore. Also, XDP 
> socket zerocopy receive assumes that the packets it receives come from 
> the umem pool. AFAIK, the generic XDP path is for copy mode only.

Generic XDP == do_xdp_generic(); here I think you mean the normal XDP
path in the virtio driver? If so then no, XDP is very much not
expected to copy each frame before processing.
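
For reference, a simplified sketch of the distinction (not exact
kernel code):

#include <linux/filter.h>
#include <net/xdp.h>

/* Native XDP: the driver runs the program directly on the raw RX
 * buffer, before any skb exists, so no copy is implied.
 */
static u32 run_native_xdp(struct bpf_prog *prog, struct xdp_buff *xdp)
{
	return bpf_prog_run_xdp(prog, xdp);	/* XDP_PASS/DROP/TX/... */
}

/* Generic XDP: the core stack calls do_xdp_generic() from the
 * netif_receive_skb() path on an already-built skb, and may have to
 * linearize (copy) the data first.  That copy-mode fallback is what
 * "generic" means here; native XDP never copies a frame just to run
 * the program.
 */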

This is only slightly related to your patch, but while we're talking
about multi-buf: in the netdev CI, the test which sends a ping while an
XDP multi-buf program is attached is really flaky :(
https://netdev.bots.linux.dev/contest.html?executor=vmksft-drv-hw&test=ping-py.ping-test-xdp-native-mb&ld-cases=1
