Message-ID: <a8ea949a-ac36-4753-b8ab-f9a85004750a@redhat.com>
Date: Tue, 3 Feb 2026 09:32:21 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Marek Mietus <mmietus97@...oo.com>, netdev@...r.kernel.org,
 sd@...asysnail.net, kuba@...nel.org
Cc: Jason@...c4.com
Subject: Re: [PATCH net-next v7 00/11] net: tunnel: introduce noref xmit flows
 for tunnels

On 1/27/26 8:04 AM, Marek Mietus wrote:
> Currently, tunnel xmit flows always take a reference on the dst_entry
> for each xmitted packet. These atomic operations are redundant in some
> flows.
> 
> This patchset introduces the infrastructure required for converting
> the tunnel xmit flows to noref, and converts them where possible.
> 
> These changes improve tunnel performance, since fewer atomic
> operations are used.
> 
> There are already noref optimizations in both IPv4 and IPv6
> (see __ip_queue_xmit and inet6_csk_xmit).
> This patchset implements similar optimizations in ip and udp tunnels.
> 
> Benchmarks:
> I used a vxlan tunnel over a pair of veth peers and measured the average
> throughput over multiple samples.
> 
> I ran 100 samples on a clean build, and another 100 on a patched
> build. Each sample ran for 120 seconds. These were my results:
> 
> clean:      71.95 mb/sec, stddev = 1.71
> patched:    74.92 mb/sec, stddev = 1.35
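
[Editorial note: a minimal sketch of the refcounted vs. noref dst attach
patterns the quoted cover letter alludes to. It assumes the existing
dst_hold()/skb_dst_set()/skb_dst_set_noref() helpers from the kernel tree
and is illustrative only, not the actual patch code.]

#include <linux/skbuff.h>
#include <net/dst.h>

/*
 * Refcounted attach: every transmitted packet pays an atomic increment
 * on the dst refcount here, and a matching atomic decrement when the
 * skb is freed.
 */
static void attach_dst_refcounted(struct sk_buff *skb, struct dst_entry *dst)
{
	dst_hold(dst);		/* per-packet atomic increment */
	skb_dst_set(skb, dst);	/* skb now owns that reference */
}

/*
 * Noref attach: the caller guarantees the dst stays alive for the whole
 * xmit path (e.g. under rcu_read_lock(), as __ip_queue_xmit() does), so
 * no per-packet atomic operation is needed.
 */
static void attach_dst_noref(struct sk_buff *skb, struct dst_entry *dst)
{
	skb_dst_set_noref(skb, dst);	/* no refcount change */
}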

Which H/W are you using? I've never seen such low figures in this
decade, possibly not even in the previous one. Expected tput on
not-too-obsolete H/W is orders of magnitude higher, and we need more
relevant figures.

Thanks,

Paolo

