Message-ID: <ad6c4448-8fb3-4a5c-91b0-8739f95cf65b@nvidia.com>
Date: Mon, 1 Dec 2025 11:12:55 +0100
From: Dragos Tatulea <dtatulea@...dia.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, Jakub Kicinski
 <kuba@...nel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
 "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Paolo Abeni <pabeni@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
 Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
 Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
 Yonghong Song <yonghong.song@...ux.dev>,
 John Fastabend <john.fastabend@...il.com>,
 Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>,
 Jiri Olsa <jolsa@...nel.org>, Simon Horman <horms@...nel.org>,
 Toshiaki Makita <toshiaki.makita1@...il.com>,
 David Ahern <dsahern@...nel.org>, Toke Hoiland Jorgensen <toke@...hat.com>
Cc: Tariq Toukan <tariqt@...dia.com>, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org, bpf@...r.kernel.org,
 Martin KaFai Lau <martin.lau@...ux.dev>, KP Singh <kpsingh@...nel.org>
Subject: Re: [RFC 2/2] xdp: Delegate fast path return decision to page_pool



[...]
>> And then you can run this command:
>>  sudo ./xdp-bench redirect-map --load-egress mlx5p1 mlx5p1
>>
> Ah, yes! I was ignorant about the egress part of the program.
> That did the trick. The drop happens before reaching the tx
> queue of the second netdev and the mentioned code in devmem.c
> is reached.
> 
> Sender is xdp-trafficgen with 3 threads pushing enough on one RX queue
> to saturate the CPU.
> 
> Here's what I got:
> 
> * before:
> 
> eth2->eth3             16,153,328 rx/s         16,153,329 err,drop/s            0 xmit/s       
>   xmit eth2->eth3               0 xmit/s       16,153,329 drop/s                0 drv_err/s         16.00 bulk-avg     
> eth2->eth3             16,152,538 rx/s         16,152,546 err,drop/s            0 xmit/s       
>   xmit eth2->eth3               0 xmit/s       16,152,546 drop/s                0 drv_err/s         16.00 bulk-avg     
> eth2->eth3             16,156,331 rx/s         16,156,337 err,drop/s            0 xmit/s       
>   xmit eth2->eth3               0 xmit/s       16,156,337 drop/s                0 drv_err/s         16.00 bulk-avg
> 
> * after:
> 
> eth2->eth3             16,105,461 rx/s         16,105,469 err,drop/s            0 xmit/s        
>   xmit eth2->eth3               0 xmit/s       16,105,469 drop/s                0 drv_err/s         16.00 bulk-avg     
> eth2->eth3             16,119,550 rx/s         16,119,541 err,drop/s            0 xmit/s        
>   xmit eth2->eth3               0 xmit/s       16,119,541 drop/s                0 drv_err/s         16.00 bulk-avg     
> eth2->eth3             16,092,145 rx/s         16,092,154 err,drop/s            0 xmit/s        
>   xmit eth2->eth3               0 xmit/s       16,092,154 drop/s                0 drv_err/s         16.00 bulk-avg
> 
> So slightly worse... I don't fully trust the measurements though, as I
> saw the inverse in other tests as well: a higher rate after the
> patch.
I had a chance to re-run this on a more stable system and the conclusion
is the same. Performance is ~2% worse:

* before:
eth2->eth3        13,746,431 rx/s   13,746,471 err,drop/s 0 xmit/s    
  xmit eth2->eth3          0 xmit/s 13,746,471 drop/s     0 drv_err/s 16.00 bulk-avg 

* after:
eth2->eth3        13,437,277 rx/s   13,437,259 err,drop/s 0 xmit/s    
  xmit eth2->eth3          0 xmit/s 13,437,259 drop/s     0 drv_err/s 16.00 bulk-avg 
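As a quick sanity check (not part of the measurements themselves), the ~2% figure follows directly from the two rx rates quoted above:

```python
# Relative regression computed from the before/after rx rates above.
before = 13_746_431  # rx/s before the patch
after = 13_437_277   # rx/s after the patch

pct = (before - after) / before * 100
print(f"regression: {pct:.2f}%")  # prints "regression: 2.25%"
```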

After this experiment, this direction doesn't seem worth pursuing
further... I was more optimistic at the start.

>>> Toke (and I) would appreciate it if you added code for this to xdp-bench.
>> Supporting a --program-mode like 'redirect-cpu' does.
>>
>>
> Ok. I will add it.
> 
Added it here:
https://github.com/xdp-project/xdp-tools/pull/532

Thanks,
Dragos
