Message-ID: <b36a7cb6-582b-422d-82ce-98dc8985fd0d@cloudflare.com>
Date: Tue, 6 May 2025 22:31:59 -0700
From: Jesse Brandeburg <jbrandeburg@...udflare.com>
To: Michal Kubiak <michal.kubiak@...el.com>, intel-wired-lan@...ts.osuosl.org
Cc: maciej.fijalkowski@...el.com, aleksander.lobakin@...el.com,
 przemyslaw.kitszel@...el.com, dawid.osuchowski@...ux.intel.com,
 jacob.e.keller@...el.com, netdev@...r.kernel.org, kernel-team@...udflare.com
Subject: Re: [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs

On 4/22/25 8:36 AM, Michal Kubiak wrote:
> Hi,
>
> Some of our customers have reported a crash problem when trying to load
> the XDP program on machines with a large number of CPU cores. After
> extensive debugging, it became clear that the root cause of the problem
> lies in the Tx scheduler implementation, which does not seem to be able
> to handle the creation of a large number of Tx queues (even though this
> number does not exceed the number of available queues reported by the
> FW).
> This series addresses this problem.


Hi Michal,

Unfortunately this version of the series seems to reintroduce the 
original problem: error -22.

I double-checked the patches: all three appeared to be applied in our 
test build (version 2025.5.8), which contained a 6.12.26 kernel with 
this series on top.

Our setup reports a maximum of 252 combined queues on a machine with 
384 CPUs. By default it loads an XDP program and then reduces the 
number of queues to 192 using ethtool. After that we get error -22 and 
the link is down.
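
For reference, the sequence boils down to something like the following 
sketch (not our exact tooling; ext0 is the interface name from the 
dmesg further down, and xdp_prog.o is a placeholder for the actual XDP 
object file):

```shell
# Load a native XDP program on the interface (placeholder object/section).
ip link set dev ext0 xdp obj xdp_prog.o sec xdp

# Reduce the number of combined queues from the reported max of 252 to 192.
ethtool -L ext0 combined 192

# At this point dmesg shows:
#   ice 0000:c1:00.0: Failed to set LAN Tx queue context, error: -22
# and the link goes down.
```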

Sorry to bring bad news, and I know it took a while; testing this in 
our lab is a bit of a process.

The original version you sent us worked fine when we tested it, so the 
problem seems to have been introduced between those two versions. It is 
possible that something went wrong with the source code, but I 
sincerely doubt it, since I used git to apply the patches and they 
applied cleanly.

We are only able to test 6.12.y or 6.6.y stable variants of the kernel 
if you want to make a test version of a fixed series for us to try.

Thanks,

Jesse


Some dmesg output follows:

sudo dmesg | grep -E "ice 0000:c1:00.0|ice:"
[   20.932638] ice: Intel(R) Ethernet Connection E800 Series Linux Driver
[   20.932642] ice: Copyright (c) 2018, Intel Corporation.
[   21.259332] ice 0000:c1:00.0: DDP package does not support Tx scheduling layers switching feature - please update to the latest DDP package and try again
[   21.552597] ice 0000:c1:00.0: The DDP package was successfully loaded: ICE COMMS Package version 1.3.51.0
[   21.610275] ice 0000:c1:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link)
[   21.623960] ice 0000:c1:00.0: RDMA is not supported on this device
[   21.672421] ice 0000:c1:00.0: DCB is enabled in the hardware, max number of TCs supported on this port are 8
[   21.705729] ice 0000:c1:00.0: FW LLDP is disabled, DCBx/LLDP in SW mode.
[   21.722873] ice 0000:c1:00.0: Commit DCB Configuration to the hardware
[   22.086346] ice 0000:c1:00.1: DDP package already present on device: ICE COMMS Package version 1.3.51.0
[   22.289956] ice 0000:c1:00.0 ext0: renamed from eth0
[   23.137538] ice 0000:c1:00.0 ext0: NIC Link is up 25 Gbps Full Duplex, Requested FEC: RS-FEC, Negotiated FEC: NONE, Autoneg Advertised: On, Autoneg Negotiated: False, Flow Control: None
[  499.643936] ice 0000:c1:00.0: Failed to set LAN Tx queue context, error: -22
