Message-ID: <aBy_vW9AixQ4nREM@localhost.localdomain>
Date: Thu, 8 May 2025 16:29:17 +0200
From: Michal Kubiak <michal.kubiak@...el.com>
To: Jesse Brandeburg <jbrandeburg@...udflare.com>
CC: <intel-wired-lan@...ts.osuosl.org>, <maciej.fijalkowski@...el.com>,
<aleksander.lobakin@...el.com>, <przemyslaw.kitszel@...el.com>,
<dawid.osuchowski@...ux.intel.com>, <jacob.e.keller@...el.com>,
<netdev@...r.kernel.org>, <kernel-team@...udflare.com>
Subject: Re: [PATCH iwl-net 0/3] Fix XDP loading on machines with many CPUs
On Wed, May 07, 2025 at 10:51:47PM -0700, Jesse Brandeburg wrote:
> On 5/7/25 1:00 AM, Michal Kubiak wrote:
> > On Tue, May 06, 2025 at 10:31:59PM -0700, Jesse Brandeburg wrote:
> > > On 4/22/25 8:36 AM, Michal Kubiak wrote:
> > > > Hi,
> > > >
> > > > Some of our customers have reported a crash problem when trying to load
> > > > the XDP program on machines with a large number of CPU cores. After
> > > > extensive debugging, it became clear that the root cause of the problem
> > > > lies in the Tx scheduler implementation, which does not seem to be able
> > > > to handle the creation of a large number of Tx queues (even though this
> > > > number does not exceed the number of available queues reported by the
> > > > FW).
> > > > This series addresses this problem.
> > >
> > > Hi Michal,
> > >
> > > Unfortunately this version of the series seems to reintroduce the original
> > > problem error: -22.
> > Hi Jesse,
> >
> > Thanks for testing and reporting!
> >
> > I will take a look at the problem and try to reproduce it locally. I would also
> > have a few questions inline.
> >
> > First, was your original problem not the failure with error: -5? Or did you have
> > both (-5 and -22), depending on the scenario/environment?
> > I am asking because it seems that these two errors occurred at different
> > initialization stages of the tx scheduler. Of course, the series
> > was intended to address both of these issues.
>
>
> We had a few issues to work through; I believe the original problem we had
> was -22 (just confirmed) with more than 320 CPUs.
>
OK. In fact, there were a few problems in the Tx scheduler
implementation, and the error code depended on the queue number:
a different part of the scheduler code became the bottleneck depending
on the configuration, so the error could be returned from different functions.
> > > I double checked the patches, they looked like they were applied in our test
> > > version 2025.5.8 build which contained a 6.12.26 kernel with this series
> > > applied (all 3)
> > >
> > > Our setup is saying max 252 combined queues, but running 384 CPUs by
> > > default, loads an XDP program, then reduces the number of queues using
> > > ethtool, to 192. After that we get the error -22 and link is down.
> > >
> > To be honest, I did not test the scenario in which the number of queues is
> > reduced while the XDP program is running. This is the first thing I will check.
>
> Cool, I hope it will help your repro, but see below.
>
I can now confirm that this is the problematic scenario. I have successfully
reproduced it locally with both the draft and current versions of my series.
Also, if I reverse the order of the calls (change the queue number first,
and then load the XDP program), the problem does not occur.
The good news is that I seem to have already found the root cause of the problem
and have a draft fix that appears to work.
During debugging, I realized that the flow for rebuilding the Tx scheduler tree
on an `ethtool -L` call is different from the flow used when adding XDP Tx queues
(when the program is loaded).
When the `ethtool -L` command is called, the VSI is rebuilt including
the Tx scheduler. This means that all the scheduler nodes associated with the VSI
are removed and added again.
The point of my series was to create a way to add additional "VSI support nodes"
(if needed) to handle a high number of Tx queues. Although I modified
the algorithm for adding nodes, I did not touch the function for removing them.
As a result, some extra "VSI support nodes" were still present in the tree when
the VSI was rebuilt, so there was no room to add those nodes again after
the VSI was restarted with a different queue number.
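To make the imbalance concrete, here is a purely illustrative toy model (not
the actual ice scheduler code; the names, the slot limit and the bookkeeping
are made up, only the 512-queues-per-node figure is taken from the example
later in this mail): the add path grows the set of support nodes as needed,
while the buggy remove path gives back only the default node, so the re-add
during the VSI rebuild runs out of room.

/*
 * Toy model (hypothetical, not the ice driver code) of the add/remove
 * imbalance: extra "VSI support nodes" added for XDP keep consuming slots
 * in the scheduler tree after a buggy removal, so the VSI rebuild fails.
 */
#include <stdio.h>

#define TREE_SLOTS      4       /* made-up limit on support nodes in the tree */
#define QUEUES_PER_NODE 512     /* queues one support node can handle */

static int slots_used;          /* support-node slots consumed in the tree */
static int vsi_nodes;           /* support nodes the VSI currently tracks */

static int add_vsi_support_nodes(int num_queues)
{
    int needed = (num_queues + QUEUES_PER_NODE - 1) / QUEUES_PER_NODE;

    while (vsi_nodes < needed) {
        if (slots_used == TREE_SLOTS)
            return -22;         /* no room left in the tree (-EINVAL) */
        slots_used++;
        vsi_nodes++;
    }
    return 0;
}

/* Buggy remove path: forgets the extra support nodes added for XDP. */
static void rm_vsi_support_nodes_buggy(void)
{
    slots_used -= 1;            /* only the default node is returned */
    vsi_nodes = 0;
}

/* Fixed remove path (the idea behind v2): free everything the VSI owns. */
static void rm_vsi_support_nodes_fixed(void)
{
    slots_used -= vsi_nodes;
    vsi_nodes = 0;
}

int main(void)
{
    /* Load XDP with many queues: several support nodes get added. */
    printf("XDP load:   %d\n", add_vsi_support_nodes(3 * QUEUES_PER_NODE));

    /* ethtool -L rebuilds the VSI: remove nodes, then add them again. */
    rm_vsi_support_nodes_buggy();
    printf("ethtool -L: %d\n", add_vsi_support_nodes(3 * QUEUES_PER_NODE));

    /* The same sequence with the fixed remove path succeeds. */
    slots_used = 0;
    vsi_nodes = 0;
    add_vsi_support_nodes(3 * QUEUES_PER_NODE);
    rm_vsi_support_nodes_fixed();
    printf("fixed:      %d\n", add_vsi_support_nodes(3 * QUEUES_PER_NODE));

    return 0;
}

With the buggy removal the second add returns -22, while swapping in the
fixed removal makes it succeed, which matches what I see with my draft fix.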
> > Can you please confirm that you did that step on both the current
> > and the draft version of the series?
> > It would also be interesting to check what happens if the queue number is reduced
> > before loading the XDP program.
>
> We noticed we had a difference in the testing of draft and current. We have
> a patch against the kernel that was helping us work around this issue, which
> looked like this:
>
> [...]
>
> The module parameter helped us limit the number of vectors, which allowed
> our machines to finish booting before your new patches were available.
>
> The failure of the new patch was when this value was set to 252, and the
> "draft" patch also fails in this configuration (this is new info from today)
>
>
> > > The original version you had sent us was working fine when we tested it, so
> > > the problem seems to be between those two versions. I suppose it could be
> > > possible (but unlikely because I used git to apply the patches) that there
> > > was something wrong with the source code, but I sincerely doubt it as the
> > > patches had applied cleanly.
> So the problem is also related to the initial number of queues the driver
> starts with. The reason it worked fine was we tested "draft" (and now the new
> patches too) with the module parameter set to 384 queues (with 384 CPUs), or
> letting it default to 128 queues; both worked with the new and old series.
> 252 seems to be some magic failure-causing number with both patches, we don't
> know why.
>
During my work on the series, I had a similar observation about some "magic"
limits that changed the behavior (e.g. the error code).
I think it is because increasing the queue count can cause a step change in
the capacity of the Tx scheduler tree.
For example, if the current VSI nodes in the tree are already full, adding just
one more Tx queue triggers the insertion of another "VSI support node" that can
handle 512 more Tx queues.
I guess that with 384 queues you could have more (almost empty) VSI support nodes
in the tree, which could handle more queues after calling `ethtool -L`. In such
a case the problem of not freeing some nodes might be masked.
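A rough way to picture that step change (again, everything here is made up
apart from the 512 queues-per-node figure above):

/* Hypothetical sketch of the capacity "step": support nodes come in whole
 * 512-queue increments, so one extra queue past a full node pulls in a whole
 * new node's worth of spare capacity. */
#include <stdio.h>

#define QUEUES_PER_SUPPORT_NODE 512     /* from the example above */

static int support_nodes_needed(int num_queues)
{
    return (num_queues + QUEUES_PER_SUPPORT_NODE - 1) / QUEUES_PER_SUPPORT_NODE;
}

int main(void)
{
    int q;

    for (q = 511; q <= 513; q++)
        printf("%d queues -> %d support node(s), spare capacity %d\n",
               q, support_nodes_needed(q),
               support_nodes_needed(q) * QUEUES_PER_SUPPORT_NODE - q);
    return 0;
}

Going from 512 to 513 queues jumps the spare capacity from 0 to 511, which is
why nearby queue counts can behave so differently when leftover nodes are (or
are not) available after the rebuild.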
>
> Thanks for your patience while we worked through the testing differences
> here today. Hope this helps and let me know if you have more questions.
>
>
> Jesse
>
Thanks again for reporting this bug, as it seems to have exposed a serious flaw
in the v1 of my fix.
As a next step, I will send v2 of the series directly to IWL, where
(in patch #3) I will extend the algorithm for removing VSI nodes (to remove
all nodes related to a given VSI). This seems to help in my local testing.
Thanks,
Michal