Message-ID: <20231007153558.GE831234@kernel.org>
Date: Sat, 7 Oct 2023 17:35:58 +0200
From: Simon Horman <horms@...nel.org>
To: Mateusz Polchlopek <mateusz.polchlopek@...el.com>
Cc: intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
Michal Wilczynski <michal.wilczynski@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-net v2 5/5] ice: Document
tx_scheduling_layers parameter
On Fri, Oct 06, 2023 at 07:02:12AM -0400, Mateusz Polchlopek wrote:
> From: Michal Wilczynski <michal.wilczynski@...el.com>
>
> A new driver-specific parameter, 'tx_scheduling_layers', was introduced.
> Describe the parameter in the documentation.
>
> Signed-off-by: Michal Wilczynski <michal.wilczynski@...el.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@...el.com>
> Co-developed-by: Mateusz Polchlopek <mateusz.polchlopek@...el.com>
> Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@...el.com>
Hi,

I'm no expert here, but this seems to cause a splat when building the
documentation:
.../ice.rst:70: WARNING: Unexpected indentation.
.../ice.rst:25: WARNING: Error parsing content block for the "list-table" directive: uniform two-level bullet list expected, but row 2 does not contain the same number of items as row 1 (3 vs 4).
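
For reference, the relevant hunk appears to be:
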
.. list-table:: Driver-specific parameters implemented
   :widths: 5 5 5 85

   * - Name
     - Type
     - Mode
     - Description
   * - ``tx_scheduling_layers``
     - u8
     - permanent
       The ice hardware uses hierarchical scheduling for Tx, with a fixed
       number of layers in the scheduling tree. The root node represents a
       port, while the leaves represent the queues. Configuring the Tx
       scheduler this way allows features like DCB or devlink-rate
       (documented below) fine-grained control over how much BW is given
       to any given queue or group of queues, since scheduling parameters
       can be configured at any layer of the tree. The default 9-layer
       tree topology was deemed best for most workloads, as it gives an
       optimal ratio of performance to configurability. However, for some
       specific cases this might not be true. A good example is sending
       traffic to a number of queues that is not a multiple of 8. Since in
       the 9-layer topology the maximum number of children per node is
       limited to 8, the 9th queue has a different parent than the rest
       and is given more BW credits. This causes a problem when the system
       is sending traffic to 9 queues:
       | tx_queue_0_packets: 24163396
       | tx_queue_1_packets: 24164623
       | tx_queue_2_packets: 24163188
       | tx_queue_3_packets: 24163701
       | tx_queue_4_packets: 24163683
       | tx_queue_5_packets: 24164668
       | tx_queue_6_packets: 23327200
       | tx_queue_7_packets: 24163853
       | tx_queue_8_packets: 91101417 < Too much traffic is sent to the 9th queue
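       That is almost four times the traffic of each of the other queues
       (91101417 / 24163396 ≈ 3.8).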
       Sometimes this can be a significant problem, so the idea is to
       empower the user to switch to a 5-layer topology, enabling
       performance gains at the cost of configurability for features like
       DCB and devlink-rate. This parameter gives the user the flexibility
       to choose the 5-layer transmit scheduler topology. After changing
       the parameter, a reboot is required for the change to take effect.
       The user can choose 9 (the default) or 5 as the value of the
       parameter, e.g.:
       $ devlink dev param set pci/0000:16:00.0 name tx_scheduling_layers \
             value 5 cmode permanent

       And verify that the value has been set:

       $ devlink dev param show pci/0000:16:00.0 name tx_scheduling_layers
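
FWIW, the list-table error looks like the Description cell of the
``tx_scheduling_layers`` row is missing its leading '- ': with it, rows
one and two would both have four cells, which matches the "3 vs 4" in
the warning. A sketch of what I would expect to parse cleanly (untested;
the continuation lines of the cell would need matching indentation):

   * - ``tx_scheduling_layers``
     - u8
     - permanent
     - The ice hardware uses hierarchical scheduling for Tx, with a fixed
       number of layers in the scheduling tree.
       ...

The "Unexpected indentation" warning is probably the indented command
examples; ending the preceding sentence with '::' so that they are
parsed as a literal block may help, but I have not tested that either.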