Message-ID: <3e324602-a33a-b243-80db-6f6077ca5029@engleder-embedded.com>
Date:   Mon, 23 Jan 2023 22:21:33 +0100
From:   Gerhard Engleder <gerhard@...leder-embedded.com>
To:     Vladimir Oltean <vladimir.oltean@....com>, netdev@...r.kernel.org,
        John Fastabend <john.fastabend@...il.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Claudiu Manoil <claudiu.manoil@....com>,
        Camelia Groza <camelia.groza@....com>,
        Xiaoliang Yang <xiaoliang.yang_1@....com>,
        Vinicius Costa Gomes <vinicius.gomes@...el.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Kurt Kanzenbach <kurt@...utronix.de>,
        Ferenc Fejes <ferenc.fejes@...csson.com>,
        Tony Nguyen <anthony.l.nguyen@...el.com>,
        Jesse Brandeburg <jesse.brandeburg@...el.com>,
        Jacob Keller <jacob.e.keller@...el.com>
Subject: Re: [RFC PATCH net-next 00/11] ENETC mqprio/taprio cleanup

On 20.01.23 15:15, Vladimir Oltean wrote:
> I realize that this patch set will start a flame war, but there are
> things about the mqprio qdisc that I simply don't understand, so in an
> attempt to explain how I see things should be done, I've made some
> patches to the code. I hope the reviewers will be patient enough with me :)
> 
> I need to touch mqprio because I'm preparing a patch set for Frame
> Preemption (an IEEE 802.1Q feature). A disagreement started with
> Vinicius here:
> https://patchwork.kernel.org/project/netdevbpf/patch/20220816222920.1952936-3-vladimir.oltean@nxp.com/#24976672
> 
> regarding how TX packet prioritization should be handled. Vinicius said
> that for some Intel NICs, prioritization at the egress scheduler stage
> is fundamentally attached to TX queues rather than traffic classes.
> 
> In other words, in the "popular" mqprio configuration documented by him:
> 
> $ tc qdisc replace dev $IFACE parent root handle 100 mqprio \
>        num_tc 3 \
>        map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
>        queues 1@0 1@1 2@2 \
>        hw 0
> 
> there are 3 Linux traffic classes and 4 TX queues. The TX queues are
> organized in strict priority fashion, like this: TXQ 0 has highest prio
> (hardware dequeue precedence for TX scheduler), TXQ 3 has lowest prio.
> Packets classified by Linux to TC 2 are hashed between TXQ 2 and TXQ 3,
> but the hardware has higher precedence for TXQ 2 over TXQ 3, and Linux
> doesn't know that.

For my tsnep IP core it is similar, but with reversed priority: TXQ 0
has the lowest priority (to be used for non-real-time traffic), TXQ 1
has priority over TXQ 0, TXQ 2 has priority over TXQ 1, and so on. The
number of TX queues is flexible and depends on the requirements of the
real-time application and on the resources available within the FPGA.
The priorities are hard-coded to save FPGA resources.
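
To make that concrete: a mapping that matches such a device would put
TC i on TXQ i, so the highest traffic class ends up on the
highest-priority queue. A purely illustrative sketch, with field names
as in struct tc_mqprio_qopt from include/uapi/linux/pkt_sched.h (the
prio-to-TC map is just an example, not anything tsnep mandates):

#include <linux/pkt_sched.h>

/* Example only: mqprio mapping for a 3-TC setup on a tsnep-like device
 * where TXQ priority ascends with the queue index (TXQ 0 lowest).
 */
static const struct tc_mqprio_qopt example_tsnep_qopt = {
	.num_tc = 3,
	/* skb->priority -> TC: prio 2 -> TC 1, prio 3 -> TC 2,
	 * everything else is best effort on TC 0.
	 */
	.prio_tc_map = { 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 },
	.count  = { 1, 1, 1 },	/* one TXQ per TC */
	.offset = { 0, 1, 2 },	/* TC i -> TXQ i */
};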

> I am surprised by this fact, and this isn't how ENETC works at all.
> For ENETC, we try to prioritize on TCs rather than TXQs, and TC 7 has
> higher priority than TC 0. For us, groups of TXQs that map to the same
> TC have the same egress scheduling priority. It is possible (and maybe
> useful) to have 2 TXQs per TC (one TXQ per CPU). Patch 07/11 tries to
> make that more clear.
> 
> Furthermore (and this is really the biggest point of contention), myself
> and Vinicius have the fundamental disagreement whether the 802.1Qbv
> (taprio) gate mask should be passed to the device driver per TXQ or per
> TC. This is what patch 11/11 is about.

tsnep also expects the gate mask per TXQ. This simplifies the hardware
implementation. But it would be no problem if the gate mask were passed
per TC, as the driver could transform it to a per-TXQ mask.
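
That transformation is only a few lines, because mqprio/taprio already
program the TC-to-TXQ table into the net_device. A minimal sketch (not
actual tsnep code; the example_ name is made up):

#include <linux/bits.h>
#include <linux/netdevice.h>

/* Expand a taprio gate_mask expressed per traffic class into one
 * expressed per TX queue, using the netdev's TC-to-TXQ mapping.
 */
static u32 example_tc_gates_to_txq_gates(struct net_device *dev,
					 u32 tc_gate_mask)
{
	int num_tc = netdev_get_num_tc(dev);
	u32 txq_gate_mask = 0;
	int tc;

	for (tc = 0; tc < num_tc; tc++) {
		u16 offset = dev->tc_to_txq[tc].offset;
		u16 count = dev->tc_to_txq[tc].count;

		if (tc_gate_mask & BIT(tc))
			txq_gate_mask |= GENMASK(offset + count - 1, offset);
	}

	return txq_gate_mask;
}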

> Again, I'm not *certain* that my opinion on this topic is correct
> (and it sure is confusing to see such a different approach for Intel).
> But I would appreciate any feedback.

In my opinion it makes sense to add the mqprio queue configuration to
taprio. This would allow the driver to check whether the queue
assignment and prioritization make sense for its device. Currently,
deep hardware knowledge is needed to get the configuration right.
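
For example, a driver for a device with hard-coded ascending TXQ
priorities could then reject mappings it cannot honor. A rough sketch,
assuming taprio hands the driver the mqprio queue configuration as a
struct tc_mqprio_qopt (example_validate_mqprio() is hypothetical):

#include <linux/errno.h>
#include <linux/pkt_sched.h>

static int example_validate_mqprio(const struct tc_mqprio_qopt *qopt)
{
	int tc;

	for (tc = 0; tc < qopt->num_tc; tc++) {
		/* this device supports exactly one TXQ per TC ... */
		if (qopt->count[tc] != 1)
			return -EOPNOTSUPP;
		/* ... and TC i must sit on TXQ i, because the TXQ
		 * priorities are fixed in hardware (TXQ 0 lowest)
		 */
		if (qopt->offset[tc] != tc)
			return -EOPNOTSUPP;
	}

	return 0;
}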

Gerhard
