Message-ID: <875yhyf74c.fsf@nvidia.com>
Date: Thu, 8 Sep 2022 10:27:58 +0200
From: Petr Machata <petrm@...dia.com>
To: Vladimir Oltean <vladimir.oltean@....com>
CC: "Daniel.Machon@...rochip.com" <Daniel.Machon@...rochip.com>,
"Allan.Nielsen@...rochip.com" <Allan.Nielsen@...rochip.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"petrm@...dia.com" <petrm@...dia.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"vinicius.gomes@...el.com" <vinicius.gomes@...el.com>,
"thomas.petazzoni@...tlin.com" <thomas.petazzoni@...tlin.com>,
"maxime.chevallier@...tlin.com" <maxime.chevallier@...tlin.com>,
"roopa@...dia.com" <roopa@...dia.com>
Subject: Re: Basic PCP/DEI-based queue classification

Vladimir Oltean <vladimir.oltean@....com> writes:

> The problem with the ingress-qos-map and egress-qos-map from 802.1Q
> that I see is that they allow for per-VID prioritization, which is way
> more fine-grained than what we need. On top of that, bridge VLANs
> don't have this setting; only termination (8021q) VLANs do.
>
> How about an ingress-qos-map and an egress-qos-map per port rather
> than per VID, potentially even a bridge_slave netlink attribute,
> offloadable through switchdev? We could make the bridge input fast
> path alter skb->priority for the VLAN-tagged code paths, and this
> could give us superior semantics compared to putting this
> non-standardized knob in the hardware-only dcbnl.
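
For reference, those per-VID maps live on the 8021q upper today and are
configured something like this (untested; device name, VID and values
are made up for illustration):

    # Map PCP 1 to skb->priority 2 on ingress, and skb->priority 2
    # back to PCP 1 on egress, for this one VID only:
    ip link add link eth0 name eth0.100 type vlan id 100 \
        ingress-qos-map 1:2 egress-qos-map 2:1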

Per-netdevice qos map is exactly what we are looking for. I think it
wasn't even considered because the layering is so obviously wrong.
Stuff like this really belongs in TC.
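
FWIW, TC can already express a per-port variant of this, one filter per
PCP value (a sketch, untested; "swp1" is made up):

    # Rewrite skb->priority from the PCP of VLAN-tagged packets:
    tc filter add dev swp1 ingress protocol 802.1q \
        flower vlan_prio 5 \
        action skbedit priority 5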

Having them on bridge_slave would IMHO not solve much, the layering
violation aside :) If you use anything besides vlan_filtering bridges,
you have X places that need to be configured consistently, which is not
great for either offload or configuration.

Given there's one piece of HW to actually do the prioritization, it
seems obvious to aim for a single source of the mapping in Linux. Both
DCB and TC fit the bill here.
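
On the DCB side, the iproute2 dcb tool already carries per-port app
maps; e.g. a DSCP-to-priority entry (device name illustrative):

    # Classify packets with DSCP 24 to priority 3 on swp1:
    dcb app add dev swp1 dscp-prio 24:3

A PCP/DEI-based selector could plausibly slot into the same table.
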
Maybe we need to figure out how to tweak TC to make this stuff easier to
configure and offload...
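
Spelling out a full map with filters like the above means eight of them
per port, per direction. E.g. (untested; "swp1" again made up):

    # Identity PCP-to-priority map, one flower filter per PCP value:
    for prio in 0 1 2 3 4 5 6 7; do
        tc filter add dev swp1 ingress protocol 802.1q \
            flower vlan_prio $prio \
            action skbedit priority $prio
    done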