Message-ID: <20241217115448.tyophzmiudpxuxbz@skbuf>
Date: Tue, 17 Dec 2024 13:54:48 +0200
From: Vladimir Oltean <olteanv@...il.com>
To: Oleksij Rempel <o.rempel@...gutronix.de>
Cc: Andrew Lunn <andrew@...n.ch>, Lorenzo Bianconi <lorenzo@...nel.org>,
Oleksij Rempel <linux@...pel-privat.de>, netdev@...r.kernel.org,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, horms@...nel.org, nbd@....name,
sean.wang@...iatek.com, Mark-MC.Lee@...iatek.com,
lorenzo.bianconi83@...il.com
Subject: Re: [RFC net-next 0/5] Add ETS and TBF Qdisc offload for Airoha
EN7581 SoC
On Tue, Dec 17, 2024 at 10:38:21AM +0100, Oleksij Rempel wrote:
> Hi,
>
> You are absolutely correct that offloading should accelerate what Linux already
> supports in software, and we need to respect this model. However, I’d like to
> step back for a moment to clarify the underlying problem before focusing too
> much on solutions.
>
> ### The Core Problem: Flow Control Limitations
>
> 1. **QoS and Flow Control:**
>
> At the heart of proper QoS implementation lies flow control. Flow control
> mechanisms exist at various levels:
>
> - MAC-level signaling (e.g., pause frames)
>
> - Queue management (e.g., stopping queues when the hardware is congested)
>
> The typical Linux driver uses flow control signaling from the MAC (e.g.,
> stopping queues) to coordinate traffic, and depending on the Qdisc, this
> flow control can propagate up to user space applications.
I read this section as "The Core Problem: Ethernet".
> 2. **Challenges with DSA:**
> In DSA, we lose direct **flow control communication** between:
>
> - The host MAC
>
> - The MAC of a DSA user port.
>
> While internal flow control within the switch may still work, it does not
> extend to the host. Specifically:
>
> - Pause frames often affect **all priorities** and are not granular enough
> for low-latency applications.
>
> - The signaling from the MAC of the DSA user port to the host is either
> **not supported** or is **disabled** (often through device tree
> configuration).
And this as: "Challenges with DSA: uses Ethernet". I think we can all
agree that standard Ethernet, with all the flexibility it gives to pair
any discrete DSA switch to any host NIC, does not give us sufficient
instruments for independent flow control of each user port.
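For reference, MAC-level pause is about the only per-port knob standard
Ethernet gives us here, and it is all-or-nothing for the port (a hedged
example; the interface name is illustrative):

  # Enable/disable link-level pause frames on a port
  ethtool -A lan0 autoneg off rx on tx on
  # Inspect the current pause configuration
  ethtool -a lan0

Whatever we do with it, it throttles the whole port, regardless of
packet priority.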
Food for thought: strongly coupled MAC + integrated DSA switch systems,
like for example Broadcom, have custom methods of pacing transmission to
user ports by selectively stopping conduit TX queues associated with
those user ports on congestion:
https://lore.kernel.org/netdev/7510c29a-b60f-e0d7-4129-cb90fe376c74@gmail.com/
> ### Why This Matters for QoS
>
> For traffic flowing **from the host** to DSA user ports:
>
> - Without proper flow control, congestion cannot be communicated back to the
> host, leading to buffer overruns and degraded QoS.
There are multiple, and sometimes conflicting, goals for QoS and strategies for
congestion. Generally speaking, it is good to clarify that deterministic latency,
high throughput and zero loss cannot all be achieved at the same time. It is
also good to highlight the fact that you are focusing on zero loss and that
this is not necessarily the full picture. Some AVB/TSN switches, like SJA1105,
do not support pause frames at all, not even on user ports, because as you say,
it's like the nuclear solution which stops the entire port regardless of
packet priorities. And even if they did support it, for deterministic latency
applications it is best to turn it off. If you make a port enter congestion by
bombarding it with TC0 traffic, you'll incur latency to TC7 traffic until you
exit the congestion condition. These switches just expect to have reservations
very carefully configured by the system administrator. What exceeds reservations
and cannot consume shared resources (because they are temporarily depleted) is dropped.
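To make "reservations very carefully configured" concrete, here is a
hedged sketch of an 802.1Qav-style bandwidth reservation with the cbs
qdisc, in the spirit of what such switches expect. The port name, queue
layout and credit values are illustrative, not taken from any driver in
this thread:

  # Split the port into per-traffic-class hardware queues
  tc qdisc replace dev swp2 root handle 100: mqprio \
      num_tc 8 map 0 1 2 3 4 5 6 7 \
      queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 1

  # Reserve ~20Mbit/s for class A traffic on the last queue;
  # what exceeds the reservation is simply delayed or dropped
  tc qdisc replace dev swp2 parent 100:8 cbs \
      idleslope 20000 sendslope -980000 \
      hicredit 30 locredit -1470 offload 1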
> - To address this, we need to compensate for the lack of flow control signaling
> by applying traffic limits (or shaping).
A splendid idea in theory. In practice, the traffic rate at the egress
of a user port is the sum of locally injected traffic plus autonomously
forwarded traffic. The port can enter congestion even when CPU-injected
traffic is shaped to a fixed rate.
Conduit
|
v
+-------------------------+
| CPU port |
| | |
| +--------+ |
| | |
| +<---+ |
| | | |
| v | |
| lan0 lan1 lan2 lan3 |
+-------------------------+
|
v Just 1Gbps.
You _could_ apply this technique to achieve a different purpose than
net zero packet loss: selective transmission guarantees for CPU-injected
traffic. But you also need to ensure that injected packets have a higher
strict priority than the rest, and that the switch resources are
configured through devlink-sb to have enough reserved space to keep
these high priority packets on congestion and drop something else instead.
It's a tool to have for sure, but you need to be extremely specific and
realistic about your goals.
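A hedged sketch of the devlink-sb side of that; the device handle, port
index, pool numbers and thresholds below are made up for illustration:

  # Carve out a static egress pool for the high-priority class
  devlink sb pool set pci/0000:01:00.0 sb 0 pool 1 size 131072 thtype static

  # Bind TC7 egress of the user port to that pool with a reserved
  # threshold, so on congestion something else gets dropped instead
  devlink sb tc bind set pci/0000:01:00.0/2 sb 0 tc 7 type egress \
      pool 1 th 65536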
> ### Approach: Applying Limits on the Conduit Interface
>
> One way to solve this is by applying traffic shaping or limits directly on the
> **conduit MAC**. However, this approach has significant complexity:
>
> 1. **Hardware-Specific Details:**
>
> We would need deep hardware knowledge to set up traffic filters or dissectors
> at the conduit level. This includes:
>
> - Parsing **CPU tags** specific to the switch in use.
>
> - Applying port-specific rules, some of which depend on **user port link
> speed**.
>
> 2. **Admin Burden:**
>
> Forcing network administrators to configure conduit-specific filters
> manually increases complexity and goes against the existing DSA abstractions,
> which are already well-integrated into the kernel.
Agree that there is high complexity. Just need to see a proposal which
acknowledges that it's not for nothing.
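To illustrate just how high: without an abstraction, matching a user
port on the conduit means matching raw CPU tag bytes whose layout
differs per tagger. Something in the spirit of the below, where the
offset, mask and value are purely hypothetical:

  tc qdisc add dev conduit0 clsact

  # Hypothetical: match a 3-bit destination-port field of a CPU tag
  # assumed to sit at byte 14 of the frame as seen by the classifier
  tc filter add dev conduit0 egress prio 1 u32 \
      match u8 0x01 0x07 at 14 \
      action police rate 50mbit burst 5k conform-exceed drop

And the admin would have to rewrite this incantation for every switch
family and link speed.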
> ### How Things Can Be Implemented
>
> To address QoS for host-to-user port traffic in DSA, I see two possible
> approaches:
>
> #### 1. Apply Rules on the Conduit Port (Using `dst_port`)
>
> In this approach, rules are applied to the **conduit interface**, and specific
> user ports are matched using **port indices**.
>
> # Conduit interface
> tc qdisc add dev conduit0 clsact
>
> # Match traffic for user port 1 (e.g., lan0)
> tc filter add dev conduit0 egress flower dst_port 1 \
> action police rate 50mbit burst 5k drop
>
> # Match traffic for user port 2 (e.g., lan1)
> tc filter add dev conduit0 egress flower dst_port 2 \
> action police rate 30mbit burst 3k drop
Ok, so you propose an abstract key set for DSA in the flower classifier
with mappings to concrete packet fields happening in the backend,
probably done by the tagging protocol in use. The abstract key set
represents the superset of all known DSA fields, united by a common
interpretation, and each tagger rejects keys it cannot map to the
physical DSA tag.
I can immediately think of a challenge here, that we can dynamically
change the tagging protocol while tc rules are present, and this can
affect which flower keys can be mapped and which cannot. For example,
the ocelot tagging protocol could map a virtual DSA key "TX timestamp
type" to the REW_OP field, but the ocelot-8021q tagger cannot. Plus, you
could add tc filters to a block shared by multiple devices. You can't
always infer the physical tagging protocol from the device that the
filters are attached to.
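For concreteness, the tagging protocol really is a runtime property of
the conduit (interface names illustrative):

  # Query the tagging protocol currently in use
  cat /sys/class/net/eth0/dsa/tagging

  # Switch taggers; all user ports must be down first
  ip link set lan0 down
  echo ocelot-8021q > /sys/class/net/eth0/dsa/tagging
  ip link set lan0 up

Any flower keys already offloaded under the old tagger would have to be
revalidated against the new one.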
> #### 2. Apply Rules Directly on the User Ports (With Conduit Marker)
>
> In this approach, rules are applied **directly to the user-facing DSA ports**
> (e.g., `lan0`, `lan1`) with a **conduit-specific marker**. The kernel resolves
> the mapping internally.
>
> # Apply rules with conduit marker for user ports
> tc qdisc add dev lan0 root tbf rate 50mbit burst 5k latency 50ms conduit-only
> tc qdisc add dev lan1 root tbf rate 30mbit burst 3k latency 50ms conduit-only
>
> Here:
> - **`conduit-only`**: A marker (flag) indicating that the rule applies
> specifically to **host-to-port traffic** and not to L2-forwarded traffic within
> the switch.
>
> ### Recommendation
>
> The second approach (**user port-based with `conduit-only` marker**) is cleaner
> and more intuitive. It avoids exposing hardware details like port indices while
> letting the kernel handle conduit-specific behavior transparently.
>
> Best regards,
> Oleksij
The second approach that you recommend suffers from the same problem as Lorenzo's
revised proposal, which is that it treats the conduit interface as a collection of
independent pipes of infinite capacity to each user port, with no arbitration concerns
of its own. The model is again great in theory, but maps really poorly onto real life.
Your proposal actively encourages users to look away from the scheduling algorithm
of the conduit, and just look at user ports in isolation of each other. I strongly
disagree with it.
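For contrast, a model which does acknowledge the conduit's arbitration
would put the per-port budgets under a common root that reflects the
conduit's finite capacity. A hedged sketch with illustrative names and
rates, which deliberately leaves the classification problem from
approach 1 unsolved:

  tc qdisc add dev conduit0 root handle 1: htb default 30
  # The conduit link itself is the shared, finite resource
  tc class add dev conduit0 parent 1: classid 1:1 htb rate 1gbit ceil 1gbit
  # Per-user-port budgets contend under it, rather than being
  # treated as independent pipes of infinite capacity
  tc class add dev conduit0 parent 1:1 classid 1:10 htb rate 50mbit ceil 1gbit
  tc class add dev conduit0 parent 1:1 classid 1:20 htb rate 30mbit ceil 1gbit
  tc class add dev conduit0 parent 1:1 classid 1:30 htb rate 10mbit ceil 1gbit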