Message-Id: <20230120141537.1350744-12-vladimir.oltean@nxp.com>
Date:   Fri, 20 Jan 2023 16:15:37 +0200
From:   Vladimir Oltean <vladimir.oltean@....com>
To:     netdev@...r.kernel.org, John Fastabend <john.fastabend@...il.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Claudiu Manoil <claudiu.manoil@....com>,
        Camelia Groza <camelia.groza@....com>,
        Xiaoliang Yang <xiaoliang.yang_1@....com>,
        Gerhard Engleder <gerhard@...leder-embedded.com>,
        Vinicius Costa Gomes <vinicius.gomes@...el.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Kurt Kanzenbach <kurt@...utronix.de>,
        Ferenc Fejes <ferenc.fejes@...csson.com>,
        Tony Nguyen <anthony.l.nguyen@...el.com>,
        Jesse Brandeburg <jesse.brandeburg@...el.com>,
        Jacob Keller <jacob.e.keller@...el.com>
Subject: [RFC PATCH net-next 11/11] net/sched: taprio: only calculate gate mask per TXQ for igc

Vinicius has repeated a couple of times in our discussion that it was a
mistake for the taprio UAPI to take as input the Qbv gate mask per TC
rather than per TXQ. In the Frame Preemption RFC thread:
https://patchwork.kernel.org/project/netdevbpf/patch/20220816222920.1952936-3-vladimir.oltean@nxp.com/#25011225

I had this unanswered question:

| > And even that it works out because taprio "translates" from traffic
| > classes to queues when it sends the offload information to the driver,
| > i.e. the driver knows the schedule of queues, not traffic classes.
|
| Which is incredibly strange to me, since the standard clearly defines
| Qbv gates to be per traffic class, and in ENETC, even if we have 2 TX
| queues for the same traffic class (one per CPU), the hardware schedule
| is still per traffic class and not per independent TX queue (BD ring).
|
| How does this work for i225/i226, if 2 queues are configured for the
| same dequeue priority? Do the taprio gates still take effect per queue?

I haven't received an answer, and some things are still unclear, but I
suspect that igc is the outlier, and that all the other hardware actually
takes the gate mask per TC rather than per TXQ, just as the standard says.

For example, up until now, ENETC was not passed the mqprio queue
configuration via struct tc_taprio_qopt_offload, and hence had to assume
that the TC:TXQ mapping was 1:1. So "per TC" vs "per TXQ" made no
practical difference there (see the sketch below for how the two differ
once the mapping is no longer 1:1). I suspect that other drivers are in
the same position.
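
To make the distinction concrete, here is a simplified sketch (not part
of the patch) of the per-TC to per-TXQ translation in the spirit of
taprio's existing tc_map_to_queue_mask(); the 2-TC mapping in the final
comment is just a made-up example:

/* Simplified sketch, not a literal copy of tc_map_to_queue_mask():
 * expand a per-TC gate mask into a per-TXQ mask using the netdev's
 * TC-to-TXQ mapping.
 */
static u32 tc_gate_mask_to_txq_mask(struct net_device *dev, u32 tc_mask)
{
	u32 txq_mask = 0;
	int tc;

	for (tc = 0; tc < netdev_get_num_tc(dev); tc++) {
		u32 offset = dev->tc_to_txq[tc].offset;
		u32 count = dev->tc_to_txq[tc].count;

		if (tc_mask & BIT(tc))
			txq_mask |= GENMASK(offset + count - 1, offset);
	}

	return txq_mask;
}

/* Hypothetical mapping: with TC 0 -> TXQs {0, 1} and TC 1 -> TXQs {2, 3},
 * a per-TC gate mask of 0x1 ("TC 0 open") becomes a per-TXQ gate mask of
 * 0x3, whereas with a 1:1 mapping both masks would be 0x1.
 */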

Make use of the TC_QUERY_CAPS feature that Jakub suggested we add, and
query the device driver before calling the proper ndo_setup_tc() in order
to figure out whether it expects the gate mask to be per TC or per TXQ.
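
For context, the resulting flow on the taprio side looks roughly like the
abridged sketch below, using the existing qdisc_offload_query_caps()
helper (not a literal excerpt of taprio_enable_offload()):

	struct tc_taprio_caps caps;

	/* Ask the driver (via its TC_QUERY_CAPS handler) how it wants the
	 * gate mask; qdisc_offload_query_caps() zero-initializes caps, so
	 * drivers that don't handle TC_QUERY_CAPS keep the per-TC default.
	 */
	qdisc_offload_query_caps(dev, TC_SETUP_QDISC_TAPRIO,
				 &caps, sizeof(caps));

	taprio_sched_to_offload(dev, sched, offload, caps.gate_mask_per_txq);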

Signed-off-by: Vladimir Oltean <vladimir.oltean@....com>
---
 drivers/net/ethernet/intel/igc/igc_main.c | 17 +++++++++++++++++
 include/net/pkt_sched.h                   |  1 +
 net/sched/sch_taprio.c                    | 11 ++++++++---
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index e86b15efaeb8..9b6f2aaf78c2 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -6205,12 +6205,29 @@ static int igc_tsn_enable_cbs(struct igc_adapter *adapter,
 	return igc_tsn_offload_apply(adapter);
 }
 
+static int igc_tc_query_caps(struct tc_query_caps_base *base)
+{
+	switch (base->type) {
+	case TC_SETUP_QDISC_TAPRIO: {
+		struct tc_taprio_caps *caps = base->caps;
+
+		caps->gate_mask_per_txq = true;
+
+		return 0;
+	}
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
 static int igc_setup_tc(struct net_device *dev, enum tc_setup_type type,
 			void *type_data)
 {
 	struct igc_adapter *adapter = netdev_priv(dev);
 
 	switch (type) {
+	case TC_QUERY_CAPS:
+		return igc_tc_query_caps(type_data);
 	case TC_SETUP_QDISC_TAPRIO:
 		return igc_tsn_enable_qbv_scheduling(adapter, type_data);
 
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index ace8be520fb0..fd889fc4912b 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -176,6 +176,7 @@ struct tc_mqprio_qopt_offload {
 
 struct tc_taprio_caps {
 	bool supports_queue_max_sdu:1;
+	bool gate_mask_per_txq:1;
 };
 
 struct tc_taprio_sched_entry {
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index a3fa5debe513..58efa982db65 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1212,7 +1212,8 @@ static u32 tc_map_to_queue_mask(struct net_device *dev, u32 tc_mask)
 
 static void taprio_sched_to_offload(struct net_device *dev,
 				    struct sched_gate_list *sched,
-				    struct tc_taprio_qopt_offload *offload)
+				    struct tc_taprio_qopt_offload *offload,
+				    bool gate_mask_per_txq)
 {
 	struct sched_entry *entry;
 	int i = 0;
@@ -1226,7 +1227,11 @@ static void taprio_sched_to_offload(struct net_device *dev,
 
 		e->command = entry->command;
 		e->interval = entry->interval;
-		e->gate_mask = tc_map_to_queue_mask(dev, entry->gate_mask);
+		if (gate_mask_per_txq)
+			e->gate_mask = tc_map_to_queue_mask(dev,
+							    entry->gate_mask);
+		else
+			e->gate_mask = entry->gate_mask;
 
 		i++;
 	}
@@ -1273,7 +1278,7 @@ static int taprio_enable_offload(struct net_device *dev,
 	offload->enable = 1;
 	if (mqprio)
 		offload->mqprio.qopt = *mqprio;
-	taprio_sched_to_offload(dev, sched, offload);
+	taprio_sched_to_offload(dev, sched, offload, caps.gate_mask_per_txq);
 
 	for (tc = 0; tc < TC_MAX_QUEUE; tc++)
 		offload->max_sdu[tc] = q->max_sdu[tc];
-- 
2.34.1
