Message-Id: <20230130173145.475943-13-vladimir.oltean@nxp.com>
Date: Mon, 30 Jan 2023 19:31:42 +0200
From: Vladimir Oltean <vladimir.oltean@nxp.com>
To: netdev@vger.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Claudiu Manoil <claudiu.manoil@....com>,
Vinicius Costa Gomes <vinicius.gomes@...el.com>,
Kurt Kanzenbach <kurt@...utronix.de>,
Jacob Keller <jacob.e.keller@...el.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>
Subject: [PATCH v4 net-next 12/15] net/sched: taprio: pass mqprio queue configuration to ndo_setup_tc()

The taprio offload does not currently pass the mqprio queue configuration
down to the offloading device driver. So the driver cannot act upon the
TXQ counts/offsets per TC, or upon the prio->tc map. It was probably
assumed that the driver only wants to offload num_tc (see
TC_MQPRIO_HW_OFFLOAD_TCS), which it can get from netdev_get_num_tc(),
but there's clearly more to the mqprio configuration than that.
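
(For illustration only, not part of the patch: a minimal sketch of how an
offloading driver could consume this once it rides along with the taprio
offload. The foo_* names are hypothetical; the offload->mqprio member is
the one added by this patch.)

	/* Hypothetical driver callback: program per-TC TXQ ranges from
	 * the mqprio configuration embedded in the taprio offload,
	 * instead of guessing from netdev_get_num_tc() alone.
	 */
	static int foo_setup_taprio(struct net_device *dev,
				    struct tc_taprio_qopt_offload *offload)
	{
		struct tc_mqprio_qopt *qopt = &offload->mqprio.qopt;
		int tc;

		for (tc = 0; tc < qopt->num_tc; tc++)
			foo_prog_tc_queues(dev, tc, qopt->offset[tc],
					   qopt->count[tc]);

		return 0;
	}
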
To remedy that, we need to actually reconstruct a struct
tc_mqprio_qopt_offload to pass as part of the tc_taprio_qopt_offload.
The problem is that taprio doesn't keep a persistent reference to the
mqprio queue structure in its own struct taprio_sched; instead, it just
applies its contents to the netdev state (prio:tc map, per-TC TXQ
counts and offsets, num_tc etc.). Maybe it's easier to understand why
when we look at the size of struct tc_mqprio_qopt_offload: 352
bytes on arm64. Keeping such a large structure would throw off the
memory accesses in struct taprio_sched no matter where we put it.
So we prefer to dynamically reconstruct the mqprio offload structure
based on netdev information, rather than saving a copy of it.
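
(As a quick sanity check of that size claim, not part of the patch:
given a vmlinux built with debug info and pahole available, something
like

	$ pahole -C tc_mqprio_qopt_offload vmlinux

should print the struct layout and its total size; the exact figure
varies by architecture and config.)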

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
---
v2->v4: none
v1->v2: reconstruct the mqprio queue configuration structure

 include/net/pkt_sched.h |  1 +
 net/sched/sch_taprio.c  | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 02e3ccfbc7d1..ace8be520fb0 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -187,6 +187,7 @@ struct tc_taprio_sched_entry {
 };
 
 struct tc_taprio_qopt_offload {
+	struct tc_mqprio_qopt_offload mqprio;
 	u8 enable;
 	ktime_t base_time;
 	u64 cycle_time;
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index c322a61eaeea..f40016275384 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1225,6 +1225,25 @@ static void taprio_sched_to_offload(struct net_device *dev,
 	offload->num_entries = i;
 }
 
+static void
+taprio_mqprio_qopt_reconstruct(struct net_device *dev,
+			       struct tc_mqprio_qopt_offload *mqprio)
+{
+	struct tc_mqprio_qopt *qopt = &mqprio->qopt;
+	int num_tc = netdev_get_num_tc(dev);
+	int tc, prio;
+
+	qopt->num_tc = num_tc;
+
+	for (prio = 0; prio <= TC_BITMASK; prio++)
+		qopt->prio_tc_map[prio] = netdev_get_prio_tc_map(dev, prio);
+
+	for (tc = 0; tc < num_tc; tc++) {
+		qopt->count[tc] = dev->tc_to_txq[tc].count;
+		qopt->offset[tc] = dev->tc_to_txq[tc].offset;
+	}
+}
+
 static int taprio_enable_offload(struct net_device *dev,
				 struct taprio_sched *q,
				 struct sched_gate_list *sched,
@@ -1261,6 +1280,7 @@ static int taprio_enable_offload(struct net_device *dev,
 		return -ENOMEM;
 	}
 	offload->enable = 1;
+	taprio_mqprio_qopt_reconstruct(dev, &offload->mqprio);
 	taprio_sched_to_offload(dev, sched, offload);
 
 	for (tc = 0; tc < TC_MAX_QUEUE; tc++)
--
2.34.1