Message-Id: <20211007175000.2334713-2-bigeasy@linutronix.de>
Date: Thu, 7 Oct 2021 19:49:57 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Thomas Gleixner <tglx@...utronix.de>,
"Ahmed S. Darwish" <a.darwish@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
John Fastabend <john.fastabend@...il.com>
Subject: [PATCH net 1/4] mqprio: Correct stats in mqprio_dump_class_stats().
It looks like the introduction of subqueues broke the statistics.
Before the change, the on-stack `bstats' and `qstats' were filled and
later copied over to struct gnet_dump.

After the change, `bstats' and `qstats' are only set to 0, never
updated, and yet still fed to gnet_dump. Additionally,
qdisc->cpu_bstats and qdisc->cpu_qstats are destroyed for global
stats. For per-CPU stats both __gnet_stats_copy_basic() and
__gnet_stats_copy_queue() add the values, but for global stats the
value is set, so the previous value is lost and only the last value
from the loop ends up in sch->[bq]stats.

Use the on-stack [bq]stats variables again and add the stats manually
in the global case.
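
For illustration, a stand-alone sketch of the set-vs-add semantics
(simplified; the struct and helper below are made up, only the pattern
matches the kernel code): __gnet_stats_copy_basic() adds when a per-CPU
pointer is passed, but plainly assigns in the global case, so calling it
in a loop with the aggregate as destination keeps only the last child's
numbers:

	#include <stdio.h>

	struct basic { unsigned long bytes, packets; };

	/* models the global-stats path: plain assignment, like
	 * __gnet_stats_copy_basic() without a per-CPU pointer
	 */
	static void copy_basic_set(struct basic *dst, const struct basic *src)
	{
		dst->bytes = src->bytes;
		dst->packets = src->packets;
	}

	int main(void)
	{
		struct basic child[3] = { {100, 1}, {200, 2}, {300, 3} };
		struct basic total = {0, 0};
		int i;

		for (i = 0; i < 3; i++)		/* broken: each pass overwrites */
			copy_basic_set(&total, &child[i]);
		printf("set: %lu bytes\n", total.bytes);	/* 300, not 600 */

		total.bytes = total.packets = 0;
		for (i = 0; i < 3; i++) {	/* fixed: accumulate */
			total.bytes += child[i].bytes;
			total.packets += child[i].packets;
		}
		printf("add: %lu bytes\n", total.bytes);	/* 600 */
		return 0;
	}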
Fixes: ce679e8df7ed2 ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio")
Cc: John Fastabend <john.fastabend@...il.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
net/sched/sch_mqprio.c | 32 +++++++++++++++++++-------------
1 file changed, 19 insertions(+), 13 deletions(-)
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 8766ab5b87880..5eb3b1b7ae5e7 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -529,22 +529,28 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
 		for (i = tc.offset; i < tc.offset + tc.count; i++) {
 			struct netdev_queue *q = netdev_get_tx_queue(dev, i);
 			struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
-			struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
-			struct gnet_stats_queue __percpu *cpu_qstats = NULL;
 
 			spin_lock_bh(qdisc_lock(qdisc));
-			if (qdisc_is_percpu_stats(qdisc)) {
-				cpu_bstats = qdisc->cpu_bstats;
-				cpu_qstats = qdisc->cpu_qstats;
-			}
 
-			qlen = qdisc_qlen_sum(qdisc);
-			__gnet_stats_copy_basic(NULL, &sch->bstats,
-						cpu_bstats, &qdisc->bstats);
-			__gnet_stats_copy_queue(&sch->qstats,
-						cpu_qstats,
-						&qdisc->qstats,
-						qlen);
+			if (qdisc_is_percpu_stats(qdisc)) {
+				qlen = qdisc_qlen_sum(qdisc);
+
+				__gnet_stats_copy_basic(NULL, &bstats,
+							qdisc->cpu_bstats,
+							&qdisc->bstats);
+				__gnet_stats_copy_queue(&qstats,
+							qdisc->cpu_qstats,
+							&qdisc->qstats,
+							qlen);
+			} else {
+				qlen += qdisc->q.qlen;
+				bstats.bytes += qdisc->bstats.bytes;
+				bstats.packets += qdisc->bstats.packets;
+				qstats.backlog += qdisc->qstats.backlog;
+				qstats.drops += qdisc->qstats.drops;
+				qstats.requeues += qdisc->qstats.requeues;
+				qstats.overlimits += qdisc->qstats.overlimits;
+			}
 			spin_unlock_bh(qdisc_lock(qdisc));
 		}
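
A quick way to observe the regression from user space (assuming an
mqprio root qdisc on <dev> whose child qdiscs keep global, non per-CPU
stats) is to compare the per-class counters against the per-queue child
qdiscs, e.g.:

	tc -s class show dev <dev>	# counters from mqprio_dump_class_stats()
	tc -s qdisc show dev <dev>	# per-queue child qdisc counters

Before this fix, the "Sent" bytes/packets of a traffic class match only
the last TX queue of that class instead of the sum over all its queues.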
--
2.33.0