Message-ID: <20160714062019.8270.432.stgit@john-Precision-Tower-5810>
Date: Wed, 13 Jul 2016 23:20:20 -0700
From: John Fastabend <john.fastabend@...il.com>
To: fw@...len.de, jhs@...atatu.com, alexei.starovoitov@...il.com,
eric.dumazet@...il.com, brouer@...hat.com
Cc: netdev@...r.kernel.org
Subject: [RFC PATCH v2 02/10] net: sched: qdisc_qlen for per cpu logic

This is a bit interesting because it means sch_direct_xmit will return
a positive value, which causes the dequeue/xmit cycle to continue, only
when a specific cpu has a qlen > 0.
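
For reference, the cycle in question is roughly the following loop from
sch_generic.c (heavily simplified here; bulk-dequeue accounting, locking
and the qdisc running-state handling are left out):

void __qdisc_run(struct Qdisc *q)
{
	int quota = weight_p;

	/* qdisc_restart() dequeues one skb and hands it to
	 * sch_direct_xmit(); a positive return means the qdisc still
	 * reports qlen > 0, so keep cycling until the quota runs out.
	 */
	while (qdisc_restart(q)) {
		if (--quota <= 0 || need_resched()) {
			__netif_schedule(q);
			break;
		}
	}
}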

However, checking every cpu's qlen would hurt performance, so it is
important to note that qdiscs that set the no-lock bit need some sort
of per-cpu enqueue/dequeue data structure that maps onto the per-cpu
qlen value.
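
As a rough sketch of what that looks like (illustrative only, the
function names below are made up; the per-cpu qlen field is the one
read via this_cpu_ptr(q->cpu_qstats)->qlen in the hunk below), a
lockless qdisc would bump the current cpu's qlen on enqueue and drop
it on dequeue, so the value always matches what that cpu can actually
dequeue:

static int example_enqueue(struct sk_buff *skb, struct Qdisc *sch,
			   struct sk_buff **to_free)
{
	/* ... queue skb on this cpu's private list ... */
	this_cpu_inc(sch->cpu_qstats->qlen);
	return NET_XMIT_SUCCESS;
}

static struct sk_buff *example_dequeue(struct Qdisc *sch)
{
	struct sk_buff *skb = NULL;

	/* ... pull skb from this cpu's private list ... */
	if (skb)
		this_cpu_dec(sch->cpu_qstats->qlen);
	return skb;
}
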
Signed-off-by: John Fastabend <john.r.fastabend@...el.com>
---
include/net/sch_generic.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 3de6a8c..354951d 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -247,8 +247,16 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
 	BUILD_BUG_ON(sizeof(qcb->data) < sz);
 }
 
+static inline int qdisc_qlen_cpu(const struct Qdisc *q)
+{
+	return this_cpu_ptr(q->cpu_qstats)->qlen;
+}
+
 static inline int qdisc_qlen(const struct Qdisc *q)
 {
+	if (q->flags & TCQ_F_NOLOCK)
+		return qdisc_qlen_cpu(q);
+
 	return q->q.qlen;
 }
 