Message-ID: <20160817193416.27032.62729.stgit@john-Precision-Tower-5810>
Date: Wed, 17 Aug 2016 12:34:16 -0700
From: John Fastabend <john.fastabend@...il.com>
To: xiyou.wangcong@...il.com, jhs@...atatu.com,
alexei.starovoitov@...il.com, eric.dumazet@...il.com,
brouer@...hat.com
Cc: john.r.fastabend@...el.com, netdev@...r.kernel.org,
john.fastabend@...il.com, davem@...emloft.net
Subject: [RFC PATCH 02/13] net: sched: qdisc_qlen for per cpu logic
This is a bit interesting because it means sch_direct_xmit will
return a positive value, causing the dequeue/xmit cycle to
continue, only when a specific cpu has a qlen > 0.
However, checking the qlen on every cpu would hurt performance,
so it's important to note that qdiscs that set the nolock bit
need to have some sort of per-cpu enqueue/dequeue data structure
that maps to the per-cpu qlen value.
Signed-off-by: John Fastabend <john.r.fastabend@...el.com>
---
include/net/sch_generic.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 3de6a8c..354951d 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -247,8 +247,16 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
BUILD_BUG_ON(sizeof(qcb->data) < sz);
}
+static inline int qdisc_qlen_cpu(const struct Qdisc *q)
+{
+ return this_cpu_ptr(q->cpu_qstats)->qlen;
+}
+
static inline int qdisc_qlen(const struct Qdisc *q)
{
+ if (q->flags & TCQ_F_NOLOCK)
+ return qdisc_qlen_cpu(q);
+
return q->q.qlen;
}
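To illustrate the trade-off the changelog describes, here is a minimal
userspace sketch of the per-cpu qlen idea. It is not kernel code:
NR_CPUS, the qstats array, and both helpers are stand-ins, and a plain
array replaces the kernel's per-cpu allocation and this_cpu_ptr()
accessor. The fast path reads only the current cpu's counter, while a
full count has to walk every cpu's counter, which is the expensive
case the changelog warns against doing per packet.

```c
/* Userspace sketch only: NR_CPUS and qstats are illustrative
 * stand-ins, not the kernel's actual per-cpu machinery. */
#define NR_CPUS 4

struct cpu_qstats {
	int qlen;
};

static struct cpu_qstats qstats[NR_CPUS];

/* Fast path: analogous to qdisc_qlen_cpu(), which reads only the
 * counter belonging to the cpu it is running on. */
static int qlen_on_cpu(int cpu)
{
	return qstats[cpu].qlen;
}

/* Slow path: summing qlen across all cpus; doing this on every
 * dequeue would defeat the point of the lockless design. */
static int qlen_total(void)
{
	int cpu, total = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		total += qstats[cpu].qlen;
	return total;
}
```

This is why the dequeue/xmit cycle above keys off a single cpu's
count: a nolock qdisc pairs each per-cpu qlen with a matching per-cpu
enqueue/dequeue structure, so the local counter alone is meaningful.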