Message-Id: <20201007210744.8546-1-vijayendra.suman@oracle.com>
Date: Wed, 7 Oct 2020 14:07:44 -0700
From: Vijayendra Suman <vijayendra.suman@...cle.com>
To: pabeni@...hat.com
Cc: a.fatoum@...gutronix.de, kernel@...gutronix.de,
linux-can@...r.kernel.org, netdev@...r.kernel.org,
somasundaram.krishnasamy@...cle.com,
ramanan.govindarajan@...cle.com,
Vijayendra Suman <vijayendra.suman@...cle.com>
Subject: Re: [BUG] pfifo_fast may cause out-of-order CAN frame transmission
[PATCH] Network performance improvement for qperf tcp_lat:
check __QDISC_STATE_DEACTIVATED before checking the BYPASS flag
Measured with qperf tcp_lat, 65536-byte messages, over an IB switch.
For 64K packets, performance improves by around 47%, and the performance
deviation is reduced from 27% to 5% with this patch.
As mentioned by Paolo, with commit "net: dev: introduce support for sch BYPASS
for lockless qdisc" there may be an out-of-order packet transmission issue.
Is there any update on solving the out-of-order packet issue?
qperf tcp_lat counters, 60-second runs, 64K packet size:
With the patch below:
1. 53817
2. 54100
3. 57016
4. 59410
5. 62017
6. 54625
7. 55770
8. 54015
9. 54406
10. 53137
Without the patch [upstream]:
1. 83742
2. 107320
3. 82807
4. 105384
5. 77406
6. 132665
7. 117566
8. 109279
9. 94959
10. 82331
11. 91614
12. 104701
13. 91123
14. 93908
15. 200485
With commit 379349e9bc3b42b8b2f8f7a03f64a97623fff323
(Revert "net: dev: introduce support for sch BYPASS for lockless qdisc")
itself reverted, i.e. with BYPASS re-applied:
1. 65550
2. 64285
3. 64110
4. 64300
5. 64645
6. 63928
7. 63574
8. 65024
9. 65153
10. 64281
Signed-off-by: Vijayendra Suman <vijayendra.suman@...cle.com>
---
net/core/dev.c | 27 ++++++++++-----------------
1 file changed, 10 insertions(+), 17 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 40bbb5e43f5d..6cc8e0209b20 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3384,35 +3384,27 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 				 struct net_device *dev,
 				 struct netdev_queue *txq)
 {
 	struct sk_buff *to_free = NULL;
 	bool contended;
-	int rc;
+	int rc = NET_XMIT_SUCCESS;
 
 	qdisc_calculate_pkt_len(skb, q);
 
 	if (q->flags & TCQ_F_NOLOCK) {
-		if ((q->flags & TCQ_F_CAN_BYPASS) && READ_ONCE(q->empty) &&
-		    qdisc_run_begin(q)) {
-			if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED,
-					      &q->state))) {
-				__qdisc_drop(skb, &to_free);
-				rc = NET_XMIT_DROP;
-				goto end_run;
-			}
-			qdisc_bstats_cpu_update(q, skb);
-
-			rc = NET_XMIT_SUCCESS;
+		if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED, &q->state))) {
+			__qdisc_drop(skb, &to_free);
+			rc = NET_XMIT_DROP;
+		} else if ((q->flags & TCQ_F_CAN_BYPASS) && READ_ONCE(q->empty) &&
+			   qdisc_run_begin(q)) {
+			qdisc_bstats_update(q, skb);
 			if (sch_direct_xmit(skb, q, dev, txq, NULL, true))
 				__qdisc_run(q);
-
-end_run:
 			qdisc_run_end(q);
 		} else {
 			rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
 			qdisc_run(q);
 		}
-
 		if (unlikely(to_free))
 			kfree_skb_list(to_free);
 		return rc;
--
2.27.0