Message-ID: <c97908eb-5a0b-363c-93fd-59c037bbd9f0@huawei.com>
Date: Mon, 14 Sep 2020 10:10:31 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Cong Wang <xiyou.wangcong@...il.com>,
Kehuan Feng <kehuan.feng@...il.com>
CC: Hillf Danton <hdanton@...a.com>, Paolo Abeni <pabeni@...hat.com>,
"Jike Song" <albcamus@...il.com>, Josh Hunt <johunt@...mai.com>,
Jonas Bonn <jonas.bonn@...rounds.com>,
Michael Zhivich <mzhivich@...mai.com>,
"David Miller" <davem@...emloft.net>,
John Fastabend <john.fastabend@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Netdev <netdev@...r.kernel.org>,
"linuxarm@...wei.com" <linuxarm@...wei.com>
Subject: Re: Packet gets stuck in NOLOCK pfifo_fast qdisc
On 2020/9/11 4:19, Cong Wang wrote:
> On Thu, Sep 3, 2020 at 8:21 PM Kehuan Feng <kehuan.feng@...il.com> wrote:
>> I also tried Cong's patch (shown below on my tree) and it could avoid
>> the issue (stressing for 30 minutes, three times, with no jitter
>> observed).
>
> Thanks for verifying it!
>
>>
>> --- ./include/net/sch_generic.h.orig	2020-08-21 15:13:51.787952710 +0800
>> +++ ./include/net/sch_generic.h	2020-09-03 21:36:11.468383738 +0800
>> @@ -127,8 +127,7 @@
>>  static inline bool qdisc_run_begin(struct Qdisc *qdisc)
>>  {
>>  	if (qdisc->flags & TCQ_F_NOLOCK) {
>> -		if (!spin_trylock(&qdisc->seqlock))
>> -			return false;
>> +		spin_lock(&qdisc->seqlock);
>>  	} else if (qdisc_is_running(qdisc)) {
>>  		return false;
>>  	}
>>
>> I don't actually follow what you are discussing above. It seems to me
>> that Cong's patch is similar to disabling the lockless feature.
>
> From a performance perspective, yeah. Did you see any performance
> degradation with my patch applied? It would be great if you could
> compare it with removing NOLOCK entirely. And if the performance is as
> bad as without NOLOCK, then we can remove the NOLOCK bit for
> pfifo_fast, at least for now.
It seems the lockless qdisc may have the following concurrency problem:
cpu0:                                                cpu1:
q->enqueue                                             .
qdisc_run_begin(q)                                     .
__qdisc_run(q) -> qdisc_restart() -> dequeue_skb()     .
               -> sch_direct_xmit()                    .
                                                       .
                                                   q->enqueue
                                                   qdisc_run_begin(q)
qdisc_run_end(q)
cpu1 enqueues an skb but returns without calling __qdisc_run(), because
its qdisc_run_begin(q) fails the trylock while cpu0 still holds
q->seqlock. And cpu0 never sees that skb: cpu1 may enqueue it after
cpu0's dequeue loop in __qdisc_run(q) has already emptied the queue, and
before cpu0 calls qdisc_run_end(q). The skb then sits in the qdisc until
some later xmit kicks the queue again.
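To make the window concrete, below is a small userspace analogue of the
trylock pattern. This is a hypothetical simplification: pthreads and C11
atomics stand in for the kernel's seqlock and qdisc backlog, and the
function names only mirror qdisc_run_begin()/qdisc_run_end(); nothing
here is kernel API.

/* stuck_skb_demo.c - userspace analogue of the window described above.
 * Build: gcc -O2 -pthread stuck_skb_demo.c -o stuck_skb_demo
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t seqlock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int backlog;	/* stand-in for the qdisc queue depth */

static bool run_begin(void) { return pthread_mutex_trylock(&seqlock) == 0; }
static void run_end(void) { pthread_mutex_unlock(&seqlock); }

/* Drain whatever is visible in the queue right now (__qdisc_run()). */
static void drain(void)
{
	int n;

	while ((n = atomic_load(&backlog)) > 0)
		atomic_fetch_sub(&backlog, n);
}

static void *xmit(void *arg)
{
	atomic_fetch_add(&backlog, 1);	/* q->enqueue */
	if (run_begin()) {		/* trylock may fail while the  */
		drain();		/* other thread is draining ... */
		run_end();
	}				/* ... and then we just return  */
	return NULL;
}

int main(void)
{
	for (int round = 0; round < 1000000; round++) {
		pthread_t a, b;

		pthread_create(&a, NULL, xmit, NULL);
		pthread_create(&b, NULL, xmit, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		if (atomic_load(&backlog) > 0) {
			printf("round %d: skb stuck in the queue\n", round);
			return 1;
		}
	}
	puts("no stuck skb observed (the window is narrow)");
	return 0;
}

A loser of the trylock relies entirely on the lock holder to flush its
skb, which is exactly what fails when the enqueue lands between the
holder's last dequeue and its unlock.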
Kehuan, would you care to try the patch below, to see whether it is the
same problem?
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index d60e7c3..c97c1ed 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -36,6 +36,7 @@ struct qdisc_rate_table {
 enum qdisc_state_t {
 	__QDISC_STATE_SCHED,
 	__QDISC_STATE_DEACTIVATED,
+	__QDISC_STATE_ENQUEUED,
 };
 
 struct qdisc_size_table {
diff --git a/net/core/dev.c b/net/core/dev.c
index 0362419..5985648 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3748,6 +3748,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	qdisc_calculate_pkt_len(skb, q);
 
 	if (q->flags & TCQ_F_NOLOCK) {
+		set_bit(__QDISC_STATE_ENQUEUED, &q->state);
+		smp_mb__after_atomic();
 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
 		qdisc_run(q);
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 265a61d..c389641 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -381,6 +381,8 @@ void __qdisc_run(struct Qdisc *q)
 	int quota = dev_tx_weight;
 	int packets;
 
+	clear_bit(__QDISC_STATE_ENQUEUED, &q->state);
+	smp_mb__after_atomic();
 	while (qdisc_restart(q, &packets)) {
 		quota -= packets;
 		if (quota <= 0) {
@@ -388,6 +390,9 @@ void __qdisc_run(struct Qdisc *q)
 			break;
 		}
 	}
+
+	if (test_bit(__QDISC_STATE_ENQUEUED, &q->state))
+		__netif_schedule(q);
 }
 
 unsigned long dev_trans_start(struct net_device *dev)
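FWIW, here is the same userspace analogue from earlier with the
flag-and-recheck idea of this patch grafted in (again hypothetical:
`enqueued` stands in for __QDISC_STATE_ENQUEUED, seq_cst atomics stand
in for the smp_mb__after_atomic() pairing, and re-running the drain loop
inline stands in for deferring to __netif_schedule()). It builds on the
seqlock/backlog/run_begin()/run_end()/drain() definitions in the earlier
demo. Whether the ordering closes the window completely depends on
exactly the barrier pairing the patch is trying to get right, so treat
this as an illustration of the intent, not a proof:

/* The flag is published before the enqueue, cleared before each drain
 * pass, and rechecked afterwards, mirroring the placement of set_bit(),
 * clear_bit() and test_bit() in the patch above.
 */
static atomic_bool enqueued;	/* stand-in for __QDISC_STATE_ENQUEUED */

static void *xmit_flagged(void *arg)
{
	atomic_store(&enqueued, true);	/* set_bit + smp_mb__after_atomic */
	atomic_fetch_add(&backlog, 1);	/* q->enqueue */
	if (run_begin()) {
		do {
			atomic_store(&enqueued, false);	/* clear_bit + barrier */
			drain();			/* qdisc_restart() loop */
		} while (atomic_load(&enqueued));	/* test_bit -> re-run  */
		run_end();
	}
	return NULL;
}

The kernel patch reschedules via __netif_schedule() rather than looping
inline, so the re-run happens from the TX softirq instead of extending
the current holder's critical section.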
>
> Thanks.
>