Message-ID: <9DDC4F99-3133-47F7-A107-E9E5F9E1ADD0@akamai.com>
Date: Wed, 8 Jul 2020 20:33:39 +0000
From: "Zhivich, Michael" <mzhivich@...mai.com>
To: "Hunt, Joshua" <johunt@...mai.com>,
Paolo Abeni <pabeni@...hat.com>,
Jonas Bonn <jonas.bonn@...rounds.com>,
Cong Wang <xiyou.wangcong@...il.com>
CC: David Miller <davem@...emloft.net>,
John Fastabend <john.fastabend@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
"Linux Kernel Network Developers" <netdev@...r.kernel.org>
Subject: Re: Packet gets stuck in NOLOCK pfifo_fast qdisc

On 7/2/20, 2:08 PM, "Josh Hunt" <johunt@...mai.com> wrote:
>
> On 7/2/20 2:45 AM, Paolo Abeni wrote:
> > Hi all,
> >
> > On Thu, 2020-07-02 at 08:14 +0200, Jonas Bonn wrote:
> >> Hi Cong,
> >>
> >> On 01/07/2020 21:58, Cong Wang wrote:
> >>> On Wed, Jul 1, 2020 at 9:05 AM Cong Wang <xiyou.wangcong@...il.com> wrote:
> >>>> On Tue, Jun 30, 2020 at 2:08 PM Josh Hunt <johunt@...mai.com> wrote:
> >>>>> Do either of you know if there's been any development on a fix for this
> >>>>> issue? If not we can propose something.
> >>>>
> >>>> If you have a reproducer, I can look into this.
> >>>
> >>> Does the attached patch fix this bug completely?
> >>
> >> It's easier to comment if you inline the patch, but after taking a quick
> >> look it seems too simplistic.
> >>
> >> i) Are you sure you haven't got the return values on qdisc_run reversed?
> >
> > qdisc_run() returns true if it was able to acquire the seq lock. We
> > need to take special action in the opposite case, so Cong's patch LGTM
> > from a functional PoV.
> >
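
For context, my mental model of the NOLOCK run path, with Cong's patch
making qdisc_run() report whether it actually ran the qdisc, is roughly
the following (a simplified sketch for discussion, not the literal
source):

	/* Sketch, assuming Cong's patch: qdisc_run() returns whether
	 * this CPU won the right to drain the qdisc.
	 */
	static inline bool qdisc_run_begin(struct Qdisc *qdisc)
	{
		if (qdisc->flags & TCQ_F_NOLOCK)
			/* Only one CPU may drain the qdisc at a time. */
			return spin_trylock(&qdisc->seqlock);
		return !qdisc_is_running(qdisc);
	}

	static inline bool qdisc_run(struct Qdisc *q)
	{
		if (qdisc_run_begin(q)) {
			__qdisc_run(q);	/* dequeue/transmit loop */
			qdisc_run_end(q);
			return true;
		}
		return false;	/* another CPU holds the run "lock" */
	}
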
> >> ii) There's a "bypass" path that skips the enqueue/dequeue operation if
> >> the queue is empty; that needs a similar treatment: after releasing
> >> seqlock it needs to ensure that another packet hasn't been enqueued
> >> since it last checked.
> >
> > That has been reverted with
> > commit 379349e9bc3b42b8b2f8f7a03f64a97623fff323
> >
> > ---
> >> diff --git a/net/core/dev.c b/net/core/dev.c
> >> index 90b59fc50dc9..c7e48356132a 100644
> >> --- a/net/core/dev.c
> >> +++ b/net/core/dev.c
> >> @@ -3744,7 +3744,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
> >>
> >> if (q->flags & TCQ_F_NOLOCK) {
> >> rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
> >> - qdisc_run(q);
> >> + if (!qdisc_run(q) && rc == NET_XMIT_SUCCESS)
> >> + __netif_schedule(q);
> >
> > I fear the __netif_schedule() call may cause performance regression to
> > the point of making a revert of TCQ_F_NOLOCK preferable. I'll try to
> > collect some data.
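
For anyone following along: __netif_schedule() queues the qdisc on the
current CPU's output_queue and raises NET_TX_SOFTIRQ, so with the patch
every enqueue that loses the trylock race also pays for a softirq.
Paraphrased from net/core/dev.c (see the tree for the exact code):

	void __netif_schedule(struct Qdisc *q)
	{
		/* Schedule at most once until net_tx_action() runs it. */
		if (!test_and_set_bit(__QDISC_STATE_SCHED, &q->state))
			__netif_reschedule(q);	/* append q to this CPU's
						 * softnet_data.output_queue
						 * and raise NET_TX_SOFTIRQ
						 */
	}
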
>
> Initial results with Cong's patch look promising; so far no stalls. We
> will let it run over the long weekend and report back on Tuesday.
>
> Paolo - I have concerns about possible performance regression with the
> change as well. If you can gather some data that would be great. If
> things look good with our low throughput test over the weekend we can
> also try assessing performance next week.
>
> Josh

After running our reproducer over the long weekend, we've observed several more packets getting stuck.
The behavior is an order of magnitude better *with* the patch (that is, only a few packets get stuck),
but the patch does not completely resolve the issue.
I have a nagging suspicion that the same producer/consumer race we observed can also occur with
softirq processing in net_tx_action() (as triggered by __netif_schedule()), since both paths rely on
the same trylock semantics of qdisc_run(). Unfortunately, in that case we cannot simply punt to
__netif_schedule() again.
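
To illustrate, here is the interleaving I have in mind (a hypothetical
sketch, with net_tx_action() paraphrased rather than quoted):

	/*
	 *  CPU A (xmit path)              CPU B (holds q->seqlock)
	 *  -----------------              ------------------------
	 *  q->enqueue(skb)
	 *  qdisc_run(): trylock fails
	 *  __netif_schedule(q)
	 *                                 __qdisc_run(): finds no more
	 *                                 work, prepares to stop
	 *  softirq -> net_tx_action():
	 *    clears __QDISC_STATE_SCHED
	 *    qdisc_run(): trylock fails,
	 *    returns; nothing reschedules
	 *                                 qdisc_run_end(): releases the
	 *                                 seqlock without ever seeing
	 *                                 the skb from CPU A
	 *
	 * The skb then sits in the qdisc until the next enqueue kicks
	 * the queue again.
	 */
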
Regards,
~ Michael