Message-ID: <CAM_iQpXZMeAGkq_=rG6KEabFNykszpRU_Hnv65Qk7yesvbRDrw@mail.gmail.com>
Date: Thu, 3 Sep 2020 10:43:38 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: Kehuan Feng <kehuan.feng@...il.com>,
Hillf Danton <hdanton@...a.com>,
Jike Song <albcamus@...il.com>, Josh Hunt <johunt@...mai.com>,
Jonas Bonn <jonas.bonn@...rounds.com>,
Michael Zhivich <mzhivich@...mai.com>,
David Miller <davem@...emloft.net>,
John Fastabend <john.fastabend@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Netdev <netdev@...r.kernel.org>
Subject: Re: Packet gets stuck in NOLOCK pfifo_fast qdisc
On Thu, Sep 3, 2020 at 1:40 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On Wed, 2020-09-02 at 22:01 -0700, Cong Wang wrote:
> > Can you test the attached one-line fix? I think we are overthinking this;
> > probably all we need here is a busy wait.
>
> I think that will solve the issue, but I also think it will kill NOLOCK
> performance due to greatly increased contention.
Yeah, we somehow end up with more locks (seqlock, skb array lock)
for lockless qdisc. What an irony... ;)
>
> At this point I fear we could consider reverting the NOLOCK stuff.
> I personally would hate doing so, but it looks like NOLOCK benefits are
> outweighed by its issues.
I agree, NOLOCK brings more pain than gain. There are many race
conditions hidden in the generic qdisc layer; another one is enqueue vs.
reset, which is being discussed in another thread.
Thanks.