Message-ID: <22dbfd8bc2c200e648bf69ac8557d0ee1406ac2d.camel@redhat.com>
Date: Fri, 22 Mar 2019 14:35:17 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Christoph Paasch <christoph.paasch@...il.com>,
Alexander Duyck <alexander.duyck@...il.com>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"Samudrala, Sridhar" <sridhar.samudrala@...el.com>,
David Miller <davem@...emloft.net>,
Linux API <linux-api@...r.kernel.org>
Subject: Re: [net-next PATCH v3 4/8] net: Change return type of sk_busy_loop from bool to void
On Fri, 2019-03-22 at 05:59 -0700, Eric Dumazet wrote:
> On Fri, Mar 22, 2019 at 3:33 AM Paolo Abeni <pabeni@...hat.com> wrote:
> > Hi,
> >
> > On Thu, 2019-03-21 at 23:05 -0400, Christoph Paasch wrote:
> > > On Thu, Mar 21, 2019 at 12:43 PM Alexander Duyck
> > > <alexander.duyck@...il.com> wrote:
> > > > On Thu, Mar 21, 2019 at 2:45 AM Paolo Abeni <pabeni@...hat.com> wrote:
> > > > > The following - completely untested - should avoid the unbounded loop,
> > > > > but it's not a complete fix, I *think* we should also change
> > > > > sk_busy_loop_end() in a similar way, but that is a little more complex
> > > > > due to the additional indirections.
> > > >
> > > > As far as sk_busy_loop_end we could look at just forking sk_busy_loop
> > > > and writing a separate implementation for datagram sockets that uses a
> > > > different loop_end function. It shouldn't take much to change since
> > > > all we would need to do is pass a structure containing the sk and last
> > > > pointers instead of just passing the sk directly as the loop_end
> > > > argument.
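> > > > The idea above can be sketched in userspace C. This is only an
> > > > illustration of passing a context structure (socket plus a 'last'
> > > > snapshot) as the loop_end argument; the struct and function names
> > > > here are invented stand-ins, not the real kernel types or API:
> > > >
> > > > ```c
> > > > #include <assert.h>
> > > > #include <stdbool.h>
> > > >
> > > > /* Stand-in for struct sock; the real code would inspect
> > > >  * sk->sk_receive_queue. */
> > > > struct fake_sk {
> > > >     int queue_len;
> > > > };
> > > >
> > > > /* The suggested loop_end argument: a structure bundling the
> > > >  * socket and the 'last' snapshot instead of the bare socket.
> > > >  * Names are illustrative only. */
> > > > struct dgram_busy_ctx {
> > > >     struct fake_sk *sk;
> > > >     int last;   /* queue state observed at loop entry */
> > > > };
> > > >
> > > > /* Datagram-specific loop_end predicate: stop busy polling once
> > > >  * the receive queue has changed since the snapshot, rather than
> > > >  * merely being non-empty (which is what allowed spinning on a
> > > >  * stale, already-seen entry). */
> > > > static bool dgram_busy_loop_end(void *p)
> > > > {
> > > >     struct dgram_busy_ctx *ctx = p;
> > > >     return ctx->sk->queue_len != ctx->last;
> > > > }
> > > >
> > > > int main(void)
> > > > {
> > > >     struct fake_sk sk = { .queue_len = 1 };  /* one stale skb */
> > > >     struct dgram_busy_ctx ctx = { .sk = &sk, .last = sk.queue_len };
> > > >
> > > >     /* No new packet since the snapshot: keep polling. */
> > > >     assert(!dgram_busy_loop_end(&ctx));
> > > >
> > > >     sk.queue_len++;  /* a new packet arrived: stop the loop */
> > > >     assert(dgram_busy_loop_end(&ctx));
> > > >     return 0;
> > > > }
> > > > ```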
> > > >
> > > > > Could you please test it?
> > > > >
> > > > > Any feedback welcome!
> > > >
> > > > The change below looks good to me.
> > >
> > > I just tried it out. Worked for me!
> > >
> > > You can add my Tested-by if you do a formal patch-submission:
> > >
> > > Tested-by: Christoph Paasch <cpaasch@...le.com>
> >
> > Thanks for testing!
> >
> > I'm trying to reproduce the issue locally, but I'm unable. I think that
> > the current UDP implementation is not affected, as we always ensure
> > sk_receive_queue is empty before busy polling.
>
> But right after the check is done we release the queue lock, so a packet
> might arrive right after the test has been done.
Yep, I was unclear and incorrect. My point is: with the current UDP
implementation, a non-empty sk_receive_queue after the busy loop always
means there are more packets to be processed, so we should loop again,
as we currently do.
AFAICS, that is different from the reported issue, where the system
stalls looping around sk_receive_queue while no new packets are being
appended there.
Cheers,
Paolo