Message-ID: <1462913455.16365.12.camel@redhat.com>
Date: Tue, 10 May 2016 16:50:56 -0400
From: Rik van Riel <riel@...hat.com>
To: David Miller <davem@...emloft.net>, pabeni@...hat.com
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org,
edumazet@...gle.com, jiri@...lanox.com, daniel@...earbox.net,
ast@...mgrid.com, aduyck@...antis.com, tom@...bertland.com,
peterz@...radead.org, mingo@...nel.org, hannes@...essinduktion.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] net: threadable napi poll loop
On Tue, 2016-05-10 at 16:45 -0400, David Miller wrote:
> From: Paolo Abeni <pabeni@...hat.com>
> Date: Tue, 10 May 2016 22:22:50 +0200
>
> > On Tue, 2016-05-10 at 09:08 -0700, Eric Dumazet wrote:
> >> On Tue, 2016-05-10 at 18:03 +0200, Paolo Abeni wrote:
> >>
> >> > If a single core host is under network flood, ksoftirqd is
> >> > scheduled and will eventually (after processing ~640 packets)
> >> > let the user space process run. The latter will execute a
> >> > syscall to receive a packet, which will have to disable/enable
> >> > bh at least once, and that will cause the processing of another
> >> > ~640 packets. To receive a single packet in user space, the
> >> > kernel has to process more than one thousand packets.
> >>
> >> Looks like you found the bug then. Have you tried to fix it?
> ...
> > The ksoftirqd and the local_bh_enable() design are the root of the
> > problem; they need to be changed to solve it.
>
> That's not what I read from your description, processing 640 packets
> before going to ksoftirqd seems to be the absolute root problem.
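For context: the ~640 figure above appears to come from the per-NAPI
poll weight of 64 multiplied by the up-to-10 softirq restarts allowed
in __do_softirq(). Below is a much-simplified sketch of the polling
loop under discussion, loosely following net_rx_action(); the time
limit, list handling and locking are omitted, so treat it as an
illustration, not the actual implementation.

static void net_rx_action_sketch(struct softnet_data *sd)
{
        int budget = netdev_budget;     /* global budget per softirq run */

        while (!list_empty(&sd->poll_list)) {
                struct napi_struct *n;

                n = list_first_entry(&sd->poll_list, struct napi_struct,
                                     poll_list);

                /* each device may consume up to its weight, usually 64 */
                budget -= n->poll(n, min(budget, n->weight));

                if (budget <= 0) {
                        /* budget exhausted: re-raise the softirq; after
                         * enough re-runs __do_softirq() hands the rest
                         * over to ksoftirqd */
                        __raise_softirq_irqoff(NET_RX_SOFTIRQ);
                        break;
                }
        }
}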
What would a fix for that look like?
Keep track of the number of processed incoming packets,
and the number of packets handed off, and defer to
ksoftirqd earlier if the statistics suggest packets are
getting dropped on the floor?
Is there a cheap way to do that kind of thing?
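To make that concrete, a strawman; every field, helper and threshold
here is invented, nothing like it exists in the tree today:

struct rx_pressure {
        unsigned long polled;           /* packets pulled from the NIC */
        unsigned long delivered;        /* packets handed to a socket */
};

/* punt to ksoftirqd sooner when the gap suggests we are dropping */
static bool should_defer_to_ksoftirqd(struct rx_pressure *p)
{
        unsigned long backlog = p->polled - p->delivered;

        /* made-up threshold: backlog exceeding half of what was
         * actually delivered; finding a cheap, sane test like this
         * is exactly the open question above */
        return backlog > (p->delivered >> 1);
}

The hard part would be updating those counters in the hot path
without adding measurable cost.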
--
All Rights Reversed.