Message-Id: <20160510.164538.1375529074383780155.davem@davemloft.net>
Date: Tue, 10 May 2016 16:45:38 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: pabeni@...hat.com
Cc: eric.dumazet@...il.com, netdev@...r.kernel.org,
edumazet@...gle.com, jiri@...lanox.com, daniel@...earbox.net,
ast@...mgrid.com, aduyck@...antis.com, tom@...bertland.com,
peterz@...radead.org, mingo@...nel.org, riel@...hat.com,
hannes@...essinduktion.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] net: threadable napi poll loop
From: Paolo Abeni <pabeni@...hat.com>
Date: Tue, 10 May 2016 22:22:50 +0200
> On Tue, 2016-05-10 at 09:08 -0700, Eric Dumazet wrote:
>> On Tue, 2016-05-10 at 18:03 +0200, Paolo Abeni wrote:
>>
>> > Consider a single-core host under network flood: ksoftirqd is
>> > scheduled and eventually (after processing ~640 packets) lets the
>> > user space process run. The latter executes a syscall to receive a
>> > packet, which must disable/enable bh at least once, and that causes
>> > the processing of another ~640 packets. To receive a single packet
>> > in user space, the kernel has to process more than one thousand packets.
>>
>> Looks like you found the bug then. Have you tried to fix it?
...
> The ksoftirq and the local_bh_enable() design are the root of the
> problem, they need to be touched/affected to solve it.
That's not what I read from your description: processing 640 packets
before going to ksoftirqd seems to be the absolute root problem.