Date:	Tue, 10 May 2016 16:52:15 -0400 (EDT)
From:	David Miller <davem@...emloft.net>
To:	riel@...hat.com
Cc:	pabeni@...hat.com, eric.dumazet@...il.com, netdev@...r.kernel.org,
	edumazet@...gle.com, jiri@...lanox.com, daniel@...earbox.net,
	ast@...mgrid.com, aduyck@...antis.com, tom@...bertland.com,
	peterz@...radead.org, mingo@...nel.org, hannes@...essinduktion.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/2] net: threadable napi poll loop

From: Rik van Riel <riel@...hat.com>
Date: Tue, 10 May 2016 16:50:56 -0400

> On Tue, 2016-05-10 at 16:45 -0400, David Miller wrote:
>> From: Paolo Abeni <pabeni@...hat.com>
>> Date: Tue, 10 May 2016 22:22:50 +0200
>> 
>> > On Tue, 2016-05-10 at 09:08 -0700, Eric Dumazet wrote:
>> >> On Tue, 2016-05-10 at 18:03 +0200, Paolo Abeni wrote:
>> >> 
>> >> > If a single-core host is under network flood, ksoftirqd is
>> >> > scheduled and eventually (after processing ~640 packets) lets the
>> >> > user space process run. The latter will execute a syscall to
>> >> > receive a packet, which will have to disable/enable bh at least
>> >> > once, and that will cause the processing of another ~640 packets.
>> >> > To receive a single packet in user space, the kernel has to
>> >> > process more than one thousand packets.
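
For context, the ~640 figure presumably comes from the default NAPI
weight of 64 multiplied by the softirq restart limit of 10. A minimal
user-space sketch of that accounting (not kernel code; constants are
assumed from their usual defaults):

/* Sketch (not kernel code) of where ~640 may come from: each NAPI
 * poll can consume up to the device weight (64 by default), and
 * __do_softirq() reruns itself up to MAX_SOFTIRQ_RESTART (10) times
 * while new work keeps arriving, so 64 * 10 = 640 packets can be
 * consumed before the work is deferred to ksoftirqd. */
#include <stdbool.h>
#include <stdio.h>

#define NAPI_WEIGHT         64  /* default per-poll packet budget      */
#define MAX_SOFTIRQ_RESTART 10  /* restart limit, as in __do_softirq() */

static bool rx_pending = true;  /* under flood, work is always pending */

/* Stand-in for a driver's napi->poll(): returns packets consumed. */
static int napi_poll(int weight)
{
	return rx_pending ? weight : 0;
}

int main(void)
{
	int processed = 0;

	/* Stand-in for __do_softirq(): rerun NET_RX while work remains,
	 * up to the restart limit, then hand off to ksoftirqd. */
	for (int i = 0; i < MAX_SOFTIRQ_RESTART && rx_pending; i++)
		processed += napi_poll(NAPI_WEIGHT);

	printf("packets consumed before ksoftirqd handoff: %d\n", processed);
	return 0;
}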
>> >> 
>> >> Looks like you found the bug, then. Have you tried to fix it?
>>  ...
>> > The ksoftirq and the local_bh_enable() design are the root of the
>> > problem, they need to be touched/affected to solve it.
>> 
>> That's not what I read from your description: processing 640 packets
>> before going to ksoftirqd seems to be the absolute root of the problem.
> 
> What would a fix for that look like?
> 
> Keep track of the number of processed incoming packets,
> and the number of packets handed off, and defer to
> ksoftirqd earlier if the statistics suggest packets are
> getting dropped on the floor?
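
A minimal sketch of what such a counter-based check could look like;
every name below is invented for illustration, and none of it is
existing kernel code:

/* Hypothetical counter-based deferral heuristic: compare packets
 * pulled off the device with packets actually delivered upstream,
 * and yield to ksoftirqd once the gap suggests drops. */
#include <stdbool.h>
#include <stdio.h>

struct rx_stats {
	unsigned long processed;   /* packets pulled off the device       */
	unsigned long handed_off;  /* packets actually delivered upstream */
};

/* Defer to ksoftirqd once the gap between packets processed and
 * packets delivered suggests work is being dropped on the floor. */
static bool should_defer_to_ksoftirqd(const struct rx_stats *s,
				      unsigned long max_gap)
{
	return s->processed - s->handed_off > max_gap;
}

int main(void)
{
	struct rx_stats s = { .processed = 1280, .handed_off = 1 };

	printf("defer: %s\n",
	       should_defer_to_ksoftirqd(&s, 64) ? "yes" : "no");
	return 0;
}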

Not by packet count, but by something easier to measure and more
amenable to fairness, such as processing time.
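
Purely as an illustration of a time-based budget, a user-space sketch
of a time-bounded poll loop (names are invented; the real
net_rx_action() already pairs its packet budget with a jiffies-based
time limit):

/* User-space sketch of a time-based budget: drain work until either
 * nothing is pending or the time budget is spent, then yield. */
#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static int backlog = 1000;	/* pretend 1000 packets are queued */

/* Stand-in for one unit of poll work; returns false when drained. */
static bool drain_one(void)
{
	if (backlog <= 0)
		return false;
	backlog--;
	return true;
}

/* Poll until the backlog is empty or the time budget is exhausted. */
static void poll_with_time_budget(long long budget_ns)
{
	long long deadline = now_ns() + budget_ns;

	while (drain_one() && now_ns() < deadline)
		;
}

int main(void)
{
	poll_with_time_budget(2000000LL);	/* ~2 ms budget */
	printf("packets left in backlog: %d\n", backlog);
	return 0;
}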
