Date:   Fri, 23 Sep 2016 13:35:59 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     David Miller <davem@...emloft.net>, eric.dumazet@...il.com
CC:     peterz@...radead.org, riel@...hat.com, pabeni@...hat.com,
        hannes@...hat.com, jbrouer@...hat.com,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
        corbet@....net
Subject: Re: [PATCH] softirq: let ksoftirqd do its job

On 09/02/2016 08:39 AM, David Miller wrote:
> From: Eric Dumazet <eric.dumazet@...il.com>
> Date: Wed, 31 Aug 2016 10:42:29 -0700
>
>> From: Eric Dumazet <edumazet@...gle.com>
>>
>> A while back, Paolo and Hannes sent an RFC patch adding support for a
>> threadable NAPI poll loop: https://patchwork.ozlabs.org/patch/620657/
>>
>> The problem seems to be that softirqs are very aggressive and are often
>> handled by the current process, even under stress and even though
>> ksoftirqd was scheduled precisely so that innocent threads would have a
>> better chance to make progress.
>>
>> This patch makes sure that if ksoftirqd is running, we let it perform
>> the softirq work.
>>
>> Jonathan Corbet summarized the issue in https://lwn.net/Articles/687617/
>>
>> Tested:
>>
>>   - NIC receiving traffic handled by CPU 0
>>   - UDP receiver running on CPU 0, using a single UDP socket.
>>   - Incoming flood of UDP packets targeting the UDP socket.
>>
>> Before the patch, the UDP receiver could almost never get CPU cycles and
>> could only receive ~2,000 packets per second.
>>
>> After the patch, CPU cycles are split 50/50 between the user application
>> and ksoftirqd/0, and we can effectively read ~900,000 packets per second,
>> a huge improvement in a DoS situation. (Note that more packets are now
>> dropped by the NIC itself, since the BH handlers get fewer CPU cycles to
>> drain the RX ring buffer.)
>>
>> Since the load now runs in a well-identified thread context, an admin can
>> more easily tune process scheduling parameters if needed.
>>
>> Reported-by: Paolo Abeni <pabeni@...hat.com>
>> Reported-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
>> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>
> I'm just kind of assuming this won't go through my tree, but I can take
> it if that's what everyone agrees to.

Was this actually picked up somewhere in the meantime?
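
For anyone skimming the archive: the mechanism being discussed is a per-CPU
check of whether ksoftirqd has already been woken, used to skip inline
softirq processing so the interrupted task keeps its CPU time. The sketch
below is a simplified illustration of that idea, not the patch as merged;
the helper name and call site mirror what was posted, but the details in
kernel/softirq.c may differ.

/* Is the per-CPU ksoftirqd thread already runnable on this CPU? */
static bool ksoftirqd_running(void)
{
	struct task_struct *tsk = __this_cpu_read(ksoftirqd);

	return tsk && (tsk->state == TASK_RUNNING);
}

asmlinkage __visible void do_softirq(void)
{
	__u32 pending;
	unsigned long flags;

	if (in_interrupt())
		return;

	local_irq_save(flags);

	pending = local_softirq_pending();

	/*
	 * Only process softirqs inline when ksoftirqd is not already
	 * running; otherwise leave the pending work to ksoftirqd so the
	 * current task is not starved of CPU time.
	 */
	if (pending && !ksoftirqd_running())
		do_softirq_own_stack();

	local_irq_restore(flags);
}

With the work confined to the ksoftirqd/N threads, an admin can then use the
usual scheduling knobs (nice values, or a real-time class via chrt) to trade
softirq throughput against application latency.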
