lists.openwall.net - Open Source and information security mailing list archives
Date:   Thu, 1 Sep 2016 14:01:05 +0200
From:   Hannes Frederic Sowa <hannes@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        David Miller <davem@...emloft.net>
Cc:     Rik van Riel <riel@...hat.com>, Paolo Abeni <pabeni@...hat.com>,
        Jesper Dangaard Brouer <jbrouer@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        netdev <netdev@...r.kernel.org>, Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH] softirq: let ksoftirqd do its job

On 31.08.2016 19:42, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@...gle.com>
> 
> A while back, Paolo and Hannes sent an RFC patch adding support for a
> threadable napi poll loop: (https://patchwork.ozlabs.org/patch/620657/)
> 
> The problem seems to be that softirqs are very aggressive and are often
> handled by the current process, even when we are under stress and
> ksoftirqd has been scheduled precisely so that innocent threads would
> have a better chance to make progress.
> 
> This patch makes sure that if ksoftirqd is running, we let it
> perform the softirq work.
> 
> Jonathan Corbet summarized the issue in https://lwn.net/Articles/687617/
> 
> Tested:
> 
>  - NIC receiving traffic handled by CPU 0
>  - UDP receiver running on CPU 0, using a single UDP socket.
>  - Incoming flood of UDP packets targeting the UDP socket.
> 
> Before the patch, the UDP receiver could almost never get cpu cycles and
> could only receive ~2,000 packets per second.
> 
> After the patch, cpu cycles are split 50/50 between the user application
> and ksoftirqd/0, and we can effectively read ~900,000 packets per second,
> a huge improvement in a DoS situation. (Note that more packets are now
> dropped by the NIC itself, since the BH handlers get fewer cpu cycles to
> drain the RX ring buffer)
> 
> Since the load now runs in well-identified thread contexts, an admin can
> more easily tune process scheduling parameters if needed.
> 
> Reported-by: Paolo Abeni <pabeni@...hat.com>
> Reported-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: David Miller <davem@...emloft.net>
> Cc: Jesper Dangaard Brouer <jbrouer@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Rik van Riel <riel@...hat.com>

Acked-by: Hannes Frederic Sowa <hannes@...essinduktion.org>

Thanks,
Hannes

