Date:   Thu, 08 Dec 2016 20:45:50 +0100
From:   Hannes Frederic Sowa <hannes@...essinduktion.org>
To:     Eric Dumazet <edumazet@...gle.com>,
        "David S . Miller" <davem@...emloft.net>
Cc:     netdev <netdev@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH v3 net-next 1/4] udp: add busylocks in RX path

Hi Eric,

On Thu, Dec 8, 2016, at 20:41, Eric Dumazet wrote:
> The idea of busylocks is to let producers grab an extra spinlock
> to relieve pressure on the receive_queue spinlock shared by the consumer.
> 
> This behavior is requested only once the socket receive queue is above
> half occupancy.
> 
> Under flood, this means that only one producer can be in line
> trying to acquire the receive_queue spinlock.
> 
> These busylocks can be allocated in a per-cpu manner, instead of a
> per-socket one (which would consume a cache line per socket).
> 
> This patch considerably improves UDP behavior under stress,
> depending on the number of NIC RX queues and/or RPS spread.
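
If I read the patch right, the mechanism is roughly the following (a
hand-written sketch, not your actual patch code; the names, sizing and
threshold are made up here): a small hashed array of busylocks, which a
producer takes only once the socket's receive queue is above half of
sk_rcvbuf, so at most one producer at a time queues up on
sk_receive_queue.lock.

#include <linux/kernel.h>
#include <linux/hash.h>
#include <linux/spinlock.h>
#include <net/sock.h>

#define BUSYLOCK_BITS	6
static spinlock_t udp_busylocks[1 << BUSYLOCK_BITS];

static void udp_busylocks_init(void)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(udp_busylocks); i++)
		spin_lock_init(&udp_busylocks[i]);
}

static spinlock_t *busylock_acquire(struct sock *sk)
{
	spinlock_t *busy = NULL;

	/* Only pay for the extra lock once the queue is above half occupancy. */
	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf / 2) {
		busy = &udp_busylocks[hash_ptr(sk, BUSYLOCK_BITS)];
		spin_lock(busy);
	}
	return busy;
}

static void busylock_release(spinlock_t *busy)
{
	if (busy)
		spin_unlock(busy);
}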

This patch mostly improves the situation for non-connected sockets. Do you
think it makes sense to acquire the spinlock depending on the socket's
state? Packets for connected UDP sockets flow in on one CPU anyway, don't
they? Something like the sketch below.
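
Purely illustrative and untested, just to make the question concrete
(reusing the hypothetical udp_busylocks array from the sketch above; not
part of your patch):

static spinlock_t *busylock_acquire_unconnected_only(struct sock *sk)
{
	spinlock_t *busy = NULL;

	/* A connected UDP socket has sk_state == TCP_ESTABLISHED; assume its
	 * packets already arrive on a single CPU and skip the busylock.
	 */
	if (sk->sk_state == TCP_ESTABLISHED)
		return NULL;

	if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf / 2) {
		busy = &udp_busylocks[hash_ptr(sk, BUSYLOCK_BITS)];
		spin_lock(busy);
	}
	return busy;
}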

Otherwise the series looks really great, thanks!
