Date:   Thu, 01 Sep 2016 06:06:01 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Rick Jones <rick.jones2@....com>,
        Peter Zijlstra <peterz@...radead.org>,
        David Miller <davem@...emloft.net>,
        Rik van Riel <riel@...hat.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Hannes Frederic Sowa <hannes@...hat.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        netdev <netdev@...r.kernel.org>, Jonathan Corbet <corbet@....net>
Subject: Re: [PATCH] softirq: let ksoftirqd do its job

On Thu, 2016-09-01 at 12:38 +0200, Jesper Dangaard Brouer wrote:

> I see a max queue of 47 MBytes and, worse, an average standing queue
> of 25 MBytes, which is really bad for the latency seen by the
> application. Having this much outstanding memory is also bad for CPU
> cache effects, and it stresses the memory allocator.
> I'm actually using this huge queue "misconfig" to stress the page
> allocator and my page_pool implementation into worst-case situations ;-)
> 

Since commit 95766fff6b9a78d11f ("[UDP]: Add memory accounting."), it
has been dangerous to use a big SO_RCVBUF value, because it can add
unexpected recvmsg() latencies.
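
For reference, a minimal userspace sketch of requesting such a big
receive buffer on a UDP socket (the 32 MB figure is illustrative
only):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int rcvbuf = 32 * 1024 * 1024;  /* a deliberately huge value */
        socklen_t len = sizeof(rcvbuf);

        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf,
                       sizeof(rcvbuf)) < 0)
                perror("setsockopt(SO_RCVBUF)");

        /* the kernel doubles the requested value and caps it at
         * net.core.rmem_max unless SO_RCVBUFFORCE is used */
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
        printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
        return 0;
}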

1) User thread locks the socket.
2) It gets one skb from the receive queue.
3) An incoming flood of UDP packets is processed by softirq.
4) The socket is found 'owned by the user'.
5) Packets are parked in the 'socket backlog', up to the SO_RCVBUF
   limit.
6) User thread releases the socket.
7) It finds many skbs in the backlog, has to process them _all_, and
   re-injects them into the socket receive queue.
8) Return to user space.
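
A toy userspace analog of this sequence (pthread-based, with
hypothetical names; in the kernel the corresponding pieces are
lock_sock()/release_sock(), sk_add_backlog() and sk_backlog_rcv()):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct pkt { struct pkt *next; };

static pthread_mutex_t slock = PTHREAD_MUTEX_INITIALIZER; /* ~ sk_lock.slock */
static bool owned_by_user;      /* ~ sock_owned_by_user() */
static struct pkt *rcv_queue;   /* ~ sk_receive_queue */
static struct pkt *backlog;     /* ~ sk_backlog (packet order ignored here) */

/* softirq side: steps 3)-5) */
static void softirq_deliver(struct pkt *p)
{
        pthread_mutex_lock(&slock);
        if (owned_by_user) {            /* 4) socket owned by the user */
                p->next = backlog;      /* 5) park on the backlog; the
                                         *    kernel also enforces the
                                         *    SO_RCVBUF limit here */
                backlog = p;
        } else {
                p->next = rcv_queue;
                rcv_queue = p;
        }
        pthread_mutex_unlock(&slock);
}

/* user side: steps 1)-2) */
static struct pkt *lock_sock_and_get(void)
{
        pthread_mutex_lock(&slock);
        owned_by_user = true;           /* 1) lock the socket */
        struct pkt *p = rcv_queue;      /* 2) take one skb */
        if (p)
                rcv_queue = p->next;
        pthread_mutex_unlock(&slock);
        return p;
}

/* user side: steps 6)-8) */
static void release_sock_toy(void)
{
        pthread_mutex_lock(&slock);
        /* 7) drain _all_ parked packets, re-injecting them into the
         *    receive queue.  In the kernel each one goes through full
         *    protocol processing (sk_backlog_rcv()), which is where
         *    the cycles go. */
        while (backlog) {
                struct pkt *p = backlog;
                backlog = p->next;
                p->next = rcv_queue;
                rcv_queue = p;
        }
        owned_by_user = false;          /* 6) release the socket */
        pthread_mutex_unlock(&slock);
        /* 8) caller returns to user space */
}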


Time spent in 7) can be on the order of millions of CPU cycles...

At least since commit 5413d1babe8f10d ("net: do not block BH while
processing socket backlog"), we no longer block BH while doing 7), and
we have cond_resched() points.
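
Roughly, the backlog drain then looks like this (a simplified sketch
of __release_sock() in net/core/sock.c, paraphrased from memory, not
the verbatim source; details vary by kernel version):

static void __release_sock_sketch(struct sock *sk)
{
        struct sk_buff *skb, *next;

        while ((skb = sk->sk_backlog.head) != NULL) {
                sk->sk_backlog.head = sk->sk_backlog.tail = NULL;

                /* BH re-enabled while the packets are processed */
                spin_unlock_bh(&sk->sk_lock.slock);

                do {
                        next = skb->next;
                        skb->next = NULL;
                        sk_backlog_rcv(sk, skb); /* protocol processing */
                        cond_resched();          /* let other tasks run */
                        skb = next;
                } while (skb != NULL);

                spin_lock_bh(&sk->sk_lock.slock);
        }
}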
