Message-ID: <CABUuw65xvT0t+Eq881jCvU7yNg1W-PXZkHvaSDg891W_OP-2uw@mail.gmail.com>
Date:   Thu, 16 May 2019 17:42:02 -0400
From:   Adam Urban <adam.urban@...leguru.org>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Network Development <netdev@...r.kernel.org>
Subject: Re: Kernel UDP behavior with missing destinations

How can I see if there is an active arp queue?
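So far I've been assuming a queue only exists while a neighbour entry
is unresolved, so this is what I've been checking (not sure it's the
right place to look):

  ip -s neigh show
  # entries in INCOMPLETE state should be the ones with a live arp_queue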

Regarding the qdisc, I don't think we're bumping up against its limit
(at least not in my tiny bench setup):

tc -s qdisc show
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
quantum 1514 target 5.0ms interval 100.0ms ecn
 Sent 925035443 bytes 8988011 pkt (dropped 0, overlimits 0 requeues 3)
 backlog 0b 0p requeues 3
  maxpacket 717 drop_overlimit 0 new_flow_count 1004 ecn_mark 0
  new_flows_len 0 old_flows_len 0
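If we do start hitting the qdisc limit later, my understanding is we
could raise it with something like this (untested on my side):

  tc qdisc change dev eth0 root fq_codel limit 20480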

I'm still not sure I 100% understand the relationship between the
socket send buffer (the wmem_default sysctl setting or the SO_SNDBUF
socket option), the per-neighbour arp queue (arp_queue), and the
unres_qlen_bytes sysctl setting. I've made a public Google spreadsheet
here to try to calculate this value based on some inputs and
assumptions. Can you take a look and see if I got it somewhat right?

https://docs.google.com/spreadsheets/d/1t9_UowY6sok8xvK8Tx_La_jB4iqpewJT5X4WANj39gg/edit?usp=sharing
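In case the link rots: the core calculation in the sheet is just your
formula with made-up numbers plugged in (X = 4 unresolved destinations,
and unres_qlen_bytes at, say, 212992 -- check
/proc/sys/net/ipv4/neigh/default/unres_qlen_bytes for the real value):

  wmem_default = X * unres_qlen_bytes + Y
               = 4 * 212992 + 229376
               = 1081344 bytes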

On Thu, May 16, 2019 at 1:03 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
>
> On 5/16/19 9:32 AM, Adam Urban wrote:
> > Eric, thanks. Increasing wmem_default from 229376 to 2293760 indeed
> > makes the issue go away on my test bench. What's a good way to
> > determine the optimal value here? I assume this is in bytes and needs
> > to be large enough so that the SO_SNDBUF doesn't fill up before the
> > kernel drops the packets. How often does that happen?
>
> You have to count the max number of arp queues your UDP socket could hit.
>
> Say this number is X
>
> Then wmem_default should be set to X * unres_qlen_bytes + Y
>
> With Y = 229376 (the default wmem_default)
>
> Then, you might need to increase the qdisc limits.
>
> If no arp queue is active, all UDP packets could be in the qdisc and might
> hit the qdisc limit sooner, thus dropping packets at the qdisc.
>
> (This is assuming your UDP application can blast packets at a rate above the link rate)
>
> >
> > On Thu, May 16, 2019 at 12:14 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
> >>
> >>
> >>
> >> On 5/16/19 9:05 AM, Eric Dumazet wrote:
> >>
> >>> We probably should add a ttl on arp queues.
> >>>
> >>> neigh_probe() could do that quite easily.
> >>>
> >>
> >> Adam, all you need to do is to increase UDP socket sndbuf.
> >>
> >> Either by increasing /proc/sys/net/core/wmem_default
> >>
> >> or using setsockopt( ... SO_SNDBUF ... )
> >>
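
P.S. For the archives, the setsockopt() variant looks like this
(minimal sketch, error handling mostly elided; sndbuf_bytes is whatever
the formula above yields):

  #include <stdio.h>
  #include <sys/socket.h>

  /* Request a larger send buffer on an existing UDP socket fd.
   * The kernel doubles the requested value internally and caps it at
   * /proc/sys/net/core/wmem_max, so wmem_max may need raising too. */
  static int set_sndbuf(int fd, int sndbuf_bytes)
  {
          if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                         &sndbuf_bytes, sizeof(sndbuf_bytes)) < 0) {
                  perror("setsockopt(SO_SNDBUF)");
                  return -1;
          }
          return 0;
  }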
