Date:   Tue, 17 Sep 2019 13:20:21 -0700
From:   Josh Hunt <>
To:     netdev <>,
        Eric Dumazet <>,
        David Miller <>,
        Willem de Bruijn <>
Subject: udp sendmsg ENOBUFS clarification

I was running some tests recently with the udpgso_bench_tx benchmark in 
selftests and noticed that in some configurations it reported sending 
more than line rate! Looking into it more, I found that I was overflowing 
the qdisc queue, so the qdisc was returning NET_XMIT_DROP; however, this 
error did not propagate back up to the application, which assumed that 
everything it sent had been transmitted successfully. That's when I 
learned about IP_RECVERR and saw that the benchmark isn't using that 
socket option.

That's all fairly straightforward, but what I was hoping to get 
clarified is: where exactly is the line drawn on when ENOBUFS is or is 
not sent back to the application if IP_RECVERR is *not* set? My guess, 
based on going through the code, is that once a packet leaves the stack 
(in this case, once it has been handed to the qdisc) we stop reporting 
ENOBUFS back to the application, but can someone confirm?

For example, we sanitize the error in udp_send_skb():

         err = ip_send_skb(sock_net(sk), skb);
         if (err) {
                 if (err == -ENOBUFS && !inet->recverr) {
                         UDP_INC_STATS(sock_net(sk),
                                       UDP_MIB_SNDBUFERRORS, is_udplite);
                         err = 0;
                 }
         } else
                 UDP_INC_STATS(sock_net(sk),
                               UDP_MIB_OUTDATAGRAMS, is_udplite);

but in udp_sendmsg() we don't:

         if (err == -ENOBUFS ||
             test_bit(SOCK_NOSPACE, &sk->sk_socket->flags)) {
                 UDP_INC_STATS(sock_net(sk),
                               UDP_MIB_SNDBUFERRORS, is_udplite);
         }
         return err;

In the case above, it looks like udp_sendmsg() can only see ENOBUFS from 
allocation failures inside the stack, and that's why we propagate the 
error back up to the application?

Somewhat related: while I was trying to find the answer to the above, I 
came across this thread. It looks like the send(2) man page still only 
says the following about ENOBUFS:

  "The output queue for a network interface was full.
   This generally indicates that the interface has stopped sending,
   but may be caused by transient congestion.
   (Normally, this does not occur in Linux. Packets are just silently
   dropped when a device queue overflows.) "

but as Eric points out, that's not true when IP_RECVERR is set on the 
socket. Was there an attempt to update the man page to reflect this that 
was rejected? I couldn't find any discussion on this.
