Date:   Mon, 26 Oct 2020 10:51:56 +0100
From:   Paolo Abeni <pabeni@...hat.com>
To:     Menglong Dong <menglong8.dong@...il.com>, davem@...emloft.net
Cc:     kuznet@....inr.ac.ru, yoshfuji@...ux-ipv6.org, kuba@...nel.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net: udp: increase UDP_MIB_RCVBUFERRORS when ENOBUFS

Hello,

On Mon, 2020-10-26 at 17:39 +0800, Menglong Dong wrote:
> The error returned from __udp_enqueue_schedule_skb is either ENOMEM or
> ENOBUFS. Currently, only ENOMEM is counted into UDP_MIB_RCVBUFERRORS in
> __udp_queue_rcv_skb. However, UDP_MIB_RCVBUFERRORS should count every skb
> that is dropped because of memory errors during UDP receive, not only
> those dropped because the socket receive queue is full. We can see this
> in __udp4_lib_mcast_deliver:
> 
> 		nskb = skb_clone(skb, GFP_ATOMIC);
> 
> 		if (unlikely(!nskb)) {
> 			atomic_inc(&sk->sk_drops);
> 			__UDP_INC_STATS(net, UDP_MIB_RCVBUFERRORS,
> 					IS_UDPLITE(sk));
> 			__UDP_INC_STATS(net, UDP_MIB_INERRORS,
> 					IS_UDPLITE(sk));
> 			continue;
> 		}
> 
> Here, UDP_MIB_RCVBUFERRORS is increased when the skb clone fails. By the
> same reasoning, an ENOBUFS returned from __udp_enqueue_schedule_skb should
> be counted as well: it means that the memory used by all UDP sockets has
> reached its limit, so the drop ought to be charged to this counter too.
> 
> Signed-off-by: Menglong Dong <menglong8.dong@...il.com>
> ---
>  net/ipv4/udp.c | 4 +---
>  net/ipv6/udp.c | 4 +---
>  2 files changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index 09f0a23d1a01..49a69d8d55b3 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -2035,9 +2035,7 @@ static int __udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>  		int is_udplite = IS_UDPLITE(sk);
>  
>  		/* Note that an ENOMEM error is charged twice */
> -		if (rc == -ENOMEM)
> -			UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS,
> -					is_udplite);
> +		UDP_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS, is_udplite);
>  		UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
>  		kfree_skb(skb);
>  		trace_udp_fail_queue_rcv_skb(rc, sk);
> diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
> index 29d9691359b9..d5e23b150fd9 100644
> --- a/net/ipv6/udp.c
> +++ b/net/ipv6/udp.c
> @@ -634,9 +634,7 @@ static int __udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
>  		int is_udplite = IS_UDPLITE(sk);
>  
>  		/* Note that an ENOMEM error is charged twice */
> -		if (rc == -ENOMEM)
> -			UDP6_INC_STATS(sock_net(sk),
> -					 UDP_MIB_RCVBUFERRORS, is_udplite);
> +		UDP6_INC_STATS(sock_net(sk), UDP_MIB_RCVBUFERRORS, is_udplite);
>  		UDP6_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
>  		kfree_skb(skb);
>  		return -1;

The diffstat is nice, but I'm unsure we can do this kind of change
(well, I really think we should not do it): it would fool any existing
users (applications, scripts, admins) currently reading the above
counters and expecting UDP_MIB_RCVBUFERRORS to be increased according
to the existing schema.

Cheers,

Paolo
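
For readers following the discussion, the distinction between the two error
codes is the crux: in the mainline code of that era, __udp_enqueue_schedule_skb()
returns -ENOMEM when the per-socket receive buffer (sk->sk_rcvbuf) is exhausted,
and -ENOBUFS when raising the socket's forward-allocated memory against the
global UDP accounting fails. The sketch below is a simplified reconstruction of
those two paths, not the actual function: the busylock, dropcount handling and
receive-queue coalescing are omitted and the conditions are condensed.

/* Simplified sketch of the two failure modes of
 * __udp_enqueue_schedule_skb() (net/ipv4/udp.c); busylock, dropcount
 * and coalescing details are omitted.
 */
int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
{
	struct sk_buff_head *list = &sk->sk_receive_queue;
	int rmem, delta, amt, err = -ENOMEM;
	int size = skb->truesize;

	/* Failure mode 1: the per-socket receive buffer is full.
	 * -ENOMEM is returned, which __udp_queue_rcv_skb() charges to
	 * UDP_MIB_RCVBUFERRORS (and, like every failure, to
	 * UDP_MIB_INERRORS).
	 */
	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
	if (rmem > (size + (unsigned int)sk->sk_rcvbuf))
		goto uncharge_drop;

	spin_lock(&list->lock);
	if (size >= sk->sk_forward_alloc) {
		amt = sk_mem_pages(size);
		delta = amt << SK_MEM_QUANTUM_SHIFT;
		/* Failure mode 2: the global UDP memory accounting refuses
		 * to hand out more forward-allocated memory. -ENOBUFS is
		 * returned, which today is charged only to
		 * UDP_MIB_INERRORS.
		 */
		if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
			err = -ENOBUFS;
			spin_unlock(&list->lock);
			goto uncharge_drop;
		}
		sk->sk_forward_alloc += delta;
	}
	sk->sk_forward_alloc -= size;

	__skb_queue_tail(list, skb);
	spin_unlock(&list->lock);
	return 0;

uncharge_drop:
	atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
	atomic_inc(&sk->sk_drops);
	return err;
}

The patch folds both failure modes into UDP_MIB_RCVBUFERRORS; Paolo's objection
is that, so far, the counter has meant only the first one.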
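
To make the compatibility concern concrete: this counter is exported to
userspace as the RcvbufErrors column of the "Udp:" rows in /proc/net/snmp, and
surfaces through tools such as netstat -su and nstat. The snippet below is a
purely hypothetical consumer, not something from the thread, illustrating the
kind of application or script whose reading of the counter would silently
change meaning if ENOBUFS drops were folded into it.

/* Hypothetical userspace consumer: print the UDP RcvbufErrors counter
 * from /proc/net/snmp. The file exposes a "Udp:" header line with the
 * field names, immediately followed by a "Udp:" line with the values.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char header[512] = "", values[512] = "";
	char *h, *v, *hsave, *vsave;
	FILE *f = fopen("/proc/net/snmp", "r");

	if (!f)
		return 1;

	/* Locate the Udp header line and the value line following it. */
	while (fgets(header, sizeof(header), f)) {
		if (!strncmp(header, "Udp:", 4)) {
			if (!fgets(values, sizeof(values), f))
				values[0] = '\0';
			break;
		}
	}
	fclose(f);

	/* Walk the header and value lines in parallel until the
	 * RcvbufErrors column is found.
	 */
	h = strtok_r(header, " \n", &hsave);
	v = strtok_r(values, " \n", &vsave);
	while (h && v) {
		if (!strcmp(h, "RcvbufErrors")) {
			printf("RcvbufErrors: %s\n", v);
			return 0;
		}
		h = strtok_r(NULL, " \n", &hsave);
		v = strtok_r(NULL, " \n", &vsave);
	}
	return 1;
}

A tool like this currently interprets RcvbufErrors as "drops caused by a full
per-socket receive buffer"; after the patch it would also see drops caused by
hitting the global UDP memory limit, with no way to tell the two apart.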
