Message-ID: <db5a01a1256d4cc5cf418cd6cb5b076fc959ae21.camel@redhat.com>
Date: Fri, 29 Mar 2024 11:22:40 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller"
 <davem@...emloft.net>,  Jakub Kicinski <kuba@...nel.org>
Cc: Willem de Bruijn <willemb@...gle.com>, netdev@...r.kernel.org, 
	eric.dumazet@...il.com
Subject: Re: [PATCH net-next 3/4] udp: avoid calling sock_def_readable() if
 possible

On Thu, 2024-03-28 at 14:40 +0000, Eric Dumazet wrote:
> sock_def_readable() is quite expensive (particularly
> when ep_poll_callback() is in the picture).
> 
> We must call sk->sk_data_ready() when:
> 
> - the receive queue was empty, or
> - SO_PEEK_OFF is enabled on the socket, or
> - sk->sk_data_ready is not sock_def_readable.
> 
> We still need to call sk_wake_async().
> 
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
>  net/ipv4/udp.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index d2fa9755727ce034c2b4bca82bd9e72130d588e6..5dfbe4499c0f89f94af9ee1fb64559dd672c1439 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1492,6 +1492,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
>  	struct sk_buff_head *list = &sk->sk_receive_queue;
>  	int rmem, err = -ENOMEM;
>  	spinlock_t *busy = NULL;
> +	bool becomes_readable;
>  	int size, rcvbuf;
>  
>  	/* Immediately drop when the receive queue is full.
> @@ -1532,12 +1533,19 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
>  	 */
>  	sock_skb_set_dropcount(sk, skb);
>  
> +	becomes_readable = skb_queue_empty(list);
>  	__skb_queue_tail(list, skb);
>  	spin_unlock(&list->lock);
>  
> -	if (!sock_flag(sk, SOCK_DEAD))
> -		INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);
> -
> +	if (!sock_flag(sk, SOCK_DEAD)) {
> +		if (becomes_readable ||
> +		    sk->sk_data_ready != sock_def_readable ||
> +		    READ_ONCE(sk->sk_peek_off) >= 0)
> +			INDIRECT_CALL_1(sk->sk_data_ready,
> +					sock_def_readable, sk);
> +		else
> +			sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
> +	}

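As context for the sk_peek_off test above: with SO_PEEK_OFF enabled,
MSG_PEEK reads advance a per-socket peek offset, so a peeking reader can
be waiting for new data even while the queue is non-empty. A minimal
userspace sketch (standard Linux socket API; the port number and buffer
size here are illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(9999),	/* illustrative port */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	int off = 0;
	char buf[2048];
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	bind(fd, (struct sockaddr *)&addr, sizeof(addr));

	/* Enable SO_PEEK_OFF: sk_peek_off becomes >= 0 and each
	 * MSG_PEEK read advances the kernel-maintained peek offset
	 * instead of re-reading the head of the queue. */
	setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off));

	/* A peek past the current offset must wait for more data,
	 * which is why the patch keeps the full sk_data_ready()
	 * wakeup whenever SO_PEEK_OFF is enabled, even when the
	 * receive queue was already non-empty. */
	ssize_t n = recv(fd, buf, sizeof(buf), MSG_PEEK);
	printf("peeked %zd bytes\n", n);
	return 0;
}
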
I understood this change showed no performance benefit?

I guess the full memory barrier implied by atomic_add_return() was
hiding some/most of the sock_def_readable() cost?
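
For reference, the rmem accounting earlier in
__udp_enqueue_schedule_skb() is the line sketched below; because
atomic_add_return() returns the new value, it carries full
memory-barrier semantics (see Documentation/atomic_t.txt), unlike a
plain atomic_add():

	/* Receive-memory accounting near the top of
	 * __udp_enqueue_schedule_skb(). As a value-returning atomic,
	 * atomic_add_return() acts as a full barrier, and that barrier
	 * may account for much of the cost attributed to the wakeup.
	 */
	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);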

Thanks!

Paolo