Message-ID: <e7a1d01a-6607-fa6f-33f8-db31a3fb75a8@kernel.org>
Date: Sat, 23 Sep 2023 13:07:46 +0200
From: David Ahern <dsahern@...nel.org>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller"
 <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
 Paolo Abeni <pabeni@...hat.com>
Cc: Neal Cardwell <ncardwell@...gle.com>, Yuchung Cheng <ycheng@...gle.com>,
 netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 4/4] tcp_metrics: optimize
 tcp_metrics_flush_all()

On 9/22/23 4:03 PM, Eric Dumazet wrote:
> This is inspired by several syzbot reports where
> tcp_metrics_flush_all() was seen in the traces.
> 
> We can avoid acquiring tcp_metrics_lock for empty buckets,
> and we should add one cond_resched() to break potential long loops.
> 
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
>  net/ipv4/tcp_metrics.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
> index 7aca12c59c18483f42276d01252ed0fac326e5d8..c2a925538542b5d787596b7d76705dda86cf48d8 100644
> --- a/net/ipv4/tcp_metrics.c
> +++ b/net/ipv4/tcp_metrics.c
> @@ -898,11 +898,13 @@ static void tcp_metrics_flush_all(struct net *net)
>  	unsigned int row;
>  
>  	for (row = 0; row < max_rows; row++, hb++) {
> -		struct tcp_metrics_block __rcu **pp;
> +		struct tcp_metrics_block __rcu **pp = &hb->chain;
>  		bool match;
>  
> +		if (!rcu_access_pointer(*pp))
> +			continue;
> +
>  		spin_lock_bh(&tcp_metrics_lock);
> -		pp = &hb->chain;
>  		for (tm = deref_locked(*pp); tm; tm = deref_locked(*pp)) {
>  			match = net ? net_eq(tm_net(tm), net) :
>  				!refcount_read(&tm_net(tm)->ns.count);
> @@ -914,6 +916,7 @@ static void tcp_metrics_flush_all(struct net *net)
>  			}
>  		}
>  		spin_unlock_bh(&tcp_metrics_lock);
> +		cond_resched();

I have found cond_resched() can incur some unnecessary overhead if
called too often. Wrap it in `if (need_resched())`?
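
A minimal sketch of that suggestion, for the loop tail above (kernel
fragment, not compiled here; uses the standard need_resched() /
cond_resched() helpers):

		spin_unlock_bh(&tcp_metrics_lock);
		/* Only take the cond_resched() call path when a
		 * reschedule is actually pending, keeping the common
		 * per-row cost to a single need_resched() check. */
		if (need_resched())
			cond_resched();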


Reviewed-by: David Ahern <dsahern@...nel.org>

