Open Source and information security mailing list archives
 
Message-ID: <e6bda1486e3787b6aeac4024d30df97910366028.camel@redhat.com>
Date: Tue, 03 Oct 2023 10:04:58 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: David Ahern <dsahern@...nel.org>, Eric Dumazet <edumazet@...gle.com>, 
	"David S . Miller"
	 <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>
Cc: Neal Cardwell <ncardwell@...gle.com>, Yuchung Cheng <ycheng@...gle.com>,
  netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next 4/4] tcp_metrics: optimize
 tcp_metrics_flush_all()

On Sat, 2023-09-23 at 13:07 +0200, David Ahern wrote:
> On 9/22/23 4:03 PM, Eric Dumazet wrote:
> > This is inspired by several syzbot reports where
> > tcp_metrics_flush_all() was seen in the traces.
> > 
> > We can avoid acquiring tcp_metrics_lock for empty buckets,
> > and we should add one cond_resched() to break potential long loops.
> > 
> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> > ---
> >  net/ipv4/tcp_metrics.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
> > index 7aca12c59c18483f42276d01252ed0fac326e5d8..c2a925538542b5d787596b7d76705dda86cf48d8 100644
> > --- a/net/ipv4/tcp_metrics.c
> > +++ b/net/ipv4/tcp_metrics.c
> > @@ -898,11 +898,13 @@ static void tcp_metrics_flush_all(struct net *net)
> >  	unsigned int row;
> >  
> >  	for (row = 0; row < max_rows; row++, hb++) {
> > -		struct tcp_metrics_block __rcu **pp;
> > +		struct tcp_metrics_block __rcu **pp = &hb->chain;
> >  		bool match;
> >  
> > +		if (!rcu_access_pointer(*pp))
> > +			continue;
> > +
> >  		spin_lock_bh(&tcp_metrics_lock);
> > -		pp = &hb->chain;
> >  		for (tm = deref_locked(*pp); tm; tm = deref_locked(*pp)) {
> >  			match = net ? net_eq(tm_net(tm), net) :
> >  				!refcount_read(&tm_net(tm)->ns.count);
> > @@ -914,6 +916,7 @@ static void tcp_metrics_flush_all(struct net *net)
> >  			}
> >  		}
> >  		spin_unlock_bh(&tcp_metrics_lock);
> > +		cond_resched();
> 
> I have found cond_resched() can incur some unnecessary overhead if
> called too often. Wrap it in `if (need_resched())`?

Interesting. I could not find any significant overhead from code
inspection - it should be a matter of two conditionals instead of one.
Any idea why?

In any case I think we can follow up on that separately if needed -
i.e. no changes are required here.

Cheers,

Paolo

