Date:	Wed, 5 Jun 2013 15:32:53 -0500
From:	Shawn Bohrer <sbohrer@...advisors.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: Performance regression from routing cache removal?

On Wed, Jun 05, 2013 at 11:13:17AM -0700, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 12:57 -0500, Shawn Bohrer wrote:
> > I've got a performance regression that I've been trying to track down
> > for the last couple of days.  The last known good kernel was 3.4 and
> > now I'm testing 3.10, so there have been a lot of changes in between.
> > The workload I'm testing has a single machine receiving UDP multicast
> > packets across approximately 350 multicast groups.  We've got one
> > socket per multicast address receiving the data.  The traffic consists
> > of small packets and tends to be very bursty.  With the 3.10 kernel I'm
> > occasional spikes in one-way latency that are in the 100-500
> > millisecond range, and regularly in the 10 millisecond range.  I've
> > started a git bisect which has narrowed me down to:
> > 
> > bad f5b0a8743601a4477419171f5046bd07d1c080a0
> > good fa0afcd10951afad2022dda09777d2bf70cdab3d
> > 
> > The remaining commits are all part of the routing cache removal so
> > I've stopped my bisection for now.  I can bisect further if anyone
> > thinks it will be valuable.
> 
> > 
> > Digging through the git history, the ip_check_mc_rcu() call appears
> > to be a result of the routing cache removal.
> 
> Yes, ip_check_mc_rcu() does a linear scan of your ~350 groups:
> 
> for_each_pmc_rcu(in_dev, im) {
>         if (im->multiaddr == mc_addr)
>                 break;
> }

Indeed it does.  So what are my options here?

1) Use fewer multicast addresses.  This may not be possible, since in
some cases I don't control the incoming data/addresses.  I am running
a test sending the same data over ~280 groups instead of ~350, and as
expected the latency does appear to improve.

2) Make that linear scan a hash lookup?  A rough sketch of what I mean
is below.  Are there any downsides or reasons not to do this?

3) ?
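
To make option 2 concrete, here is a minimal userspace sketch of the
kind of lookup I have in mind: a small chained hash table keyed on the
group address, using a kernel-style multiplicative hash.  All names
here (mc_entry, mc_hash, MC_HASH_BITS, ...) are invented for
illustration; this is not the kernel's actual data structure, just the
shape of the idea.

    /* Sketch only: replace the per-device linear list walk with a
     * chained hash table keyed on the multicast group address. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MC_HASH_BITS 5
    #define MC_HASH_SZ   (1u << MC_HASH_BITS)   /* 32 buckets */

    struct mc_entry {
            uint32_t multiaddr;     /* group address */
            struct mc_entry *next;  /* bucket chain */
    };

    static struct mc_entry *mc_hash[MC_HASH_SZ];

    /* Multiplicative hash in the spirit of the kernel's hash_32(). */
    static unsigned int mc_hash_fn(uint32_t addr)
    {
            return (addr * 0x61C88647u) >> (32 - MC_HASH_BITS);
    }

    /* Called at join time, i.e. off the packet hot path. */
    static void mc_add(uint32_t multiaddr)
    {
            unsigned int h = mc_hash_fn(multiaddr);
            struct mc_entry *e = malloc(sizeof(*e));

            e->multiaddr = multiaddr;
            e->next = mc_hash[h];
            mc_hash[h] = e;
    }

    /* Expected chain length is ~groups/buckets, so with ~350 groups
     * and 32 buckets this walks ~11 entries instead of up to 350. */
    static int mc_check(uint32_t mc_addr)
    {
            struct mc_entry *e;

            for (e = mc_hash[mc_hash_fn(mc_addr)]; e; e = e->next)
                    if (e->multiaddr == mc_addr)
                            return 1;
            return 0;
    }

    int main(void)
    {
            mc_add(0xE0000101);     /* 224.0.1.1 */
            printf("%d %d\n", mc_check(0xE0000101), mc_check(0xE0000102));
            return 0;
    }

In the kernel the table would presumably hang off in_dev, be filled
under the usual locks, and be walked under RCU like the current list,
but the lookup-cost argument is the same.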

Thanks,
Shawn

