Date:	Thu, 26 Jul 2012 15:13:12 -0700
From:	Stephen Hemminger <shemminger@...tta.com>
To:	Alexander Duyck <alexander.duyck@...il.com>
Cc:	David Miller <davem@...emloft.net>, eric.dumazet@...il.com,
	netdev@...r.kernel.org
Subject: Re: [PATCH 00/16] Remove the ipv4 routing cache

On Thu, 26 Jul 2012 15:03:39 -0700
Alexander Duyck <alexander.duyck@...il.com> wrote:

> On Thu, Jul 26, 2012 at 2:06 PM, David Miller <davem@...emloft.net> wrote:
> > From: Alexander Duyck <alexander.duyck@...il.com>
> > Date: Thu, 26 Jul 2012 11:26:26 -0700
> >
> >> The previous results were with a slight modification to your earlier
> >> patch.  With this patch applied I am seeing 10.4Mpps with 8 queues,
> >> reaching a maximum of 11.6Mpps with 9 queues.
> >
> > For fun you might want to see what this patch does for your tests,
> > it should cut the number of fib_table_lookup() calls roughly in half.
> 
> So with your patch, Eric's patch, and this most recent patch we are
> now at 11.8Mpps with 8 or 9 queues.  At this point I am starting to
> hit the hardware limits, since the 82599 will typically max out at
> about 12Mpps with 9 queues.
> 
> Here are the latest perf results with all of these patches in place.
> As you predicted, your patch essentially cut the lookup overhead in
> half:
>     10.65%  [k] ixgbe_poll
>      7.77%  [k] fib_table_lookup
>      6.21%  [k] ixgbe_xmit_frame_ring
>      6.08%  [k] __netif_receive_skb
>      4.41%  [k] _raw_spin_lock
>      3.95%  [k] kmem_cache_free
>      3.30%  [k] build_skb
>      3.17%  [k] memcpy
>      2.96%  [k] dev_queue_xmit
>      2.79%  [k] ip_finish_output
>      2.66%  [k] kmem_cache_alloc
>      2.57%  [k] check_leaf
>      2.52%  [k] ip_route_input_noref
>      2.50%  [k] netdev_alloc_frag
>      2.17%  [k] ip_rcv
>      2.16%  [k] __phys_addr
> 
> I will probably do some more poking around over the next few days in
> order to get my head around the fib_table_lookup overhead.
> 
> Thanks,
> 
> Alex

The fib trie stats are global; you may want to either disable
CONFIG_IP_FIB_TRIE_STATS or convert them to per-cpu.
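
A minimal sketch of what the per-cpu conversion could look like (not an
actual patch: the field names follow trie_use_stats in
net/ipv4/fib_trie.c, but the alloc site and helper names here are
illustrative):

#include <linux/percpu.h>

/* Same counters as the existing global struct, but one instance
 * per CPU instead of a single shared copy. */
struct trie_use_stats {
	unsigned int gets;
	unsigned int backtrack;
	unsigned int semantic_match_passed;
	unsigned int semantic_match_miss;
	unsigned int null_node_hit;
	unsigned int resize_node_skipped;
};

struct trie {
	/* ... existing members ... */
#ifdef CONFIG_IP_FIB_TRIE_STATS
	struct trie_use_stats __percpu *stats;	/* was: struct trie_use_stats */
#endif
};

/* at trie creation: t->stats = alloc_percpu(struct trie_use_stats); */

/* hot path: increment this CPU's copy only, so lookups on different
 * CPUs never write the same cache line */
static inline void trie_stats_inc_gets(struct trie *t)
{
	this_cpu_inc(t->stats->gets);
}

/* slow path (e.g. /proc/net/fib_triestat): sum all CPUs' copies */
static unsigned int trie_stats_sum_gets(struct trie *t)
{
	unsigned int sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(t->stats, cpu)->gets;
	return sum;
}

The point is that with a single global struct, every lookup on every
CPU writes the same cache line, which is exactly the kind of overhead
that shows up in a multi-queue forwarding test like the one above.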
