Date:	Fri, 16 Jan 2009 03:12:07 -0800
From:	Divy Le Ray <divy@...lsio.com>
To:	Herbert Xu <herbert@...dor.apana.org.au>
CC:	davem@...emloft.net, netdev@...r.kernel.org,
	Steve Wise <swise@...ngridcomputing.com>
Subject: Re: cxgb3: Replace LRO with GRO


> I presume the server is receiving?
Yes, it is receiving. The adapter is set up with one queue, its IRQ
pinned to CPU1, and irqbalance disabled. CPU1 is pegged in all tests.
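
(For reference, a minimal userspace sketch of that kind of pinning, i.e.
writing a CPU bitmask to /proc/irq/<n>/smp_affinity; the IRQ number below
is just a placeholder, not the actual cxgb3 vector:)

/* Sketch: pin an IRQ to CPU1 by writing a CPU bitmask to
 * /proc/irq/<irq>/smp_affinity. IRQ 100 is a placeholder, not the
 * real cxgb3 MSI-X vector. */
#include <stdio.h>

int main(void)
{
	const int irq = 100;	/* placeholder IRQ number */
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror("fopen");
		return 1;
	}
	/* mask 0x2 = CPU1 only */
	fprintf(f, "2\n");
	fclose(f);
	return 0;
}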
> 
>> without the patch, LRO off: 3.8Gb/s
>> without the patch, LRO on: 6.1Gb/s
>> with the patch, GRO on: 4.8Gb/s
> 
> What about the case of GRO off with the patch? Just checking to
> make sure that nothing else has changed.

I'm getting about 3.4Gb/s with the patch and GRO off.
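
(For context, a rough sketch of what the LRO-to-GRO switch looks like in a
driver RX path, using the generic inet_lro and NAPI GRO APIs; this is not
the actual cxgb3 patch:)

/* Sketch only: the generic shape of an LRO-to-GRO conversion in a
 * driver's RX completion path, not the cxgb3 patch itself. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void rx_deliver(struct napi_struct *napi, struct sk_buff *skb)
{
	/*
	 * Old style: hand the skb to the driver-managed inet_lro engine,
	 * e.g. lro_receive_skb(&adapter->lro_mgr, skb, priv);
	 *
	 * New style: let the stack aggregate the flow via GRO instead.
	 */
	napi_gro_receive(napi, skb);
}

With GRO the aggregation is done by the stack, so the driver no longer has
to manage its own LRO sessions.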

I did some profiling with GRO on and LRO on. Here are typical outputs:
GRO on:
CPU 1 :
      26.445100  copy_user_generic_unrolled          vmlinux
      10.050300  skb_copy_datagram_iovec             vmlinux
       5.076900  memcpy                              vmlinux
       4.636400  process_responses                  cxgb3.ko
       4.625100  __pskb_pull_tail                    vmlinux
       4.463300  eth_type_trans                      vmlinux
       4.420600  inet_gro_receive                    vmlinux
       4.310500  refill_fl                          cxgb3.ko
       3.312700  skb_copy_bits                       vmlinux
       3.155300  put_page                            vmlinux
       2.818200  kfree                               vmlinux
       2.537300  tcp_gro_receive                     vmlinux
       2.422700  dev_gro_receive                     vmlinux
       2.229400  kmem_cache_alloc_node               vmlinux
       1.526000  skb_gro_receive                     vmlinux
       1.319200  free_hot_cold_page                  vmlinux
       1.224800  get_page_from_freelist              vmlinux
       1.112500  __alloc_skb                         vmlinux
       0.910200  kmem_cache_free                     vmlinux
       0.872000  napi_fraginfo_skb                   vmlinux

LRO on:
CPU 1 :
      48.511600  copy_user_generic_unrolled          vmlinux
       5.859600  put_page                            vmlinux
       4.405500  process_responses                  cxgb3.ko
       4.006400  refill_fl                          cxgb3.ko
       2.547200  irq_entries_start                   vmlinux
       2.315900  free_hot_cold_page                  vmlinux
       2.224400  skb_copy_datagram_iovec             vmlinux
       1.985400  tcp_recvmsg                         vmlinux
       1.311700  csum_partial                        vmlinux
       1.276200  _raw_spin_lock                      vmlinux
       1.088000  get_page_from_freelist              vmlinux
       1.016900  get_pageblock_flags_group           vmlinux
       0.920300  memcpy_toiovec                      vmlinux
       0.859200  kfree                               vmlinux
       0.688900  dst_release                         vmlinux
       0.630400  t3_sge_intr_msix_napi              cxgb3.ko
       0.559300  tcp_rcv_established                 vmlinux
       0.472800  kmem_cache_alloc_node               vmlinux
       0.404200  __inet_lookup_established           vmlinux
       0.399100  memset                              vmlinux

I have not looked at the code yet. It's getting late :)

Cheers,
Divy
