Date:   Fri, 13 Jul 2018 11:14:51 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Edward Cree <ecree@...arflare.com>,
        Or Gerlitz <gerlitz.or@...il.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Saeed Mahameed <saeedm@...lanox.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [net-next PATCH] net: ipv4: fix listify ip_rcv_finish in case of
 forwarding



On 07/13/2018 07:19 AM, Edward Cree wrote:
> On 12/07/18 21:10, Or Gerlitz wrote:
>> On Wed, Jul 11, 2018 at 11:06 PM, Jesper Dangaard Brouer
>> <brouer@...hat.com> wrote:
>>> One reason I didn't "just" send a patch is that Edward so far only
>>> implemented netif_receive_skb_list() and not napi_gro_receive_list().
>> sfc doesn't support GRO?! doesn't make sense.. Edward?
> sfc has a flag EFX_RX_PKT_TCP, set according to bits in the RX event; we
>  call napi_{get,gro}_frags() (via efx_rx_packet_gro()) for TCP packets, and
>  netif_receive_skb() (or now the list handling), via efx_rx_deliver(), for
>  non-TCP packets.  So we avoid the GRO overhead for non-TCP workloads.
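
For concreteness, a minimal sketch of that dispatch; efx_rx_dispatch() and
its arguments are invented here, and the TCP branch is simplified to plain
napi_gro_receive() rather than the napi_{get,gro}_frags() pairing the
driver really uses:

/* Sketch only -- not the actual sfc code.  EFX_RX_PKT_TCP is the real
 * flag name from Edward's description; efx_rx_dispatch() and its
 * arguments are invented.
 */
static void efx_rx_dispatch(struct napi_struct *napi, struct sk_buff *skb,
			    u16 flags, struct list_head *rx_list)
{
	if (flags & EFX_RX_PKT_TCP)
		napi_gro_receive(napi, skb);		/* GRO path for TCP */
	else
		list_add_tail(&skb->list, rx_list);	/* batch non-TCP skbs */
}

/* ...then once per NAPI poll, hand over the whole non-TCP batch: */
netif_receive_skb_list(rx_list);

The point upthread is that only the non-TCP branch benefits from the new
list API; there is no napi_gro_receive_list() equivalent for the GRO path
yet.
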
> 
>> The same TCP performance
>>
>> with GRO and no rx-batching
>>
>> or
>>
>> without GRO and with rx-batching
>>
>> is a far from intuitive result.
> I'm also surprised by this.  If I can find the time I'll try to do similar
>  experiments on sfc.
> Jesper, are the CPU utilisations similar in both cases?  You're sure your
>  stream isn't TX-limited?

1) Make sure to test the case where packets of X flows are interleaved on the wire,
instead of being nice to the receiver (trains of packets for each flow).

(Typical case on a fabric, since switches will mix ingress traffic from many ports onto one egress port.)
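
For intuition, a tiny standalone illustration (made-up traffic patterns,
not a benchmark) of why the two arrival orders stress batching so
differently -- round-robin interleaving means the receiver never sees two
consecutive packets of the same flow:

/* Hypothetical illustration: compare the longest same-flow run the
 * receiver sees when flows are interleaved on the wire versus
 * delivered as per-flow trains.  All numbers are made up.
 */
#include <stdio.h>

#define NPKTS  32
#define NFLOWS 4

static int max_run(const int *flow, int n)
{
	int best = 1, run = 1;

	for (int i = 1; i < n; i++) {
		run = (flow[i] == flow[i - 1]) ? run + 1 : 1;
		if (run > best)
			best = run;
	}
	return best;
}

int main(void)
{
	int interleaved[NPKTS], trains[NPKTS];

	for (int i = 0; i < NPKTS; i++) {
		interleaved[i] = i % NFLOWS;           /* A B C D A B C D ... */
		trains[i] = i / (NPKTS / NFLOWS);      /* A A ... B B ... */
	}

	printf("longest same-flow run, interleaved: %d\n",
	       max_run(interleaved, NPKTS));	/* -> 1 */
	printf("longest same-flow run, trains:     %d\n",
	       max_run(trains, NPKTS));		/* -> 8 */
	return 0;
}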

2) Do not test TCP_STREAM traffic, but TCP_RR
(RPC-like traffic, where GRO really cuts down the number of ACK packets)

  TCP_STREAM can hide the GRO gain, since ACKs are naturally decimated under
  sufficient load.
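
As a back-of-envelope sketch of why GRO cuts the ACK count (the merge
factor below is an assumed number, purely illustrative, not a
measurement):

/* With delayed ACKs a receiver ACKs roughly every 2nd MSS segment;
 * if GRO hands TCP one merged skb of k segments, it can answer with
 * a single ACK instead.  gro_merge here is an assumption.
 */
#include <stdio.h>

int main(void)
{
	const long segments  = 1000000;	/* MSS-sized segments received */
	const long gro_merge = 16;	/* assumed avg segments per GRO skb */

	printf("ACKs without GRO (~1 per 2 segments):   %ld\n", segments / 2);
	printf("ACKs with GRO (~1 per %ld-segment skb): %ld\n",
	       gro_merge, segments / gro_merge);
	return 0;
}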


