Message-ID: <01bc9bf5-1780-2650-958f-961bd24b8c26@gmail.com>
Date:   Thu, 6 Sep 2018 19:32:02 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Edward Cree <ecree@...arflare.com>, davem@...emloft.net
Cc:     linux-net-drivers@...arflare.com, netdev@...r.kernel.org
Subject: Re: [PATCH v2 net-next 0/4] net: batched receive in GRO path



On 09/06/2018 07:24 AM, Edward Cree wrote:
> This series listifies part of GRO processing, in a way that allows packets
>  which are not GROed (i.e. for which dev_gro_receive returns GRO_NORMAL)
>  to be passed on to the listified regular receive path.
> I have not listified dev_gro_receive() itself, or the per-protocol GRO
>  callback, since GRO's need to hold packets on lists under napi->gro_hash
>  makes keeping the packets on other lists awkward, and since the GRO control
>  block state of held skbs can refer to only one 'new' skb at a time.
>  Nonetheless, the batching of the calling code yields some performance gains
>  in the GRO case as well.
> 
> Herewith the performance figures obtained in a NetPerf TCP stream test (with
>  four streams, and irqs bound to a single core):
> net-next: 7.166 Gbit/s (sigma 0.435)
> after #2: 7.715 Gbit/s (sigma 0.145) = datum + 7.7%
> after #4: 7.890 Gbit/s (sigma 0.217) = datum + 10.1%
> (Note that the 'net-next' results were distinctly bimodal, with two results
>  of about 8 Gbit/s and the remaining ten around 7 Gbit/s.  I don't have a
>  good explanation for this.)
> And with GRO disabled through ethtool -K (thus simulating traffic which is
>  not GRO-able but, being TCP, is still passed to the GRO entry point):
> net-next: 4.756 Gbit/s (sigma 0.240)
> after #4: 5.355 Gbit/s (sigma 0.232) = datum + 12.6%
> 
> v2: Rebased on latest net-next.  Removed RFC tags.  Otherwise unchanged
>  owing to lack of comments on v1.
>
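
[For context, a minimal sketch of the batching pattern the cover letter
describes: skbs for which dev_gro_receive() returns GRO_NORMAL are queued
on a per-NAPI list and later flushed through netif_receive_skb_list() in a
single call, rather than via one netif_receive_skb() each.  The helper
names, the rx_list/rx_count fields and the flush threshold are assumptions
for illustration, not necessarily what the series implements:

    #include <linux/list.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch only: queue one GRO_NORMAL skb for batched delivery.
     * rx_list/rx_count are hypothetical per-NAPI fields, and the
     * flush threshold of 8 is an arbitrary assumption. */
    static void gro_normal_one(struct napi_struct *napi, struct sk_buff *skb)
    {
            list_add_tail(&skb->list, &napi->rx_list);
            if (++napi->rx_count >= 8) {
                    /* Deliver the whole batch in one trip down the stack. */
                    netif_receive_skb_list(&napi->rx_list);
                    INIT_LIST_HEAD(&napi->rx_list);
                    napi->rx_count = 0;
            }
    }

    /* Flush any remainder at the end of the NAPI poll. */
    static void gro_normal_flush(struct napi_struct *napi)
    {
            if (napi->rx_count) {
                    netif_receive_skb_list(&napi->rx_list);
                    INIT_LIST_HEAD(&napi->rx_list);
                    napi->rx_count = 0;
            }
    }

The intended win is that one list walk shares warm instruction and data
caches across the whole batch, amortising the per-packet cost of entering
the stack.]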

Your performance numbers are not convincing, since a TCP stream test should
already get the nominal GRO gains.

Adding this complexity and icache pressure needs more experimental results
to justify it.

What about RPC workloads (e.g. 100 concurrent netperf -t TCP_RR -- -r 8000,8000)?
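
[For concreteness, one way to drive that suggestion, assuming a netserver
instance is already running on the device under test; $TARGET is a
placeholder for its address:

    for i in $(seq 100); do
            netperf -H "$TARGET" -t TCP_RR -- -r 8000,8000 &
    done
    wait
]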

Thanks.
