Message-ID: <420dc52e-8b39-a083-941e-d87eb941771a@fb.com>
Date:   Fri, 24 Mar 2017 16:26:25 -0700
From:   Alexei Starovoitov <ast@...com>
To:     Saeed Mahameed <saeedm@...lanox.com>,
        "David S. Miller" <davem@...emloft.net>
CC:     <netdev@...r.kernel.org>, <kernel-team@...com>
Subject: Re: [PATCH net-next 00/12] Mellanox mlx5e XDP performance optimization

On 3/24/17 2:52 PM, Saeed Mahameed wrote:
> Hi Dave,
>
> This series provides some performance optimizations for the mlx5e
> driver, especially for XDP TX flows.
>
> The 1st patch is a simple change of rmb() to dma_rmb() in the CQE fetch
> routine, which shows a huge gain for both RX and TX packet rates.
>
> The 2nd patch removes the write-combining logic from the driver TX
> handler, simplifying the TX logic while improving TX CPU utilization.
>
> The remaining patches refactor the driver TX flows to allow some
> significant XDP TX improvements.
>
> More details and per-patch performance numbers (each measured against
> the preceding patch) can be found in the individual commit messages.
>
> Overall performance improvements
>   System: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
>
> Test case                   Baseline      Now      Improvement
> ---------------------------------------------------------------
> TX packets (24 threads)     45Mpps        54Mpps      20%
> TC stack Drop (1 core)      3.45Mpps      3.6Mpps     5%
> XDP Drop      (1 core)      14Mpps        16.9Mpps    20%
> XDP TX        (1 core)      10.4Mpps      13.7Mpps    31%

Excellent work!
All patches look great, so for the series:
Acked-by: Alexei Starovoitov <ast@...nel.org>
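
For context, the rmb() -> dma_rmb() change in the 1st patch boils down
to something like the sketch below. This is untested kernel-style
pseudocode; all names (my_cq, my_cqe, my_next_cqe, the ownership
encoding) are hypothetical and not the actual mlx5e code. The point is
only where dma_rmb() sits relative to the ownership-bit check:

/* kernel context assumed: <linux/types.h>, <asm/barrier.h> */

struct my_cqe {
	u8 data[60];
	u8 op_own;		/* ownership bit, written last by HW */
};

struct my_cq {
	struct my_cqe *buf;
	u32 cons_index;
	u32 size;		/* power of two */
};

/* hypothetical ownership test: the bit toggles on each ring wrap */
static bool my_cqe_sw_owned(struct my_cqe *cqe, u32 ci, u32 size)
{
	return (cqe->op_own & 1) == ((ci / size) & 1);
}

static struct my_cqe *my_next_cqe(struct my_cq *cq)
{
	struct my_cqe *cqe = &cq->buf[cq->cons_index & (cq->size - 1)];

	if (!my_cqe_sw_owned(cqe, cq->cons_index, cq->size))
		return NULL;	/* HW has not finished this CQE yet */

	/*
	 * The CQE lives in coherent DMA memory, so we only need to
	 * order the ownership-bit read above against the reads of the
	 * CQE payload that follow.  dma_rmb() gives exactly that and
	 * is much cheaper than a full rmb() on x86.  Was: rmb();
	 */
	dma_rmb();

	return cqe;
}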

In patch 12 I noticed that inline_mode is being evaluated.
I think for XDP queues it's guaranteed to be fixed.
Can we optimize that path a little bit more as well?
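
Something like the following sketch is what I have in mind; the names
here (my_xdpsq, MY_INLINE_NONE, and so on) are made up rather than the
real mlx5e ones, it just shows latching the decision once at SQ open
instead of re-evaluating inline_mode per frame in the hot path:

/* kernel context assumed: <linux/types.h> */

enum { MY_INLINE_NONE = 0 };	/* hypothetical inline-mode value */

struct my_xdpsq {
	bool inline_hdrs;	/* decided once when the SQ opens */
	/* ... rest of the SQ state ... */
};

void my_copy_inline_hdrs(struct my_xdpsq *sq, void *data, u16 len);

static void my_xdpsq_open(struct my_xdpsq *sq, u8 min_inline_mode)
{
	/* inline_mode cannot change for the lifetime of an XDP SQ */
	sq->inline_hdrs = min_inline_mode != MY_INLINE_NONE;
}

static void my_xdpsq_xmit(struct my_xdpsq *sq, void *data, u16 len)
{
	if (sq->inline_hdrs)	/* cheap flag test, no mode decoding */
		my_copy_inline_hdrs(sq, data, len);

	/* ... build the WQE and ring the doorbell ... */
}
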
Thanks!
