Message-ID: <22866952-6bdf-2529-91a1-fb31bd2f2c2d@mellanox.com>
Date:   Mon, 12 Mar 2018 12:16:40 +0200
From:   Tariq Toukan <tariqt@...lanox.com>
To:     Tariq Toukan <tariqt@...lanox.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        netdev@...r.kernel.org,
        Björn Töpel <bjorn.topel@...el.com>,
        magnus.karlsson@...el.com
Cc:     eugenia@...lanox.com, Jason Wang <jasowang@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Eran Ben Elisha <eranbe@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Subject: Re: [bpf-next V3 PATCH 13/15] mlx5: use page_pool for
 xdp_return_frame call



On 12/03/2018 12:08 PM, Tariq Toukan wrote:
> 
> 
> On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
>> This patch shows how it is possible to have both the driver-local page
>> cache, which uses an elevated refcnt for "catching"/avoiding SKB
>> put_page, and, at the same time, have pages returned to the
>> page_pool from the ndo_xdp_xmit DMA completion.
>>
>> Performance is surprisingly good. Tested DMA-TX completion on ixgbe,
>> which calls "xdp_return_frame", which in turn calls page_pool_put_page().
>> Stats show DMA-TX completion runs on CPU#9 and mlx5 RX runs on CPU#5.
>> (Internally, page_pool uses ptr_ring, which is what gives the good
>> cross-CPU performance.)
>>
>> Show adapter(s) (ixgbe2 mlx5p2) statistics (ONLY that changed!)
>> Ethtool(ixgbe2  ) stat:    732863573 (    732,863,573) <= tx_bytes /sec
>> Ethtool(ixgbe2  ) stat:    781724427 (    781,724,427) <= tx_bytes_nic /sec
>> Ethtool(ixgbe2  ) stat:     12214393 (     12,214,393) <= tx_packets /sec
>> Ethtool(ixgbe2  ) stat:     12214435 (     12,214,435) <= tx_pkts_nic /sec
>> Ethtool(mlx5p2  ) stat:     12211786 (     12,211,786) <= rx3_cache_empty /sec
>> Ethtool(mlx5p2  ) stat:     36506736 (     36,506,736) <= rx_64_bytes_phy /sec
>> Ethtool(mlx5p2  ) stat:   2336430575 (  2,336,430,575) <= rx_bytes_phy /sec
>> Ethtool(mlx5p2  ) stat:     12211786 (     12,211,786) <= rx_cache_empty /sec
>> Ethtool(mlx5p2  ) stat:     22823073 (     22,823,073) <= rx_discards_phy /sec
>> Ethtool(mlx5p2  ) stat:      1471860 (      1,471,860) <= rx_out_of_buffer /sec
>> Ethtool(mlx5p2  ) stat:     36506715 (     36,506,715) <= rx_packets_phy /sec
>> Ethtool(mlx5p2  ) stat:   2336542282 (  2,336,542,282) <= rx_prio0_bytes /sec
>> Ethtool(mlx5p2  ) stat:     13683921 (     13,683,921) <= rx_prio0_packets /sec
>> Ethtool(mlx5p2  ) stat:    821015537 (    821,015,537) <= rx_vport_unicast_bytes /sec
>> Ethtool(mlx5p2  ) stat:     13683608 (     13,683,608) <= rx_vport_unicast_packets /sec
>>
>> Before this patch: single-flow performance was 6 Mpps, and if I started
>> two flows the collective performance dropped to 4 Mpps, because we hit
>> the page allocator lock (further negative scaling occurs).
>>
>> V2: Adjustments requested by Tariq
>>   - Changed page_pool_create to never return NULL, only ERR_PTR,
>>     as this simplifies error handling in drivers.
>>   - Save a branch in mlx5e_page_release
>>   - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
>>
>> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
>> ---
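
For reference, the recycling path described above is: the TX DMA completion
calls xdp_return_frame(), which for MEM_TYPE_PAGE_POOL frames hands the page
back to the originating RX page_pool rather than doing a plain put_page().
A rough, hypothetical sketch of such a completion handler follows; the
mydrv_* names and the descriptor layout are made up for illustration, and
xdp_return_frame() is assumed in its upstream form taking a struct xdp_frame *.

#include <linux/dma-mapping.h>
#include <net/xdp.h>

/* Illustrative TX descriptor for frames queued via ndo_xdp_xmit. */
struct mydrv_tx_desc {
        struct xdp_frame *xdpf;
        dma_addr_t dma;
        u32 len;
};

static void mydrv_clean_xdp_tx_desc(struct device *dev,
                                    struct mydrv_tx_desc *desc)
{
        /* The NIC is done with the buffer; release the DMA mapping. */
        dma_unmap_single(dev, desc->dma, desc->len, DMA_TO_DEVICE);

        /*
         * xdp_return_frame() inspects xdpf->mem.type: for
         * MEM_TYPE_PAGE_POOL it returns the page to the RX queue's
         * page_pool (internally via page_pool_put_page()), which is
         * what enables the cross-CPU recycling measured above; other
         * memory types fall back to a regular page free.
         */
        xdp_return_frame(desc->xdpf);
        desc->xdpf = NULL;
}
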
> 
> I am running perf tests with your series. I sense a drastic degradation 
> in regular TCP flows; I'm double-checking the numbers now...

Well, there is indeed a huge performance degradation whenever the regular 
(non-XDP) flows use the new page pool. This cannot be merged before it is 
fixed.

If I disable the local page cache, throughput drops as low as hundreds of 
Mbps in TCP stream tests.
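
As a side note on the V2 item quoted above (page_pool_create() returning
ERR_PTR() instead of NULL): the RX-side setup then reduces to a plain
IS_ERR() check plus registering the pool as the queue's XDP memory model.
A rough sketch is below; struct mydrv_rq and the parameter values are
illustrative only, and the page_pool_params / xdp_rxq_info_reg_mem_model()
calls are assumed in their upstream form from this series.

#include <linux/err.h>
#include <linux/dma-mapping.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Illustrative per-RX-queue state. */
struct mydrv_rq {
        struct page_pool *page_pool;
        struct xdp_rxq_info xdp_rxq;    /* assumed already registered
                                         * via xdp_rxq_info_reg() */
};

static int mydrv_rq_create_page_pool(struct mydrv_rq *rq, struct device *dev,
                                     int node, u32 pool_size)
{
        struct page_pool_params pp_params = {
                .order     = 0,            /* order-0 (4K) pages */
                .flags     = 0,
                .pool_size = pool_size,    /* sized to the RX ring */
                .nid       = node,
                .dev       = dev,
                .dma_dir   = DMA_BIDIRECTIONAL,
        };
        int err;

        /* page_pool_create() never returns NULL, only a valid pointer
         * or an ERR_PTR(), so the error path is a single IS_ERR(). */
        rq->page_pool = page_pool_create(&pp_params);
        if (IS_ERR(rq->page_pool))
                return PTR_ERR(rq->page_pool);

        /* Tell the XDP core that frames from this queue come from a
         * page_pool, so xdp_return_frame() can recycle them there. */
        err = xdp_rxq_info_reg_mem_model(&rq->xdp_rxq, MEM_TYPE_PAGE_POOL,
                                         rq->page_pool);
        if (err)
                page_pool_destroy(rq->page_pool);

        return err;
}
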
