Message-ID: <712347db-fbfb-5e2e-813f-464c855fd624@mellanox.com>
Date: Thu, 22 Mar 2018 18:40:06 +0200
From: Tariq Toukan <tariqt@...lanox.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org,
	Björn Töpel <bjorn.topel@...el.com>,
magnus.karlsson@...el.com
Cc: eugenia@...lanox.com, Jason Wang <jasowang@...hat.com>,
John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [bpf-next V4 PATCH 13/15] mlx5: use page_pool for
xdp_return_frame call
On 22/03/2018 4:22 PM, Jesper Dangaard Brouer wrote:
> This patch shows how it is possible to have both the driver-local page
> cache, which uses an elevated refcnt for "catching"/avoiding SKB
> put_page, and, at the same time, have pages returned to the
> page_pool from the ndo_xdp_xmit DMA completion.
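To make the two recycle levels concrete, here is a rough sketch of the
RX-side page-release decision described above. This is not the mlx5 code
from the patch; the my_rq/my_page_release/my_rx_cache_put() names are
purely illustrative, and the two-argument page_pool_put_page() signature
is assumed from this series.

/* Illustrative sketch only: two-level recycling. A page is first
 * offered to the driver-local cache (the elevated-refcnt scheme);
 * if that fails, it is handed back to the page_pool.
 */
static void my_page_release(struct my_rq *rq, struct page *page, bool recycle)
{
        if (recycle && my_rx_cache_put(rq, page))
                return;         /* kept in the driver-local page cache */

        /* Not cached locally: return the page to the page_pool, which
         * recycles it or, as a last resort, releases it back to the
         * page allocator.
         */
        page_pool_put_page(rq->page_pool, page);
}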
>
> Performance is surprisingly good. Tested DMA-TX completion on ixgbe,
> which calls "xdp_return_frame", which in turn calls page_pool_put_page().
> Stats show DMA-TX completion runs on CPU#9 and mlx5 RX runs on CPU#5.
> (Internally page_pool uses a ptr_ring, which is what gives the good
> cross-CPU performance.)
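For reference, a minimal sketch of the DMA-TX completion path being
measured here, assuming the xdp_frame-based xdp_return_frame() from this
series; the my_tx_buf structure and the unmap details are illustrative
only.

/* Illustrative sketch of an ndo_xdp_xmit DMA-TX completion handler. */
static void my_xdp_tx_complete(struct device *dma_dev, struct my_tx_buf *tb)
{
        /* Unmap the buffer that was transmitted via ndo_xdp_xmit */
        dma_unmap_single(dma_dev, tb->dma_addr, tb->len, DMA_TO_DEVICE);

        /* For frames registered as MEM_TYPE_PAGE_POOL this resolves to
         * page_pool_put_page(); the page travels back to the RX CPU
         * through the pool's internal ptr_ring.
         */
        xdp_return_frame(tb->xdpf);
}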
>
> Show adapter(s) (ixgbe2 mlx5p2) statistics (ONLY that changed!)
> Ethtool(ixgbe2 ) stat: 732863573 ( 732,863,573) <= tx_bytes /sec
> Ethtool(ixgbe2 ) stat: 781724427 ( 781,724,427) <= tx_bytes_nic /sec
> Ethtool(ixgbe2 ) stat: 12214393 ( 12,214,393) <= tx_packets /sec
> Ethtool(ixgbe2 ) stat: 12214435 ( 12,214,435) <= tx_pkts_nic /sec
> Ethtool(mlx5p2 ) stat: 12211786 ( 12,211,786) <= rx3_cache_empty /sec
> Ethtool(mlx5p2 ) stat: 36506736 ( 36,506,736) <= rx_64_bytes_phy /sec
> Ethtool(mlx5p2 ) stat: 2336430575 ( 2,336,430,575) <= rx_bytes_phy /sec
> Ethtool(mlx5p2 ) stat: 12211786 ( 12,211,786) <= rx_cache_empty /sec
> Ethtool(mlx5p2 ) stat: 22823073 ( 22,823,073) <= rx_discards_phy /sec
> Ethtool(mlx5p2 ) stat: 1471860 ( 1,471,860) <= rx_out_of_buffer /sec
> Ethtool(mlx5p2 ) stat: 36506715 ( 36,506,715) <= rx_packets_phy /sec
> Ethtool(mlx5p2 ) stat: 2336542282 ( 2,336,542,282) <= rx_prio0_bytes /sec
> Ethtool(mlx5p2 ) stat: 13683921 ( 13,683,921) <= rx_prio0_packets /sec
> Ethtool(mlx5p2 ) stat: 821015537 ( 821,015,537) <= rx_vport_unicast_bytes /sec
> Ethtool(mlx5p2 ) stat: 13683608 ( 13,683,608) <= rx_vport_unicast_packets /sec
>
> Before this patch: single-flow performance was 6 Mpps, and if I started
> two flows the collective performance dropped to 4 Mpps, because we hit
> the page allocator lock (further negative scaling occurs).
>
> V2: Adjustments requested by Tariq
> - Changed page_pool_create to never return NULL, only
> ERR_PTR, as this simplifies err handling in drivers (see the
> sketch after this list).
> - Save a branch in mlx5e_page_release
> - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
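For reference, the resulting setup/error-handling pattern in a driver
would look roughly like the fragment below; the page_pool_params fields
and the error label are assumptions based on this series, not the exact
mlx5 code.

        struct page_pool_params pp_params = { 0 };

        pp_params.order     = 0;
        pp_params.pool_size = pool_size;   /* sized to the RQ, see above */
        pp_params.nid       = node;
        pp_params.dev       = dev;
        pp_params.dma_dir   = DMA_FROM_DEVICE;

        /* page_pool_create() never returns NULL; a single IS_ERR()
         * check covers all failure modes.
         */
        rq->page_pool = page_pool_create(&pp_params);
        if (IS_ERR(rq->page_pool)) {
                err = PTR_ERR(rq->page_pool);
                rq->page_pool = NULL;
                goto err_destroy_rq;       /* illustrative error label */
        }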
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> ---
Reviewed-by: Tariq Toukan <tariqt@...lanox.com>