Message-ID: <20180319141217.416d269a@redhat.com>
Date: Mon, 19 Mar 2018 14:12:17 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Tariq Toukan <tariqt@...lanox.com>
Cc: netdev@...r.kernel.org,
Björn Töpel <bjorn.topel@...el.com>,
magnus.karlsson@...el.com, eugenia@...lanox.com,
Jason Wang <jasowang@...hat.com>,
John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
brouer@...hat.com
Subject: Re: [bpf-next V3 PATCH 13/15] mlx5: use page_pool for
xdp_return_frame call
On Mon, 12 Mar 2018 15:20:06 +0200 Tariq Toukan <tariqt@...lanox.com> wrote:
> On 12/03/2018 12:16 PM, Tariq Toukan wrote:
> >
> > On 12/03/2018 12:08 PM, Tariq Toukan wrote:
> >>
> >> On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
> >>> This patch shows how it is possible to have both the driver-local page
> >>> cache, which uses an elevated refcnt for "catching"/avoiding SKB
> >>> put_page, and, at the same time, have pages returned to the
> >>> page_pool from the ndo_xdp_xmit DMA completion.
> >>>
[...]
> >>>
> >>> Before this patch: single-flow performance was 6Mpps, and if I started
> >>> two flows the collective performance dropped to 4Mpps, because we hit
> >>> the page allocator lock (further negative scaling occurs).
> >>>
> >>> V2: Adjustments requested by Tariq
> >>> - Changed page_pool_create return codes to not return NULL, only
> >>>   ERR_PTR, as this simplifies error handling in drivers.
> >>> - Save a branch in mlx5e_page_release
> >>> - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
> >>>
> >>> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
> >>> ---
> >>
> >> I am running perf tests with your series. I sense a drastic
> >> degradation in regular TCP flows; I'm double-checking the numbers now...
> >
> > Well, there's a huge performance degradation indeed, whenever the
> > regular flows (non-XDP) use the new page pool. Cannot merge before
> > fixing this.
> >
> > If I disable the local page-cache, numbers get as low as 100's of Mbps
> > in TCP stream tests.
>
> It seems that the page-pool doesn't fit as a general fallback (when the
> page in the local rx cache is busy), as the refcnt is elevated/changing:
I see the issue. I have to go over the details in the driver, but I
think it should be sufficient to remove the WARN(). When the page_pool
was integrated with the MM-layer and invoked from the put_page() call
itself, this would indicate a likely API misuse. But now, with the page
refcnt based recycle tricks, it is the norm (for non-XDP traffic) that
put_page() is called without page_pool's knowledge.
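
To illustrate the idea, a rough sketch (this is only a sketch, not the
actual page_pool code; the two helpers below are hypothetical names):

static void sketch_pp_put_page(struct page_pool *pool, struct page *page)
{
	/* Sole owner of the page: safe to recycle it into the pool */
	if (page_ref_count(page) == 1) {
		sketch_recycle(pool, page);	/* hypothetical helper */
		return;
	}
	/* The SKB/non-XDP path holds its own reference and will call
	 * put_page() without page_pool's knowledge.  That is now the
	 * normal case, so just release our side of the page back to
	 * the page allocator -- no WARN needed.
	 */
	sketch_release_to_page_allocator(pool, page);	/* hypothetical */
}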
> [ 7343.086102] ------------[ cut here ]------------
> [ 7343.086103] __page_pool_put_page() violating page_pool invariance refcnt:0
> [ 7343.086114] WARNING: CPU: 1 PID: 17 at net/core/page_pool.c:291 __page_pool_put_page+0x7c/0xa0
Here page_pool actually catches the page refcnt race correctly, and
properly returns the page to the page allocator (via __put_page).
I do notice (in the page_pool code) that, in case page_pool handles the
DMA mapping (which isn't the case yet), I'm missing a DMA unmap release
in the code.
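
Something along these lines, as a sketch (the pool->p.* fields, the flag
and the dma-addr helper below are assumptions, not necessarily the actual
page_pool code):

static void sketch_pp_release_page(struct page_pool *pool, struct page *page)
{
	/* If page_pool did the dma_map_page() when the page was
	 * allocated, it must also unmap it before the page leaves the
	 * pool for good (e.g. when handed back to the page allocator).
	 */
	if (pool->p.flags & SKETCH_PP_FLAG_DMA_MAP)	/* hypothetical flag */
		dma_unmap_page(pool->p.dev,
			       sketch_page_dma_addr(page),	/* hypothetical */
			       PAGE_SIZE << pool->p.order,
			       pool->p.dma_dir);

	put_page(page);
}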
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer