Message-ID: <66f0da5a-388d-5ddc-4bb7-441f6df4af96@mellanox.com>
Date:   Tue, 20 Mar 2018 09:43:56 +0200
From:   Tariq Toukan <tariqt@...lanox.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>,
        Tariq Toukan <tariqt@...lanox.com>
Cc:     netdev@...r.kernel.org,
        Björn Töpel <bjorn.topel@...el.com>,
        magnus.karlsson@...el.com, eugenia@...lanox.com,
        Jason Wang <jasowang@...hat.com>,
        John Fastabend <john.fastabend@...il.com>,
        Eran Ben Elisha <eranbe@...lanox.com>,
        Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Subject: Re: [bpf-next V3 PATCH 13/15] mlx5: use page_pool for
 xdp_return_frame call



On 19/03/2018 3:12 PM, Jesper Dangaard Brouer wrote:
> On Mon, 12 Mar 2018 15:20:06 +0200 Tariq Toukan <tariqt@...lanox.com> wrote:
> 
>> On 12/03/2018 12:16 PM, Tariq Toukan wrote:
>>>
>>> On 12/03/2018 12:08 PM, Tariq Toukan wrote:
>>>>
>>>> On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
>>>>> This patch shows how it is possible to have both the driver-local page
>>>>> cache, which uses an elevated refcnt for "catching"/avoiding SKB
>>>>> put_page, and, at the same time, have pages returned to the
>>>>> page_pool from the ndo_xdp_xmit DMA completion.
>>>>>
> [...]
>>>>>
>>>>> Before this patch: single-flow performance was 6 Mpps, and if I started
>>>>> two flows the collective performance dropped to 4 Mpps, because we hit the
>>>>> page allocator lock (further negative scaling occurs).
>>>>>
>>>>> V2: Adjustments requested by Tariq
>>>>>    - Changed page_pool_create() to never return NULL, only
>>>>>      ERR_PTR, as this simplifies error handling in drivers.
>>>>>    - Save a branch in mlx5e_page_release
>>>>>    - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
>>>>>
>>>>> Signed-off-by: Jesper Dangaard Brouer <brouer@...hat.com>
>>>>> ---
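
As a side note on the ERR_PTR change in the V2 notes above, driver-side
setup then reduces to a single IS_ERR()/PTR_ERR() pair. A minimal sketch
under that assumption; struct my_rq and its fields are illustrative
placeholders, not the actual mlx5 code:

#include <net/page_pool.h>

/* Hypothetical driver setup path; struct my_rq is a placeholder. */
static int my_rq_create_page_pool(struct my_rq *rq)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= rq->ring_size,	/* sized to the RX ring */
		.nid		= rq->numa_node,
		.dev		= rq->dma_dev,		/* struct device * used for DMA */
		.dma_dir	= DMA_BIDIRECTIONAL,
	};

	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool)) {
		/* Failure is always ERR_PTR(), never NULL, so one
		 * IS_ERR() check covers all error handling. */
		return PTR_ERR(rq->page_pool);
	}
	return 0;
}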
>>>>
>>>> I am running perf tests with your series. I sense a drastic
>>>> degradation in regular TCP flows; I'm double-checking the numbers now...
>>>
>>> Well, there's indeed a huge performance degradation whenever the
>>> regular (non-XDP) flows use the new page pool. We cannot merge before
>>> fixing this.
>>>
>>> If I disable the local page-cache, numbers drop as low as a few hundred
>>> Mbps in TCP stream tests.
>>
>> It seems that the page_pool doesn't fit as a general fallback (when the
>> page in the local RX cache is busy), as the refcnt is elevated/changing:
> 
> I see the issue.  I have to go over the details in the driver, but I
> think it should be sufficient to remove the WARN().  Back when the
> page_pool was integrated with the MM layer, being invoked from the
> put_page() call itself, this would have indicated a likely API misuse.
> But now, with the page-refcnt-based recycle tricks, it is the norm (for
> non-XDP) that put_page() is called without the page_pool's knowledge.
> 
I see, I'll remove the WARN and test.
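
For reference, with the WARN() dropped the put path being discussed boils
down to something like the following simplified sketch; this is not the
actual net/core/page_pool.c implementation, and my_pool_recycle() is a
placeholder for the pool's internal recycle step:

/* Simplified sketch of the put path under discussion. */
static void my_page_pool_put_page(struct page_pool *pool, struct page *page)
{
	if (page_ref_count(page) == 1) {
		/* The pool is the only owner: recycle the page into the
		 * pool's cache/ring so the next RX alloc can reuse it. */
		my_pool_recycle(pool, page);
		return;
	}

	/* The refcnt was elevated/changed by the non-XDP (SKB) path,
	 * where put_page() runs without the page_pool's knowledge.
	 * With the WARN() removed this is just the fall-through case:
	 * hand the page back to the page allocator. */
	put_page(page);
}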

>   
>> [ 7343.086102] ------------[ cut here ]------------
>> [ 7343.086103] __page_pool_put_page() violating page_pool invariance refcnt:0
>> [ 7343.086114] WARNING: CPU: 1 PID: 17 at net/core/page_pool.c:291 __page_pool_put_page+0x7c/0xa0
> 
> Here page_pool actually catches the page refcnt race correctly, and
> properly returns the page to the page allocator (via __put_page).
> 
> I do notice (in the page_pool code) that, in case page_pool handles the
> DMA mapping (which isn't the case yet), I'm missing a DMA unmap release
> in the code.
> 
I didn't get this one. Neither DMA map nor unmap exists in the page_pool
yet, no?
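
For context, a hypothetical sketch of what "page_pool handles DMA mapping"
could eventually look like: map at allocation time, and do the DMA unmap
release when the pool finally gives a page back to the page allocator.
None of this exists in page_pool at this point, and
my_pool_store_dma()/my_pool_get_dma() are placeholder helpers for wherever
the dma_addr would be stashed:

/* Hypothetical, not in the page_pool code yet: map on allocation ... */
static struct page *my_pool_alloc_and_map(struct page_pool *pool)
{
	struct page *page = alloc_page(GFP_ATOMIC);
	dma_addr_t dma;

	if (!page)
		return NULL;

	dma = dma_map_page(pool->p.dev, page, 0, PAGE_SIZE, pool->p.dma_dir);
	if (dma_mapping_error(pool->p.dev, dma)) {
		put_page(page);
		return NULL;
	}
	my_pool_store_dma(pool, page, dma);	/* placeholder: remember dma_addr */
	return page;
}

/* ... and the matching DMA unmap release when the page leaves the pool
 * for good (the step noted as missing above). */
static void my_pool_release_page(struct page_pool *pool, struct page *page)
{
	dma_unmap_page(pool->p.dev, my_pool_get_dma(pool, page),
		       PAGE_SIZE, pool->p.dma_dir);
	put_page(page);
}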
