Message-ID: <8c12e15c-26b1-0028-e023-86bb62c7d60b@mellanox.com>
Date:   Tue, 12 Feb 2019 14:58:34 +0000
From:   Tariq Toukan <tariqt@...lanox.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
CC:     Eric Dumazet <eric.dumazet@...il.com>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Matthew Wilcox <willy@...radead.org>,
        David Miller <davem@...emloft.net>,
        "toke@...hat.com" <toke@...hat.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "mgorman@...hsingularity.net" <mgorman@...hsingularity.net>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [RFC, PATCH] net: page_pool: Don't use page->private to store
 dma_addr_t



On 2/12/2019 3:49 PM, Jesper Dangaard Brouer wrote:
> On Tue, 12 Feb 2019 12:39:59 +0000
> Tariq Toukan <tariqt@...lanox.com> wrote:
> 
>> On 2/11/2019 7:14 PM, Eric Dumazet wrote:
>>>
>>> On 02/11/2019 12:53 AM, Tariq Toukan wrote:
>>>>   
>>>    
>>>> Hi,
>>>>
>>>> It's great to use the struct page to store its dma mapping, but I am
>>>> worried about extensibility.
>>>> page_pool is evolving, and it would need several more per-page fields.
>>>> One of them would be pageref_bias, a planned optimization to reduce
>>>> the number of costly atomic pageref operations (and replace existing
>>>> code in several drivers).
>>>>   
>>>
>>> But the point about pageref_bias is to place it in a different
>>> cache line than "struct page".
> 
> Yes, exactly.
> 
> 
>>> The major cost is having a cache line bouncing between producer and
>>> consumer.
>>
>> pageref_bias is meant to be dirtied only by the page requester, i.e. the
>> NIC driver / page_pool.
>> All other components (basically, SKB release flow / put_page) should
>> continue working with the atomic page_refcnt, and not dirty the
>> pageref_bias.
>>
>> However, what bothers me more is another issue.
>> The optimization doesn't cleanly combine with the new page_pool
>> direction of maintaining a queue of "available" pages, as the put_page
>> flow would need to read pageref_bias, asynchronously, and act accordingly.
>>
>> The suggested hook in put_page (to catch the 2 -> 1 "biased refcnt"
>> transition) causes a problem for the traditional pageref_bias idea, as
>> it implies a new point at which the pageref_bias field is read
>> *asynchronously*. This would risk missing this critical 2 -> 1
>> transition! Unless pageref_bias is atomic...
> 
> I want to stop you here...
> 
> It seems to me that you are trying to shoehorn a refcount
> optimization into page_pool.  The page_pool is optimized for the XDP
> case of one-frame-per-page, where we can avoid changing the refcount,
> and trade off memory usage for speed.  It is compatible with the
> elevated refcount usage, but that is not the optimization target.
> 
> If the case you are optimizing for is "packing" more frames into a
> page, then the page_pool might be the wrong choice.  To me it would
> make more sense to create another enum xdp_mem_type that generalizes
> the pageref_bias tricks already used by some drivers.
> 

Hi Jesper,

We share the same interest. I tried to combine the pageref_bias 
optimization on top of the put_page hook, but it turns out it doesn't 
fit. That's all.
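
For reference, the pageref_bias pattern as drivers like ixgbe/i40e do 
it today looks roughly like the sketch below (all names are 
illustrative, not a proposed page_pool API):

	#include <linux/mm.h>	/* page_ref_add(), page_ref_count() */

	/* One atomic refcount update up front, then a cheap
	 * driver-local counter per frame: handing a fragment to the
	 * stack later just does info->pagecnt_bias--.
	 */
	struct rx_page_info {
		struct page *page;
		unsigned int pagecnt_bias;	/* driver-local, not atomic */
	};

	static void rx_page_charge(struct rx_page_info *info)
	{
		page_ref_add(info->page, USHRT_MAX - 1);
		info->pagecnt_bias = USHRT_MAX;
	}

	static bool rx_page_reusable(struct rx_page_info *info)
	{
		/* Only the driver dirties pagecnt_bias; the SKB release
		 * path touches only the atomic refcount via put_page().
		 * When the two counts meet, the driver again holds all
		 * references and may recycle the page.
		 */
		if (page_ref_count(info->page) != info->pagecnt_bias)
			return false;

		/* Recharge before the bias runs out. */
		if (unlikely(info->pagecnt_bias == 1)) {
			page_ref_add(info->page, USHRT_MAX - 1);
			info->pagecnt_bias = USHRT_MAX;
		}
		return true;
	}

The suggested put_page hook would have to read pagecnt_bias from 
outside the driver, and that is exactly the asynchronous read this 
pattern cannot tolerate.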

Of course, I am aware that page_pool is optimized for XDP use cases. 
But drivers prefer a single flow for their page-allocation management, 
rather than separate allocation/free methods depending on whether an 
XDP program is loaded, so the performance of the non-XDP flow also 
matters.
I know you're not ignoring this; the fact that you're adding 
compatibility for the elevated refcount usage is a key step in this 
direction.

Another key benefit of page_pool is that it provides a netdev-optimized 
API that can replace the page-allocation / DMA-mapping logic of the 
different drivers and consolidate it into one common shared unit.
This removes many LOCs from drivers, significantly improves 
modularity, and eases the support of new optimizations.
By improving the general non-XDP flow (packing several packets into a 
page), you encourage more and more drivers to make the transition.

We all want to further improve page_pool performance. The 
pageref_bias idea does not fit; that's fine.
We can still introduce an API for bulk page allocation, which would 
improve both the XDP and non-XDP flows.
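
Nothing like that exists yet, but purely as a strawman the interface 
could be as simple as the following (everything here is hypothetical):

	/* Hypothetical: fill an array of pages in one call, amortizing
	 * the pool/allocator locking (and, with PP_FLAG_DMA_MAP, the
	 * mapping setup) across the whole batch.  Returns the number
	 * of pages actually allocated.
	 */
	unsigned int page_pool_dev_alloc_pages_bulk(struct page_pool *pool,
						    struct page **pages,
						    unsigned int count);

A driver refill loop would then issue one call per batch instead of 
one per page, which is where both the XDP and non-XDP flows would see 
the win.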

Regards,
Tariq
