Message-ID: <20200221071255.GA863284@apalos.home>
Date:   Fri, 21 Feb 2020 09:12:55 +0200
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Randy Dunlap <rdunlap@...radead.org>
Cc:     brouer@...hat.com, davem@...emloft.net, netdev@...r.kernel.org,
        lorenzo@...nel.org, toke@...hat.com
Subject: Re: [PATCH net-next] net: page_pool: Add documentation for page_pool
 API

Hi Randy, 
On Thu, Feb 20, 2020 at 04:14:00PM -0800, Randy Dunlap wrote:
> Hi again Ilias,
> 
> On 2/20/20 10:25 AM, Ilias Apalodimas wrote:
> > Add documentation explaining the basic functionality and design
> > principles of the API
> > 
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> > ---
> >  Documentation/networking/page_pool.rst | 159 +++++++++++++++++++++++++
> >  1 file changed, 159 insertions(+)
> >  create mode 100644 Documentation/networking/page_pool.rst
> > 
> > diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
> > new file mode 100644
> > index 000000000000..098d339ef272
> > --- /dev/null
> > +++ b/Documentation/networking/page_pool.rst
> > @@ -0,0 +1,159 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +=============
> > +Page Pool API
> > +=============
> > +
> > +The page_pool allocator is optimized for the XDP mode that uses one frame
> > +per page, but it can fall back to the regular page allocator APIs.
> > +
> > +Basic use involve replacing alloc_pages() calls with the
> 
>              involves
> 

Ok

> > +page_pool_alloc_pages() call.  Drivers should use page_pool_dev_alloc_pages()
> > +in place of dev_alloc_pages().
> > +
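FWIW the conversion is mechanical; a minimal sketch (with 'pool' being
whatever page_pool_create() returned):

	/* before: page = dev_alloc_pages(0); */
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return -ENOMEM;
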
> ...
> 
> > +
> > +Architecture overview
> > +=====================
> > +
> > +.. code-block:: none
> > +
> ...
> 
> > +
> > +API interface
> > +=============
> > +The number of pools created **must** match the number of hardware queues
> > +unless hardware restrictions make that impossible. This would otherwise defeat the
> > +purpose of page_pool, which is to allocate pages fast from the cache without locking.
> > +This lockless guarantee naturally comes from running under a NAPI softirq.
> > +The protection doesn't strictly have to come from NAPI; any guarantee that
> > +allocating a page cannot race is enough.
> > +
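Maybe the doc could also note that drivers typically create the pools in a
per-queue loop. Roughly, with hypothetical 'priv' fields:

	/* one pool per RX queue, matching the rule above */
	for (i = 0; i < priv->num_rx_queues; i++) {
		priv->rxq[i].page_pool = page_pool_create(&pp_params);
		if (IS_ERR(priv->rxq[i].page_pool)) {
			err = PTR_ERR(priv->rxq[i].page_pool);
			goto err_free_pools;
		}
	}
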
> > +* page_pool_create(): Create a pool.
> > +    * flags:      PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
> > +    * order:      order^n pages on allocation
> 
> what is "n" above?
> My quick reading of mm/page_alloc.c suggests that order is the power of 2
> that should be used for the memory allocation... ???

Yes, this must change to 2^order.

> 
> > +    * pool_size:  size of the ptr_ring
> > +    * nid:        preferred NUMA node for allocation
> > +    * dev:        struct device. Used on DMA operations
> > +    * dma_dir:    DMA direction
> > +    * max_len:    max DMA sync memory size
> > +    * offset:     DMA address offset
> > +
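Since max_len/offset only matter with PP_FLAG_DMA_SYNC_DEV, perhaps a short
example for that flag too; the values below are made up:

	/* let the pool sync pages for the device before handing them out */
	pp_params.flags   = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
	pp_params.max_len = PAGE_SIZE;           /* sync at most one full page */
	pp_params.offset  = XDP_PACKET_HEADROOM; /* headroom the device never writes */
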
> ...
> 
> > +
> > +Coding examples
> > +===============
> > +
> > +Registration
> > +------------
> > +
> > +.. code-block:: c
> > +
> > +    /* Page pool registration */
> > +    struct page_pool_params pp_params = { 0 };
> > +    struct xdp_rxq_info xdp_rxq;
> > +    int err;
> > +
> > +    pp_params.order = 0;
> 
> so 0^n?

See above! (order 0 means 2^0 = 1 page per allocation)

> 
> > +    /* internal DMA mapping in page_pool */
> > +    pp_params.flags = PP_FLAG_DMA_MAP;
> > +    pp_params.pool_size = DESC_NUM;
> > +    pp_params.nid = NUMA_NO_NODE;
> > +    pp_params.dev = priv->dev;
> > +    pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
> > +    page_pool = page_pool_create(&pp_params);
> > +
> > +    err = xdp_rxq_info_reg(&xdp_rxq, ndev, 0);
> > +    if (err)
> > +        goto err_out;
> > +
> > +    err = xdp_rxq_info_reg_mem_model(&xdp_rxq, MEM_TYPE_PAGE_POOL, page_pool);
> > +    if (err)
> > +        goto err_out;
> > +
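One nit on my own example: page_pool_create() returns an ERR_PTR() on
failure, so it should really be checked before the xdp_rxq registration:

	page_pool = page_pool_create(&pp_params);
	if (IS_ERR(page_pool)) {
		err = PTR_ERR(page_pool);
		goto err_out;
	}
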
> > +NAPI poller
> > +-----------
> 
> thanks.

Thanks again for taking the time to review.
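
And since the poller section got trimmed from the quote, for the archives,
the RX path looks roughly like this (a sketch against the current API;
'desc' is hypothetical):

	/* RX refill from inside the driver's NAPI poll */
	struct page *page = page_pool_dev_alloc_pages(page_pool);

	if (!page)
		return -ENOMEM;

	/* valid because the pool was created with PP_FLAG_DMA_MAP */
	desc->addr = cpu_to_le64(page_pool_get_dma_addr(page));

	/* later, e.g. on XDP_DROP: recycle into the pool's lockless cache;
	 * allow_direct=true is only safe from within NAPI context
	 */
	page_pool_put_page(page_pool, page, true);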

> -- 
> ~Randy
> 

Cheers
/Ilias
