Message-ID: <0bfe362b-276d-21ad-24b9-67813c0cd50a@infradead.org>
Date:   Thu, 20 Feb 2020 16:14:00 -0800
From:   Randy Dunlap <rdunlap@...radead.org>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>, brouer@...hat.com,
        davem@...emloft.net, netdev@...r.kernel.org
Cc:     lorenzo@...nel.org, toke@...hat.com
Subject: Re: [PATCH net-next] net: page_pool: Add documentation for page_pool
 API

Hi again Ilias,

On 2/20/20 10:25 AM, Ilias Apalodimas wrote:
> Add documentation explaining the basic functionality and design
> principles of the API
> 
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> ---
>  Documentation/networking/page_pool.rst | 159 +++++++++++++++++++++++++
>  1 file changed, 159 insertions(+)
>  create mode 100644 Documentation/networking/page_pool.rst
> 
> diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
> new file mode 100644
> index 000000000000..098d339ef272
> --- /dev/null
> +++ b/Documentation/networking/page_pool.rst
> @@ -0,0 +1,159 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============
> +Page Pool API
> +=============
> +
> +The page_pool allocator is optimized for the XDP mode that uses one frame
> +per page, but it can fall back on the regular page allocator APIs.
> +
> +Basic use involve replacing alloc_pages() calls with the

             involves

> +page_pool_alloc_pages() call.  Drivers should use page_pool_dev_alloc_pages()
> +in place of dev_alloc_pages().
> +
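
It might be worth adding a tiny example right here so readers can see what
that replacement looks like in an RX refill path.  Just a sketch -- the ring
and descriptor names below are made up for illustration:

    /* RX refill: pool-backed page instead of dev_alloc_pages() */
    struct page *page;

    page = page_pool_dev_alloc_pages(rx_ring->page_pool);
    if (!page)
        return -ENOMEM;

    /* with PP_FLAG_DMA_MAP the pool has already mapped the page */
    rx_desc->dma_addr = page_pool_get_dma_addr(page) + rx_ring->headroom;
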
...

> +
> +Architecture overview
> +=====================
> +
> +.. code-block:: none
> +
...

> +
> +API interface
> +=============
> +The number of pools created **must** match the number of hardware queues
> +unless hardware restrictions make that impossible. This would otherwise
> +defeat the purpose of page pool, which is to allocate pages quickly from a
> +cache without locking. This lockless guarantee naturally comes from running
> +under a NAPI softirq. The protection doesn't strictly have to be NAPI; any
> +guarantee that allocating a page will cause no race conditions is enough.
> +
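
Since the "one pool per hardware queue" rule is the key design point, a short
per-queue creation loop might make it concrete.  Something like this sketch
(the driver structures are invented for illustration):

    /* one page_pool instance per RX queue, e.g. set up in ndo_open() */
    int i;

    for (i = 0; i < priv->num_rx_queues; i++) {
        /* pp_params filled in with the fields described below */
        priv->rx_ring[i].page_pool = page_pool_create(&pp_params);
        if (IS_ERR(priv->rx_ring[i].page_pool))
            return PTR_ERR(priv->rx_ring[i].page_pool);
    }
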
> +* page_pool_create(): Create a pool.
> +    * flags:      PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
> +    * order:      order^n pages on allocation

what is "n" above?
My quick reading of mm/page_alloc.c suggests that order is the power of 2
that should be used for the memory allocation... ???

> +    * pool_size:  size of the ptr_ring
> +    * nid:        preferred NUMA node for allocation
> +    * dev:        struct device. Used on DMA operations
> +    * dma_dir:    DMA direction
> +    * max_len:    max DMA sync memory size
> +    * offset:     DMA address offset
> +
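
The PP_FLAG_DMA_SYNC_DEV / max_len / offset trio could also use a tiny
example, since the registration code below only shows PP_FLAG_DMA_MAP.
A sketch, with purely illustrative values:

    /* let the pool sync recycled pages for the device before reuse */
    pp_params.flags   = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
    pp_params.max_len = PAGE_SIZE;              /* upper bound of the sync */
    pp_params.offset  = XDP_PACKET_HEADROOM;    /* skip reserved headroom */
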
...

> +
> +Coding examples
> +===============
> +
> +Registration
> +------------
> +
> +.. code-block:: c
> +
> +    /* Page pool registration */
> +    struct page_pool_params pp_params = { 0 };
> +    struct xdp_rxq_info xdp_rxq;
> +    int err;
> +
> +    pp_params.order = 0;

so 0^n?

> +    /* internal DMA mapping in page_pool */
> +    pp_params.flags = PP_FLAG_DMA_MAP;
> +    pp_params.pool_size = DESC_NUM;
> +    pp_params.nid = NUMA_NO_NODE;
> +    pp_params.dev = priv->dev;
> +    pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
> +    page_pool = page_pool_create(&pp_params);
> +
> +    err = xdp_rxq_info_reg(&xdp_rxq, ndev, 0);
> +    if (err)
> +        goto err_out;
> +
> +    err = xdp_rxq_info_reg_mem_model(&xdp_rxq, MEM_TYPE_PAGE_POOL, page_pool);
> +    if (err)
> +        goto err_out;
> +
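
One more small suggestion: the dma_dir line is the only non-obvious choice in
this example, so a comment on it would help.  Something like (my wording,
adjust as you like):

    /* XDP_TX sends frames out of the same pages the device received into,
     * so the device needs both read and write access in that case.
     */
    pp_params.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
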
> +NAPI poller
> +-----------

thanks.
-- 
~Randy
