Message-ID: <20241209192959.42425232@kernel.org>
Date: Mon, 9 Dec 2024 19:29:59 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: David Wei <dw@...idwei.uk>
Cc: io-uring@...r.kernel.org, netdev@...r.kernel.org, Jens Axboe
<axboe@...nel.dk>, Pavel Begunkov <asml.silence@...il.com>, Paolo Abeni
<pabeni@...hat.com>, "David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Jesper Dangaard Brouer <hawk@...nel.org>, David
Ahern <dsahern@...nel.org>, Mina Almasry <almasrymina@...gle.com>,
Stanislav Fomichev <stfomichev@...il.com>, Joe Damato <jdamato@...tly.com>,
Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [PATCH net-next v8 06/17] net: page pool: add helper creating
area from pages
On Wed, 4 Dec 2024 09:21:45 -0800 David Wei wrote:
> From: Pavel Begunkov <asml.silence@...il.com>
>
> Add a helper that takes an array of pages and initialises passed in
> memory provider's area with them, where each net_iov takes one page.
> It's also responsible for setting up dma mappings.
>
> We keep it in page_pool.c not to leak netmem details to outside
> providers like io_uring, which don't have access to netmem_priv.h
> and other private helpers.
User space will likely give us hugepages. Feels a bit wasteful to map
and manage them 4k at a time. But okay, we can optimize this later.
> diff --git a/include/net/page_pool/memory_provider.h b/include/net/page_pool/memory_provider.h
> new file mode 100644
> index 000000000000..83d7eec0058d
> --- /dev/null
> +++ b/include/net/page_pool/memory_provider.h
> @@ -0,0 +1,10 @@
nit: missing SPDX
> +#ifndef _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +#define _NET_PAGE_POOL_MEMORY_PROVIDER_H
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> + struct net_iov_area *area,
> + struct page **pages);
> +void page_pool_mp_release_area(struct page_pool *pool,
> + struct net_iov_area *area);
> +
> +#endif
> +static void page_pool_release_page_dma(struct page_pool *pool,
> + netmem_ref netmem)
> +{
> + __page_pool_release_page_dma(pool, netmem);
I'm guessing this wrapper exists to save text, since
__page_pool_release_page_dma() is always_inline?
Maybe add a comment saying so?
> +}
> +
> +int page_pool_mp_init_paged_area(struct page_pool *pool,
> + struct net_iov_area *area,
> + struct page **pages)
> +{
> + struct net_iov *niov;
> + netmem_ref netmem;
> + int i, ret = 0;
> +
> + if (!pool->dma_map)
> + return -EOPNOTSUPP;
> +
> + for (i = 0; i < area->num_niovs; i++) {
> + niov = &area->niovs[i];
> + netmem = net_iov_to_netmem(niov);
> +
> + page_pool_set_pp_info(pool, netmem);
Maybe move setting the pp info down, after the mapping has succeeded.
Technically it's not a bug to leave it set on a netmem, but it would be
on a page struct.
> + if (!page_pool_dma_map_page(pool, netmem, pages[i])) {
> + ret = -EINVAL;
> + goto err_unmap_dma;
> + }
> + }
> + return 0;
> +
> +err_unmap_dma:
> + while (i--) {
> + netmem = net_iov_to_netmem(&area->niovs[i]);
> + page_pool_release_page_dma(pool, netmem);
> + }
> + return ret;
> +}
> +
> +void page_pool_mp_release_area(struct page_pool *pool,
> + struct net_iov_area *area)
> +{
> + int i;
> +
> + if (!pool->dma_map)
> + return;
> +
> + for (i = 0; i < area->num_niovs; i++) {
> + struct net_iov *niov = &area->niovs[i];
> +
> + page_pool_release_page_dma(pool, net_iov_to_netmem(niov));
> + }
> +}