Message-ID: <20241220143854.7dce75e4@kernel.org>
Date: Fri, 20 Dec 2024 14:38:54 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: David Wei <dw@...idwei.uk>
Cc: io-uring@...r.kernel.org, netdev@...r.kernel.org, Jens Axboe
<axboe@...nel.dk>, Pavel Begunkov <asml.silence@...il.com>, Paolo Abeni
<pabeni@...hat.com>, "David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Jesper Dangaard Brouer <hawk@...nel.org>, David
Ahern <dsahern@...nel.org>, Mina Almasry <almasrymina@...gle.com>,
Stanislav Fomichev <stfomichev@...il.com>, Joe Damato <jdamato@...tly.com>,
Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [PATCH net-next v9 14/20] io_uring/zcrx: dma-map area for the
device

On Tue, 17 Dec 2024 16:37:40 -0800 David Wei wrote:
> diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
> index e4be227d3ad6..13d810a28ed6 100644
> --- a/include/uapi/linux/netdev.h
> +++ b/include/uapi/linux/netdev.h
The top of this file says:

/* Do not edit directly, auto-generated from: */
/* Documentation/netlink/specs/netdev.yaml */

IOW the change needs to go into the YAML spec, and the header should be
regenerated from it, rather than edited by hand.
> +static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq)
> +{
> + struct io_zcrx_area *area = ifq->area;
> +
> + spin_lock_bh(&area->freelist_lock);
> + while (area->free_count && pp->alloc.count < PP_ALLOC_CACHE_REFILL) {
> + struct net_iov *niov = __io_zcrx_get_free_niov(area);
> + netmem_ref netmem = net_iov_to_netmem(niov);
> +
> + page_pool_set_pp_info(pp, netmem);
> + page_pool_mp_return_in_cache(pp, netmem);
>
> + pp->pages_state_hold_cnt++;
But the kdoc on page_pool_mp_return_in_cache() says:

+ * Return already allocated and accounted netmem to the page pool's allocation
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

while here the accounting (page_pool_set_pp_info() and the hold count bump)
happens around the call, so one of the two needs updating.
> + trace_page_pool_state_hold(pp, netmem, pp->pages_state_hold_cnt);
> + }
> + spin_unlock_bh(&area->freelist_lock);
> +}
> + if (page_pool_unref_netmem(netmem, 1) == 0)
page_pool_unref_and_test()
> + io_zcrx_return_niov_freelist(netmem_to_net_iov(netmem));
> + return false;
> }