Message-ID: <20240911175153.1a84a28b@kernel.org>
Date: Wed, 11 Sep 2024 17:51:53 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Praveen Kaligineedi <pkaligineedi@...gle.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, willemb@...gle.com, jeroendb@...gle.com,
shailend@...gle.com, hramamurthy@...gle.com, ziweixiao@...gle.com
Subject: Re: [PATCH net-next 2/2] gve: adopt page pool for DQ RDA mode
On Tue, 10 Sep 2024 10:53:15 -0700 Praveen Kaligineedi wrote:
> +static int gve_alloc_from_page_pool(struct gve_rx_ring *rx, struct gve_rx_buf_state_dqo *buf_state)
> +{
> +	struct gve_priv *priv = rx->gve;
> +	struct page *page;
> +
> +	buf_state->page_info.buf_size = priv->data_buffer_size_dqo;
> +	page = page_pool_alloc(rx->dqo.page_pool, &buf_state->page_info.page_offset,
> +			       &buf_state->page_info.buf_size, GFP_ATOMIC);
> +
> +	if (!page) {
> +		priv->page_alloc_fail++;
Is this counter global to the device? No locking or atomicity needed?
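If it is global and can be bumped from multiple rings, one option is to
make it atomic (a rough sketch only; it assumes page_alloc_fail in
struct gve_priv can be converted to atomic64_t — per-ring counters
summed at stats-read time would also work):

	/* assumes page_alloc_fail becomes atomic64_t in struct gve_priv */
	atomic64_inc(&priv->page_alloc_fail);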
> +struct page_pool *gve_rx_create_page_pool(struct gve_priv *priv, struct gve_rx_ring *rx)
> +{
> +	struct page_pool_params pp = {
> +		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> +		.order = 0,
> +		.pool_size = GVE_PAGE_POOL_SIZE_MULTIPLIER * priv->rx_desc_cnt,
> +		.dev = &priv->pdev->dev,
> +		.netdev = priv->dev,
> +		.max_len = PAGE_SIZE,
> +		.dma_dir = DMA_FROM_DEVICE,
Can the allocation run from process context in parallel with the NAPI
that uses the pool? It's uncommon for drivers to do that. If not, you
can set the NAPI pointer here and get lock-free recycling.
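Roughly like this (a sketch only; whether gve can guarantee the
no-concurrent-alloc condition, and how to reach the ring's napi_struct,
are assumptions here — gve_rx_napi() below is a hypothetical helper):

		/* lock-free recycling; only safe if allocation never runs
		 * concurrently with the NAPI that owns this pool
		 */
		.napi = gve_rx_napi(priv, rx),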
> +	};
> +
> +	return page_pool_create(&pp);
> +}
Could you make sure to wrap the new code at 80 chars?
./scripts/checkpatch.pl --strict --max-line-length=80
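For instance, the allocator's declaration above fits in 80 columns as:

static int gve_alloc_from_page_pool(struct gve_rx_ring *rx,
				    struct gve_rx_buf_state_dqo *buf_state)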
--
pw-bot: cr