Message-ID: <bde7b5c39b19cbc6e32a92b94e731d26a8d47922.camel@redhat.com>
Date: Mon, 06 May 2024 11:20:55 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: FUJITA Tomonori <fujita.tomonori@...il.com>, netdev@...r.kernel.org
Cc: andrew@...n.ch, kuba@...nel.org, jiri@...nulli.us, horms@...nel.org
Subject: Re: [PATCH net-next v4 4/6] net: tn40xx: add basic Rx handling
On Thu, 2024-05-02 at 08:05 +0900, FUJITA Tomonori wrote:
> +static struct tn40_rx_page *tn40_rx_page_alloc(struct tn40_priv *priv)
> +{
> + struct tn40_rx_page *rx_page = &priv->rx_page_table.rx_pages;
> + int page_size = priv->rx_page_table.page_size;
> + struct page *page;
> + gfp_t gfp_mask;
> + dma_addr_t dma;
> +
> + gfp_mask = GFP_ATOMIC | __GFP_NOWARN;
> + if (page_size > PAGE_SIZE)
> + gfp_mask |= __GFP_COMP;
> +
> + page = alloc_pages(gfp_mask, get_order(page_size));
> + if (likely(page)) {
Note that this allocation scheme can be problematic when the NIC
receives traffic from many different streams/connections: a single
packet can keep a full order-4 page in use, leading to overall memory
usage much greater than what truesize will report.
See commit 3226b158e67c. Here the under-estimation could fare worse.
Drivers usually use order-0 or order-1 pages.
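For illustration only, a minimal sketch of what an order-0 variant
could look like (hypothetical helper name, and assuming priv->pdev is
the PCI device as elsewhere in the series); with one page per Rx
buffer a held packet pins at most PAGE_SIZE, so truesize stays close
to the real memory footprint:

/* Hypothetical order-0 Rx buffer allocation, sketch only. */
static struct page *tn40_rx_page_alloc_order0(struct tn40_priv *priv,
					      dma_addr_t *dma)
{
	struct page *page;

	/* One order-0 page per buffer, no __GFP_COMP needed. */
	page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
	if (unlikely(!page))
		return NULL;

	*dma = dma_map_page(&priv->pdev->dev, page, 0, PAGE_SIZE,
			    DMA_FROM_DEVICE);
	if (unlikely(dma_mapping_error(&priv->pdev->dev, *dma))) {
		__free_page(page);
		return NULL;
	}
	return page;
}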
[...]
> +static void tn40_recycle_skb(struct tn40_priv *priv, struct tn40_rxd_desc *rxdd)
> +{
Minor nit: the function name is confusing, as it recycles an
internal buffer, not an skbuff.
Cheers,
Paolo