Message-ID: <20250414154716.67412a8d@kernel.org>
Date: Mon, 14 Apr 2025 15:47:16 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Taehee Yoo <ap420073@...il.com>
Cc: davem@...emloft.net, pabeni@...hat.com, edumazet@...gle.com,
andrew+netdev@...n.ch, horms@...nel.org, michael.chan@...adcom.com,
pavan.chebbi@...adcom.com, hawk@...nel.org, ilias.apalodimas@...aro.org,
netdev@...r.kernel.org, dw@...idwei.uk, kuniyu@...zon.com, sdf@...ichev.me,
ahmed.zaki@...el.com, aleksander.lobakin@...el.com,
hongguang.gao@...adcom.com, Mina Almasry <almasrymina@...gle.com>
Subject: Re: [PATCH v2 net-next] eth: bnxt: add support rx side device
memory TCP
On Thu, 10 Apr 2025 07:43:51 +0000 Taehee Yoo wrote:
> @@ -1251,27 +1269,41 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
> RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
>
> cons_rx_buf = &rxr->rx_agg_ring[cons];
> - skb_frag_fill_page_desc(frag, cons_rx_buf->page,
> - cons_rx_buf->offset, frag_len);
> - shinfo->nr_frags = i + 1;
> + if (skb) {
> + skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
> + cons_rx_buf->offset,
> + frag_len, BNXT_RX_PAGE_SIZE);
I thought BNXT_RX_PAGE_SIZE is the max page size supported by HW.
We currently only allocate order 0 pages/netmems, so the truesize
calculation should use PAGE_SIZE, AFAIU?
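IOW for the truesize argument something like this (untested, assuming we
stick with order-0 allocations here):

	skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
			       cons_rx_buf->offset,
			       frag_len, PAGE_SIZE);

and the skb->truesize adjustment in the error path below would shrink by
PAGE_SIZE as well.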
> + } else {
> + skb_frag_t *frag = &shinfo->frags[i];
> +
> + skb_frag_fill_netmem_desc(frag, cons_rx_buf->netmem,
> + cons_rx_buf->offset,
> + frag_len);
> + shinfo->nr_frags = i + 1;
> + }
> __clear_bit(cons, rxr->rx_agg_bmap);
>
> - /* It is possible for bnxt_alloc_rx_page() to allocate
> + /* It is possible for bnxt_alloc_rx_netmem() to allocate
> * a sw_prod index that equals the cons index, so we
> * need to clear the cons entry now.
> */
> - mapping = cons_rx_buf->mapping;
> - page = cons_rx_buf->page;
> - cons_rx_buf->page = NULL;
> + netmem = cons_rx_buf->netmem;
> + cons_rx_buf->netmem = 0;
>
> - if (xdp && page_is_pfmemalloc(page))
> + if (xdp && netmem_is_pfmemalloc(netmem))
> xdp_buff_set_frag_pfmemalloc(xdp);
>
> - if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_ATOMIC) != 0) {
> + if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_ATOMIC) != 0) {
> + if (skb) {
> + skb->len -= frag_len;
> + skb->data_len -= frag_len;
> + skb->truesize -= BNXT_RX_PAGE_SIZE;
and here.
> + }
> +bool dev_is_mp_channel(struct net_device *dev, int i)
> +{
> + return !!dev->_rx[i].mp_params.mp_priv;
> +}
> +EXPORT_SYMBOL(dev_is_mp_channel);
Sorry for a late comment but since you only use this helper after
allocating the payload pool -- do you think we could make the helper
operate on a page pool rather than the device? I mean something like:
bool page_pool_is_unreadable(struct page_pool *pp)
{
	return !!pp->mp_ops;
}
? I could be wrong but I'm worried that we may migrate the mp
settings to dev->cfg at some point, and then this helper will
be ambiguous (current vs pending settings).
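FWIW with a pool-level helper the driver would check the pool it has just
set up, roughly (sketch only, "needs_head_pool" is a stand-in for whatever
your patch keys off dev_is_mp_channel() today):

	/* payload netmems may be unreadable, headers need their own pool */
	if (page_pool_is_unreadable(rxr->page_pool))
		needs_head_pool = true;

so there's no need to reach back to the netdev and queue index.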
The dev_is_mp_channel() -> page_pool_is_unreadable() refactor is up to
you, but I think the truesize needs to be fixed.
--
pw-bot: cr