Message-ID: <CAMArcTVWM8uY3-pmn4Qoy4rujjxrEQXJoF2C9bAXNH9_OJFZMA@mail.gmail.com>
Date: Tue, 15 Apr 2025 12:29:07 +0900
From: Taehee Yoo <ap420073@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, pabeni@...hat.com, edumazet@...gle.com,
andrew+netdev@...n.ch, horms@...nel.org, michael.chan@...adcom.com,
pavan.chebbi@...adcom.com, hawk@...nel.org, ilias.apalodimas@...aro.org,
netdev@...r.kernel.org, dw@...idwei.uk, kuniyu@...zon.com, sdf@...ichev.me,
ahmed.zaki@...el.com, aleksander.lobakin@...el.com,
hongguang.gao@...adcom.com, Mina Almasry <almasrymina@...gle.com>
Subject: Re: [PATCH v2 net-next] eth: bnxt: add support rx side device memory TCP
On Tue, Apr 15, 2025 at 7:47 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
Hi Jakub,
Thanks a lot for your review!
> On Thu, 10 Apr 2025 07:43:51 +0000 Taehee Yoo wrote:
> > @@ -1251,27 +1269,41 @@ static u32 __bnxt_rx_agg_pages(struct bnxt *bp,
> > RX_AGG_CMP_LEN) >> RX_AGG_CMP_LEN_SHIFT;
> >
> > cons_rx_buf = &rxr->rx_agg_ring[cons];
> > - skb_frag_fill_page_desc(frag, cons_rx_buf->page,
> > - cons_rx_buf->offset, frag_len);
> > - shinfo->nr_frags = i + 1;
> > + if (skb) {
> > + skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
> > + cons_rx_buf->offset,
> > + frag_len, BNXT_RX_PAGE_SIZE);
>
> I thought BNXT_RX_PAGE_SIZE is the max page size supported by HW.
> We currently only allocate order 0 pages/netmems, so the truesize
> calculation should use PAGE_SIZE, AFAIU?
Thanks for catching this! I will fix this in the v3 patch.
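For v3 I'm thinking of something along these lines (just a sketch, with
PAGE_SIZE as the truesize since we only allocate order-0 netmems for now):

		if (skb) {
			/* order-0 netmem, so account PAGE_SIZE of truesize */
			skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
					       cons_rx_buf->offset,
					       frag_len, PAGE_SIZE);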
>
> > + } else {
> > + skb_frag_t *frag = &shinfo->frags[i];
> > +
> > + skb_frag_fill_netmem_desc(frag, cons_rx_buf->netmem,
> > + cons_rx_buf->offset,
> > + frag_len);
> > + shinfo->nr_frags = i + 1;
> > + }
> > __clear_bit(cons, rxr->rx_agg_bmap);
> >
> > - /* It is possible for bnxt_alloc_rx_page() to allocate
> > + /* It is possible for bnxt_alloc_rx_netmem() to allocate
> > * a sw_prod index that equals the cons index, so we
> > * need to clear the cons entry now.
> > */
> > - mapping = cons_rx_buf->mapping;
> > - page = cons_rx_buf->page;
> > - cons_rx_buf->page = NULL;
> > + netmem = cons_rx_buf->netmem;
> > + cons_rx_buf->netmem = 0;
> >
> > - if (xdp && page_is_pfmemalloc(page))
> > + if (xdp && netmem_is_pfmemalloc(netmem))
> > xdp_buff_set_frag_pfmemalloc(xdp);
> >
> > - if (bnxt_alloc_rx_page(bp, rxr, prod, GFP_ATOMIC) != 0) {
> > + if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_ATOMIC) != 0) {
> > + if (skb) {
> > + skb->len -= frag_len;
> > + skb->data_len -= frag_len;
> > + skb->truesize -= BNXT_RX_PAGE_SIZE;
>
> and here.
I will fix this.
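The unwind in the allocation-failure path would then be, roughly:

		if (skb) {
			/* back out the frag we just added, PAGE_SIZE of truesize */
			skb->len -= frag_len;
			skb->data_len -= frag_len;
			skb->truesize -= PAGE_SIZE;
		}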
>
> > + }
>
> > +bool dev_is_mp_channel(struct net_device *dev, int i)
> > +{
> > + return !!dev->_rx[i].mp_params.mp_priv;
> > +}
> > +EXPORT_SYMBOL(dev_is_mp_channel);
>
> Sorry for a late comment but since you only use this helper after
> allocating the payload pool -- do you think we could make the helper
> operate on a page pool rather than device? I mean something like:
>
> bool page_pool_is_unreadable(pp)
> {
> return !!pp->mp_ops;
> }
>
> ? I could be wrong but I'm worried that we may migrate the mp
> settings to dev->cfg at some point, and then this helper will
> be ambiguous (current vs pending settings).
I agree with you.
This helper is ambiguous as a way to check mp_priv.
Since mp_priv is page_pool metadata, a page_pool-based helper makes
more sense than a device-based one.
I will change it in the v3 patch.
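Roughly what I have in mind for v3, following your suggestion (the exact
header location and the bnxt call site below are only illustrative):

	/* e.g. in include/net/page_pool/helpers.h */
	static inline bool page_pool_is_unreadable(struct page_pool *pool)
	{
		return !!pool->mp_ops;
	}

and then in the driver, after the payload pool has been allocated,
something like:

	if (page_pool_is_unreadable(rxr->page_pool)) {
		/* queue is backed by unreadable (devmem) netmem */
	}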
>
> The dev_is_mp_channel() -> page_pool_is_unreadable() refactor is up to
> you, but I think the truesize needs to be fixed.
Thanks a lot!
Taehee Yoo
> --
> pw-bot: cr