Message-ID: <CAKhg4tJDgaVeMp437q1BHuE3aZo2NU4JnOhaQEXepJuQhPnTZQ@mail.gmail.com>
Date: Thu, 14 Dec 2023 10:26:47 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: Jakub Kicinski <kuba@...nel.org>, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, hawk@...nel.org, linyunsheng@...wei.com,
netdev@...r.kernel.org, linux-mm@...ck.org, jasowang@...hat.com,
almasrymina@...gle.com
Subject: Re: [PATCH net-next v8 4/4] skbuff: Optimization of SKB coalescing
for page pool
On Wed, Dec 13, 2023 at 3:10 PM Ilias Apalodimas
<ilias.apalodimas@...aro.org> wrote:
>
> Hi Jakub,
>
> On Mon, 11 Dec 2023 at 22:14, Jakub Kicinski <kuba@...nel.org> wrote:
> >
> > On Mon, 11 Dec 2023 09:46:55 +0200 Ilias Apalodimas wrote:
> > > As I said in the past, the patch looks correct. I don't like the fact
> > > that more pp internals creep into the default network stack, but
> > > perhaps this is fine given the broader adoption?
> > > Jakub any thoughts/objections?
> >
> > Now that you asked... the helper does seem to be in sort of
> > an in-between state of being skb specific.
> >
> > What worries me is that this:
> >
> > +/**
> > + * skb_pp_frag_ref() - Increase fragment reference count of a page
> > + * @page: page of the fragment on which to increase a reference
> > + *
> > + * Increase fragment reference count (pp_ref_count) on a page, but if it is
> > + * not a page pool page, fall back to increasing a reference (_refcount) on a
> > + * normal page.
> > + */
> > +static void skb_pp_frag_ref(struct page *page)
> > +{
> > +        struct page *head_page = compound_head(page);
> > +
> > +        if (likely(is_pp_page(head_page)))
> > +                page_pool_ref_page(head_page);
> > +        else
> > +                page_ref_inc(head_page);
> > +}
> >
> > doesn't even document that the caller must make sure that the skb
> > which owns this page is marked for pp recycling. The caller added
> > by this patch does that, but we should indicate somewhere that doing
> > skb_pp_frag_ref() for a frag in a non-pp-recycling skb is not correct.
>
> Correct
>
> >
> > We can either lean in the direction of making it less skb specific,
> > put the code in page_pool.c / helpers.h and make it clear that the
> > caller has to be careful.
> > Or we make it more skb specific, take a skb pointer as arg, and also
> > look at its recycling marking..
> > or just improve the kdoc.
>
> I've mentioned this in the past, but I generally try to prevent people
> from shooting themselves in the foot when creating APIs. Unless
> there's a proven performance hit, I'd move the pp_recycle check into
> skb_pp_frag_ref().
>
/**
* skb_pp_frag_ref() - Increase fragment references of a page pool aware skb
* @skb: page pool aware skb
*
* Increase the fragment reference count (pp_ref_count) of a skb. This is
* intended to gain fragment references only for page pool aware skbs,
* i.e. when skb->pp_recycle is true, and not for fragments in a
* non-pp-recycling skb. It has a fallback to increase references on normal
* pages, as page pool aware skbs may also have normal page fragments.
*/
Sure. Below is a snippet of the skb_pp_frag_ref implementation, which
now takes an skb as its argument. The loop that iterates over the frags
has been moved inside the function to avoid checking skb->pp_recycle
each time a frag reference is taken (though the compiler may already
optimize some of that away). If there is no objection, it will be
included in v10. Thanks!
static int skb_pp_frag_ref(struct sk_buff *skb)
{
        struct skb_shared_info *shinfo;
        struct page *head_page;
        int i;

        if (!skb->pp_recycle)
                return -EINVAL;

        shinfo = skb_shinfo(skb);

        for (i = 0; i < shinfo->nr_frags; i++) {
                head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
                if (likely(is_pp_page(head_page)))
                        page_pool_ref_page(head_page);
                else
                        page_ref_inc(head_page);
        }
        return 0;
}
/* if the skb is not cloned this does nothing
 * since we set nr_frags to 0.
 */
if (skb_pp_frag_ref(from)) {
        for (i = 0; i < from_shinfo->nr_frags; i++)
                __skb_frag_ref(&from_shinfo->frags[i]);
}
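
For reference, the is_pp_page() check used above boils down to a
pp_magic signature test. A minimal sketch, modelled on the existing
helper in net/core/skbuff.c and shown only for context (the in-tree
definition is authoritative):

/* A page handed out by the page pool has PP_SIGNATURE OR'ed into
 * page->pp_magic; bits 0-1 are masked off since they may carry the
 * compound-head/pfmemalloc markers.
 */
static bool is_pp_page(struct page *page)
{
        return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
}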
> Thanks
> /Ilias