Message-ID: <YkGjxox4gsJrWvJT@myrica>
Date: Mon, 28 Mar 2022 13:02:14 +0100
From: Jean-Philippe Brucker <jean-philippe@...aro.org>
To: Alexander Duyck <alexanderduyck@...com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"hawk@...nel.org" <hawk@...nel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net] skbuff: disable coalescing for page_pool recycling
On Fri, Mar 25, 2022 at 02:50:46AM +0000, Alexander Duyck wrote:
> > > The problem is here: both SKB1 and SKB2 point to PAGE2 but SKB1 does
> > > not actually hold a reference to PAGE2.
> >
> > It seems SKB1 *does* hold a reference to PAGE2: since skb_cloned(SKB3)
> > is true, __skb_frag_ref() is called, and it increments page->_refcount
> > instead of pp_frag_count because it does not handle the page pool case:
> >
> > [link removed by mail filter: ...-rc1/source/net/core/skbuff.c#L5308]
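For reference, __skb_frag_ref() around this kernel version only takes a
plain page reference and knows nothing about pp_frag_count (quoting
roughly from memory, so the exact source may differ slightly):

	static inline void __skb_frag_ref(skb_frag_t *frag)
	{
		/* plain page reference, no pp_frag_count accounting */
		get_page(skb_frag_page(frag));
	}

so any reference it takes has to be released with put_page(), not through
the page_pool.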
>
> I'm confused here as well. I don't see a path where you can take
> ownership of the page without taking a reference.
>
> Specifically, skb_head_is_locked() won't let you steal the head if the
> skb is cloned. And then for the frags, an additional reference is taken
> on each of them if the skb is cloned.
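Right. For context, the part of skb_try_coalesce() being discussed is
roughly the following (paraphrased from memory, not an exact copy of the
source): when 'from' is cloned it keeps its nr_frags, and each frag then
gets an extra plain page reference:

	if (!skb_cloned(from))
		from_shinfo->nr_frags = 0;

	/* if the skb is not cloned this does nothing
	 * since we set nr_frags to 0.
	 */
	for (i = 0; i < from_shinfo->nr_frags; i++)
		__skb_frag_ref(&from_shinfo->frags[i]);

So the extra references are taken, they just end up as plain page
references on pages that the page_pool still tracks.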
>
> > > Without coalescing, when
> > > releasing both SKB2 and SKB3, a single reference to PAGE2 would be
> > > dropped. Now when releasing SKB1 and SKB2, two references to PAGE2
> > > will be dropped, resulting in underflow.
> > >
> > > (3c) Drop SKB2:
> > >
> > > af_packet_rcv(SKB2)
> > > consume_skb(SKB2)
> > > skb_release_data(SKB2) // drops second dataref
> > > page_pool_return_skb_page(PAGE2) // drops one pp_frag_count
> > >
> > >   SKB1 _____ PAGE1
> > >        \____
> > >             PAGE2
> > >            /
> > >   RX_BD3 __/
> > >
> > > (4) Userspace calls recvmsg()
> > > Copies SKB1 and releases it. Since SKB3 was coalesced with SKB1, we
> > > release the SKB3 page as well:
> > >
> > > tcp_eat_recv_skb(SKB1)
> > > skb_release_data(SKB1)
> > > page_pool_return_skb_page(PAGE1)
> > > page_pool_return_skb_page(PAGE2) // drops second pp_frag_count
> > >
> > > (5) PAGE2 is freed, but the third RX descriptor was still using it!
> > > In our case this causes IOMMU faults, but it would silently corrupt
> > > memory if the IOMMU was disabled.
>
> I think I see the problem. It is when you get into steps 4 and 5 that
> you are actually hitting the issue. When you coalesced the page you
> ended up switching the page from a page pool page to a
> reference-counted page, but it is being stored in a page pool skb.
> That is the issue. Basically if the skb is a pp_recycle skb we should
> be incrementing the frag count, not the reference count.
>
> So essentially the logic should be that if to->pp_recycle is set but
> from is cloned then you need to return false. The problem isn't that
> they are both pp_recycle skbs, it is that the from was cloned and we
> are trying to merge that into a pp_recycle skb by adding to the
> reference count of the pages.
I agree with this: the problem is switching from a page_pool frag refcount
to a plain page refcount. I suppose we could change __skb_frag_ref() to
increment pp_frag_count, but that's probably best left as a future
improvement; I don't want to break more than I fix here. I'll send a v2
with a check on (cloned(from) && from->pp_recycle).
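Roughly this, in skb_try_coalesce(), on top of the existing
to->pp_recycle != from->pp_recycle check (just a sketch of the idea, the
exact form in v2 may differ):

	/* Coalescing a cloned pp_recycle skb would turn its page_pool
	 * frag references into plain page references while the pages are
	 * still owned by the page_pool, which is what breaks the
	 * accounting above, so refuse to coalesce in that case.
	 */
	if (to->pp_recycle != from->pp_recycle ||
	    (from->pp_recycle && skb_cloned(from)))
		return false;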
Thanks,
Jean
>
> > > A proper implementation would probably take another reference from the
> > > page_pool at step (3b), but that seems too complicated for a fix. Keep
> > > it simple for now, prevent coalescing for page_pool users.