Message-ID: <YOcKASZ9Bp0/cz1d@enceladus>
Date: Thu, 8 Jul 2021 17:21:53 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, linuxarm@...neuler.org,
yisen.zhuang@...wei.com, Salil Mehta <salil.mehta@...wei.com>,
thomas.petazzoni@...tlin.com, Marcin Wojtas <mw@...ihalf.com>,
Russell King - ARM Linux <linux@...linux.org.uk>,
hawk@...nel.org, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>, fenghua.yu@...el.com,
guro@...com, peterx@...hat.com, Feng Tang <feng.tang@...el.com>,
Jason Gunthorpe <jgg@...pe.ca>, mcroce@...rosoft.com,
Hugh Dickins <hughd@...gle.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexander Lobakin <alobakin@...me>,
Willem de Bruijn <willemb@...gle.com>, wenxu@...oud.cn,
cong.wang@...edance.com, Kevin Hao <haokexin@...il.com>,
nogikh@...gle.com, Marco Elver <elver@...gle.com>,
Netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support
based on elevated refcnt
> > > >
[...]
> > > > The above expectation is based on the assumption that the last
> > > > user will always call page_pool_put_full_page() in order to do
> > > > the recycling or the resource cleanup (DMA unmapping, etc.).
> > > >
> > > > As skb_free_head() and skb_release_data() both check
> > > > skb->pp_recycle before calling page_pool_put_full_page() if
> > > > needed, I think we are safe for most cases. The one case I am not
> > > > so sure about is rx zero-copy, which seems to also bump up the
> > > > refcnt before mapping the page to user space. We might need to
> > > > ensure rx zero-copy is not the last user of the page, or if it is
> > > > the last user, make sure it calls page_pool_put_full_page() too.
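
(For anyone following along, the check Yunsheng refers to looks roughly
like this -- paraphrasing net/core/skbuff.c from memory, so the exact
code may differ:

static bool skb_pp_recycle(struct sk_buff *skb, void *data)
{
        if (!IS_ENABLED(CONFIG_PAGE_POOL) || !skb->pp_recycle)
                return false;
        /* hand the page back to its pool, or unmap and free it */
        return page_pool_return_skb_page(virt_to_page(data));
}

skb_free_head() calls this for the head frag and skb_release_data()
calls it for every frag before falling back to the normal freeing
path.)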
> > >
> > > Yes, but the skb->pp_recycle value is per skb, not per page. So my
> > > concern is that carrying around that value can be problematic, as
> > > there are a number of possible cases where the pages might be
> > > unintentionally recycled. All it would take is for a packet to get
> > > cloned a few times and for somebody to start using
> > > pskb_expand_head, and you would have multiple cases, possibly
> > > simultaneously, of entities trying to free the page. I just worry
> > > it opens us up to a number of possible races.
> >
> > Maybe I missed something, but I thought cloned SKBs would never
> > trigger the recycling path, since they are protected by the atomic
> > dataref check in skb_release_data(). What am I missing?
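
(To spell out the check I have in mind -- again roughly, from memory:

static void skb_release_data(struct sk_buff *skb)
{
        struct skb_shared_info *shinfo = skb_shinfo(skb);

        /* Clones share the data; only the caller dropping the last
         * dataref gets past this point and actually frees or recycles
         * the frags and the head.
         */
        if (skb->cloned &&
            atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
                              &shinfo->dataref))
                return;
        ...
}
)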
>
> Are you talking about the head frag? Normally a clone wouldn't cause
> an issue because the head isn't changed. In the case of the head_frag
> we should be safe, since pskb_expand_head will just kmalloc the new
> head and clear head_frag, so it won't trigger
> page_pool_return_skb_page on the head_frag since the dataref just goes
> from 2 to 1.
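
(For reference, pskb_expand_head() does roughly the following for the
head -- paraphrasing from memory:

        data = kmalloc_reserve(size +
                               SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
                               gfp_mask, NUMA_NO_NODE, NULL);
        ...
        /* the new head is kmalloc'ed, so skb_free_head() will kfree()
         * it instead of going through the page_pool recycling path
         */
        skb->head_frag = 0;
)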
>
> The problem is that pskb_expand_head copies the page frags over and
> takes a reference on the pages. At that point you would have two skbs
> both pointing to the same set of pages, and each one ready to call
> page_pool_return_skb_page on the pages at any time and possibly racing
> with the other.
Ok, let me make sure I get the idea properly.
When pskb_expand_head is called, the new dataref will be 1, but
head_frag will be set to 0, in which case the recycling code won't be
called for that skb.
So you are mostly worried about a race within the context of
pskb_expand_head() between copying the frags, releasing the previous
head and preparing the new one (on a cloned skb)?
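To make sure we are talking about the same scenario, something like
this hypothetical sequence:

        skb2 = skb_clone(skb, GFP_ATOMIC);  /* shinfo->dataref: 1 -> 2 */
        /* kmallocs a new head for skb2, skb_frag_ref()s every frag and
         * drops skb2's reference on the old data; skb and skb2 now
         * both have pp_recycle set and their frags point at the same
         * page_pool pages
         */
        pskb_expand_head(skb2, 0, 0, GFP_ATOMIC);
        kfree_skb(skb);
        kfree_skb(skb2);        /* both frees can end up in
                                 * page_pool_return_skb_page() for the
                                 * same pages, potentially concurrently
                                 */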
>
> I suspect that if they both called it at roughly the same time, one of
> them would trigger a NULL pointer dereference, since they would both
> check pp_magic first and then both set pp to NULL. If run on a system
> where dma_unmap_page_attrs takes a while, it would be very likely to
> race, since pp_magic doesn't get cleared until after the page is
> unmapped.
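Ah, I see. So the window would be in page_pool_return_skb_page(), which
from memory looks roughly like:

bool page_pool_return_skb_page(struct page *page)
{
        struct page_pool *pp;

        page = compound_head(page);
        /* Both racing callers can pass this check, since pp_magic is
         * only cleared after the page has been unmapped.
         */
        if (page->pp_magic != PP_SIGNATURE)
                return false;

        pp = page->pp;
        /* The slower caller can read page->pp after it has been set to
         * NULL here and then dereference a NULL pp in
         * page_pool_put_full_page().
         */
        page->pp = NULL;

        page_pool_put_full_page(pp, page, false);
        return true;
}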
Thanks!
/Ilias