Message-ID: <CAKgT0Ucnd4Oia8xy2D65O04901+Rh6cepX-d2vK1+0_Of2NwoA@mail.gmail.com>
Date:   Thu, 8 Jul 2021 08:41:08 -0700
From:   Alexander Duyck <alexander.duyck@...il.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     Yunsheng Lin <linyunsheng@...wei.com>,
        David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, linuxarm@...neuler.org,
        yisen.zhuang@...wei.com, Salil Mehta <salil.mehta@...wei.com>,
        thomas.petazzoni@...tlin.com, Marcin Wojtas <mw@...ihalf.com>,
        Russell King - ARM Linux <linux@...linux.org.uk>,
        hawk@...nel.org, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Will Deacon <will@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>, fenghua.yu@...el.com,
        guro@...com, peterx@...hat.com, Feng Tang <feng.tang@...el.com>,
        Jason Gunthorpe <jgg@...pe.ca>, mcroce@...rosoft.com,
        Hugh Dickins <hughd@...gle.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Alexander Lobakin <alobakin@...me>,
        Willem de Bruijn <willemb@...gle.com>, wenxu@...oud.cn,
        cong.wang@...edance.com, Kevin Hao <haokexin@...il.com>,
        nogikh@...gle.com, Marco Elver <elver@...gle.com>,
        Netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH net-next RFC 1/2] page_pool: add page recycling support
 based on elevated refcnt

On Thu, Jul 8, 2021 at 8:36 AM Ilias Apalodimas
<ilias.apalodimas@...aro.org> wrote:
>
> On Thu, Jul 08, 2021 at 08:29:56AM -0700, Alexander Duyck wrote:
> > On Thu, Jul 8, 2021 at 8:17 AM Ilias Apalodimas
> > <ilias.apalodimas@...aro.org> wrote:

<snip>

> > > What do you think about resetting the pp_recycle bit on pskb_expand_head()?
> >
> > I assume you mean specifically in the cloned case?
> >
>
> Yes. Even if we do it unconditionally we'll just lose non-cloned buffers from
> the recycling.
> I'll send a patch later today.

If you do it unconditionally you could leak DMA mappings: in the
non-cloned case we don't bother releasing the shared info, since we
just did a memcpy of it without the reference-count tweaks. We have
to be really careful here. The idea is that we have to make exactly
one call to the __page_pool_put_page function for this page.
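
Roughly, the cloned-only version would look something like this in
pskb_expand_head() (just a sketch against net/core/skbuff.c, heavily
abridged, and not the patch Ilias is going to send):

int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, gfp_t gfp_mask)
{
	/* [abridged: allocate the new head, memcpy the data and the
	 *  skb_shared_info over without touching any page refcounts]
	 */

	if (skb_cloned(skb)) {
		int i;

		/* The clone still references the old head, so take plain
		 * page references for the frags we just copied.
		 */
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
			skb_frag_ref(skb, i);
		if (skb_has_frag_list(skb))
			skb_clone_fraglist(skb);
		skb_release_data(skb);

		/* Sketch: this skb now holds ordinary elevated page refs,
		 * so let whoever frees the clone's shinfo be the one and
		 * only caller of __page_pool_put_page() for these pages.
		 */
		skb->pp_recycle = 0;
	} else {
		/* Not cloned: the memcpy'd shinfo is the only reference to
		 * the frags, so pp_recycle has to stay set here, otherwise
		 * the page_pool pages (and their DMA mappings) are never
		 * returned.
		 */
		skb_free_head(skb);
	}

	/* [abridged: switch skb->head over to the new buffer, return 0] */
}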

> > > If my memory serves me right Eric wanted that from the beginning. Then the
> > > cloned/expanded SKB won't trigger the recycling.  If that skb hits the free
> > > path first, we'll end up recycling the fragments eventually.  If the
> > > original one goes first, we'll just unmap the page(s) and freeing the cloned
> > > one will free all the remaining buffers.
> >
> > I *think* that should be fine. Effectively what we are doing is making
> > it so that if the original skb is freed first the pages are released,
> > and if it is freed after the clone/expanded skb then the pages can be
> > recycled.
>
> Exactly
>
> >
> > The issue is that we have to maintain the invariant of exactly one
> > caller of the recycling function for the pages. So any spot where we
> > are updating skb->head we will have to see if there is a clone and if
> > so we have to clear the pp_recycle flag on our skb so that it doesn't
> > try to recycle the page frags as well.
>
> Correct. I'll keep looking around in case there's something less fragile we
> can do.

That is the risk with this kind of thing. We have to make the call
once and only once; if we either miss it or call it too many times we
can introduce some serious issues.
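
To make the "once and only once" point concrete: the flag basically
just gates whether the free path hands a frag page back to its pool,
roughly along these lines (a paraphrased sketch assuming the helpers
from the recycling series, e.g. page_pool_return_skb_page(), not
verbatim from the tree):

/* Paraphrased sketch of the frag-unref side of the recycling series. */
static void frag_unref_sketch(skb_frag_t *frag, bool pp_recycle)
{
	struct page *page = skb_frag_page(frag);

	/* Recycle bit set: hand the page back to its page_pool, which
	 * keeps the DMA mapping alive for reuse.  Two callers doing this
	 * for the same page return it twice; zero callers and the mapping
	 * is never released.
	 */
	if (pp_recycle && page_pool_return_skb_page(page))
		return;

	/* Recycle bit cleared (or the page didn't come from a pool):
	 * plain refcount drop.  Exactly one remaining holder still has to
	 * take the __page_pool_put_page() path, otherwise the mapping
	 * leaks.
	 */
	put_page(page);
}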

Thanks.

- Alex
