Message-ID: <20210503092937.78a1eb05@carbon>
Date:   Mon, 3 May 2021 09:29:37 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     brouer@...hat.com, Yunsheng Lin <linyunsheng@...wei.com>,
        Matteo Croce <mcroce@...ux.microsoft.com>,
        Networking <netdev@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Ayush Sawal <ayush.sawal@...lsio.com>,
        Vinay Kumar Yadav <vinay.yadav@...lsio.com>,
        Rohit Maheshwari <rohitm@...lsio.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Marcin Wojtas <mw@...ihalf.com>,
        Russell King <linux@...linux.org.uk>,
        Mirko Lindner <mlindner@...vell.com>,
        Stephen Hemminger <stephen@...workplumber.org>,
        Tariq Toukan <tariqt@...dia.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Boris Pismenny <borisp@...dia.com>,
        Arnd Bergmann <arnd@...db.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>, Yu Zhao <yuzhao@...gle.com>,
        Will Deacon <will@...nel.org>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Roman Gushchin <guro@...com>, Hugh Dickins <hughd@...gle.com>,
        Peter Xu <peterx@...hat.com>, Jason Gunthorpe <jgg@...pe.ca>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Alexander Lobakin <alobakin@...me>,
        Cong Wang <cong.wang@...edance.com>, wenxu <wenxu@...oud.cn>,
        Kevin Hao <haokexin@...il.com>,
        Jakub Sitnicki <jakub@...udflare.com>,
        Marco Elver <elver@...gle.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Guillaume Nault <gnault@...hat.com>,
        open list <linux-kernel@...r.kernel.org>,
        linux-rdma@...r.kernel.org, bpf <bpf@...r.kernel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Eric Dumazet <edumazet@...gle.com>,
        David Ahern <dsahern@...il.com>,
        Lorenzo Bianconi <lorenzo@...nel.org>,
        Saeed Mahameed <saeedm@...dia.com>,
        Andrew Lunn <andrew@...n.ch>, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH net-next v3 0/5] page_pool: recycle buffers

On Fri, 30 Apr 2021 20:32:07 +0300
Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:

> (-cc invalid emails)
> Replying to myself here, but...
> 
> [...]
> > > >
> > > > We can't do that. The reason we need those structs is that we rely on the
> > > > existing XDP code, which already recycles its buffers, to enable
> > > > recycling.  Since we allocate a page per packet when using page_pool for a
> > > > driver, the same ideas apply to an SKB and XDP frame. We just recycle the
> > >
> > > I am not really familiar with XDP here, but a packet from hw is either a
> > > "struct xdp_frame/xdp_buff" for XDP or a "struct sk_buff" for the TCP/IP
> > > stack; a packet cannot be both "struct xdp_frame/xdp_buff" and
> > > "struct sk_buff" at the same time, right?
> > >  
> >
> > Yes, but the payload is irrelevant in both cases and that's what we use
> > page_pool for.  You can't use this patchset unless your driver uses
> > build_skb().  So in both cases you just allocate memory for the payload and
> > decide what to wrap the buffer with (XDP or SKB) later.
> >  
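> > Roughly like this, if it helps to see it spelled out (a sketch only;
> > page_pool_dev_alloc_pages()/build_skb() are the real APIs, the
> > myrx_one() glue around them is made up):
> >
> > 	struct sk_buff *myrx_one(struct page_pool *pool, int len)
> > 	{
> > 		struct sk_buff *skb;
> > 		struct page *page;
> > 		void *va;
> >
> > 		/* one full page per packet, owned by the pool */
> > 		page = page_pool_dev_alloc_pages(pool);
> > 		if (!page)
> > 			return NULL;
> > 		va = page_address(page);
> >
> > 		/* HW DMAs the frame into 'va'; an XDP program can
> > 		 * run on this buffer before we commit to an skb */
> >
> > 		/* only now pick the wrapper around the same payload */
> > 		skb = build_skb(va, PAGE_SIZE);
> > 		if (!skb) {
> > 			page_pool_put_full_page(pool, page, true);
> > 			return NULL;
> > 		}
> > 		skb_put(skb, len);
> > 		/* this series would mark the skb for recycling here */
> > 		return skb;
> > 	}
> >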
> > > What does not really make sense to me is that the page has to be from a
> > > page pool when an skb's frag page can be recycled, right? If that is true,
> > > the switch case in __xdp_return() does not really make sense for skb
> > > recycling: why go to all the trouble of checking mem->type and mem->id to
> > > find the page_pool pointer when a recyclable page for an skb can only
> > > come from a page pool?
> >
> > In any case you need to find which pool the buffer you are trying to
> > recycle belongs to.  In order to make the whole idea generic and be able
> > to recycle skb fragments instead of just the skb head, you need to store
> > some information in struct page.  That's the fundamental difference of
> > this patchset compared to the RFC we sent a few years back [1], which
> > was just storing information on the skb.  The way this is done in the
> > current patchset is that we store the struct xdp_mem_info in
> > page->private and then look it up in xdp_return().
> >
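> > Schematically the recycle path then looks like this (a rough sketch,
> > not the literal patch; xdp_mem_id_to_pool() is a made-up stand-in for
> > the mem_id rhashtable lookup that __xdp_return() already does):
> >
> > 	/* {type, id} fits in an unsigned long on 64-bit,
> > 	 * so it can live in page->private */
> > 	union pp_priv {
> > 		struct xdp_mem_info mem;
> > 		unsigned long priv;
> > 	};
> >
> > 	void skb_frag_recycle(struct page *page)
> > 	{
> > 		union pp_priv u = { .priv = page_private(page) };
> > 		struct page_pool *pool;
> >
> > 		switch (u.mem.type) {
> > 		case MEM_TYPE_PAGE_POOL:
> > 			/* id -> pool, via the same table XDP uses */
> > 			pool = xdp_mem_id_to_pool(u.mem.id);
> > 			page_pool_put_full_page(pool, page, false);
> > 			break;
> > 		default:
> > 			put_page(page);
> > 		}
> > 	}
> >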
> > Now that being said, Matthew recently reworked struct page, so we could
> > see if we can store the page pool pointer directly instead of the struct
> > xdp_mem_info. That would allow us to call into page pool functions
> > directly.  But we'll have to agree on whether that belongs in struct page
> > to begin with, and make sure the pointer is still valid when we take the
> > recycling path.
> >  
> 
> Thinking more about it, the reason that prevented us from storing a
> page pool pointer directly is not there anymore. Jesper fixed that
> already a while back, so we might as well store the page_pool ptr in
> page->private and call into the functions directly.  I'll have a look
> before v4.
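> 
> The simplified version would be something like this (sketch only;
> safe only if the pool is guaranteed to outlive its in-flight pages):
> 
> 	/* alloc side: remember the owning pool */
> 	set_page_private(page, (unsigned long)pool);
> 
> 	/* recycle side: no mem->type/mem->id indirection,
> 	 * call straight into the owning pool */
> 	struct page_pool *pp =
> 		(struct page_pool *)page_private(page);
> 	page_pool_put_full_page(pp, page, napi_direct);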

I want to give credit to Jonathan Lemon, who came up with the idea of
storing the page_pool object that "owns" the page directly in struct
page.  I see this as an optimization that we can add later, so it
doesn't block this patchset.  As Ilias mentioned, it required some
work/changes [1]+[2] to guarantee that the page_pool object's lifetime
is longer than all the outstanding in-flight page objects, but that has
been stable for several kernel releases now.  This is already
needed/used for making sure the DMA mappings can be safely released [1],
but I enabled the same in-flight tracking on purpose for page_pool
users that don't use the DMA-mapping feature (to make sure the code is
exercised).
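
The accounting behind that guarantee is conceptually just two counters
compared at teardown; simplified from what [1]+[2] implement:

	/* hold_cnt bumps when a page leaves the pool,
	 * release_cnt when one comes back or is freed */
	static s32 page_pool_inflight(struct page_pool *pool)
	{
		u32 hold = READ_ONCE(pool->pages_state_hold_cnt);
		u32 rel  = atomic_read(&pool->pages_state_release_cnt);

		return (s32)(hold - rel);
	}

	/* page_pool_destroy() defers freeing the pool itself until
	 * inflight reaches zero, which is what makes a pool pointer
	 * stashed in struct page safe on the recycle path */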


[1] 99c07c43c4ea ("xdp: tracking page_pool resources and safe removal")
[2] c3f812cea0d7 ("page_pool: do not release pool until inflight == 0.")
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
