 
Date: Thu, 13 Jun 2024 07:18:23 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Paul Barker <paul.barker.ct@...renesas.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org, 
	linux-doc@...r.kernel.org, linux-alpha@...r.kernel.org, 
	linux-mips@...r.kernel.org, linux-parisc@...r.kernel.org, 
	sparclinux@...r.kernel.org, linux-renesas-soc@...r.kernel.org, 
	linux-trace-kernel@...r.kernel.org, linux-arch@...r.kernel.org, 
	bpf@...r.kernel.org, linux-kselftest@...r.kernel.org, 
	linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org, 
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, 
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, 
	Donald Hunter <donald.hunter@...il.com>, Jonathan Corbet <corbet@....net>, 
	Richard Henderson <richard.henderson@...aro.org>, Ivan Kokshaysky <ink@...assic.park.msu.ru>, 
	Matt Turner <mattst88@...il.com>, Thomas Bogendoerfer <tsbogend@...ha.franken.de>, 
	"James E.J. Bottomley" <James.Bottomley@...senpartnership.com>, Helge Deller <deller@....de>, 
	Andreas Larsson <andreas@...sler.com>, Sergey Shtylyov <s.shtylyov@....ru>, 
	Jesper Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>, 
	Steven Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>, 
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Arnd Bergmann <arnd@...db.de>, 
	Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>, 
	Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, 
	Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, 
	Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>, 
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, 
	Jiri Olsa <jolsa@...nel.org>, Steffen Klassert <steffen.klassert@...unet.com>, 
	Herbert Xu <herbert@...dor.apana.org.au>, David Ahern <dsahern@...nel.org>, 
	Willem de Bruijn <willemdebruijn.kernel@...il.com>, Shuah Khan <shuah@...nel.org>, 
	Sumit Semwal <sumit.semwal@...aro.org>, Christian König <christian.koenig@....com>, 
	Bagas Sanjaya <bagasdotme@...il.com>, Christoph Hellwig <hch@...radead.org>, 
	Nikolay Aleksandrov <razor@...ckwall.org>, Pavel Begunkov <asml.silence@...il.com>, David Wei <dw@...idwei.uk>, 
	Jason Gunthorpe <jgg@...pe.ca>, Yunsheng Lin <linyunsheng@...wei.com>, 
	Shailend Chand <shailend@...gle.com>, Harshitha Ramamurthy <hramamurthy@...gle.com>, 
	Shakeel Butt <shakeel.butt@...ux.dev>, Jeroen de Borst <jeroendb@...gle.com>, 
	Praveen Kaligineedi <pkaligineedi@...gle.com>, linux-mm@...ck.org, 
	Matthew Wilcox <willy@...radead.org>
Subject: Re: [PATCH net-next v12 05/13] page_pool: convert to use netmem

On Thu, Jun 13, 2024 at 1:36 AM Paul Barker
<paul.barker.ct@...renesas.com> wrote:
>
> On 13/06/2024 02:35, Mina Almasry wrote:
> > Abstrace the memory type from the page_pool so we can later add support
>
> s/Abstrace/Abstract/
>

Thanks, will do.

> > for new memory types. Convert the page_pool to use the new netmem type
> > abstraction, rather than use struct page directly.
> >
> > As of this patch the netmem type is a no-op abstraction: it's always a
> > struct page underneath. All the page pool internals are converted to
> > use struct netmem instead of struct page, and the page pool now exports
> > 2 APIs:
> >
> > 1. The existing struct page API.
> > 2. The new struct netmem API.
> >
> > Keeping the existing API is transitional; we do not want to refactor all
> > the current drivers using the page pool at once.
> >
> > The netmem abstraction is currently a no-op. The page_pool uses
> > page_to_netmem() to convert allocated pages to netmem, and uses
> > netmem_to_page() to convert the netmem back to pages to pass to mm APIs,
> >
> > Follow up patches to this series add non-paged netmem support to the
> > page_pool. This change is factored out on its own to limit the code
> > churn to this 1 patch, for ease of code review.
> >
> > Signed-off-by: Mina Almasry <almasrymina@...gle.com>
> >
> > ---
> >
> > v12:
> > - Fix allmodconfig build error. Very recently renesas/ravb_main.c added
> >   a dependency on page_pool that I missed in my rebase. The dependency
> >   calls page_pool_alloc() directly as it wants to set a custom gfp_mask,
> >   which is unique as all other drivers call a wrapper to that function.
> >   Fix it by adding netmem_to_page() in the driver.
> > - Fix netmem trace printing (Pavel).
> >
> > v11:
> > - Fix typing to remove sparse warning. (Paolo/Steven)
> >
> > v9:
> > - Fix sparse error (Simon).
> >
> > v8:
> > - Fix napi_pp_put_page() taking netmem instead of page to fix
> >   patch-by-patch build error.
> > - Add net/netmem.h include in this patch to fix patch-by-patch build
> >   error.
> >
> > v6:
> >
> > - Rebased on top of the merged netmem_ref type.
> >
> > Cc: linux-mm@...ck.org
> > Cc: Matthew Wilcox <willy@...radead.org>
> >
> > ---
> >  drivers/net/ethernet/renesas/ravb_main.c |   5 +-
> >  include/linux/skbuff_ref.h               |   4 +-
> >  include/net/netmem.h                     |  15 ++
> >  include/net/page_pool/helpers.h          | 120 ++++++---
> >  include/net/page_pool/types.h            |  14 +-
> >  include/trace/events/page_pool.h         |  30 +--
> >  net/bpf/test_run.c                       |   5 +-
> >  net/core/page_pool.c                     | 304 ++++++++++++-----------
> >  net/core/skbuff.c                        |   8 +-
> >  9 files changed, 305 insertions(+), 200 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
> > index c1546b916e4ef..093236ebfeecb 100644
> > --- a/drivers/net/ethernet/renesas/ravb_main.c
> > +++ b/drivers/net/ethernet/renesas/ravb_main.c
> > @@ -303,8 +303,9 @@ ravb_alloc_rx_buffer(struct net_device *ndev, int q, u32 entry, gfp_t gfp_mask,
> >
> >       rx_buff = &priv->rx_buffers[q][entry];
> >       size = info->rx_buffer_size;
> > -     rx_buff->page = page_pool_alloc(priv->rx_pool[q], &rx_buff->offset,
> > -                                     &size, gfp_mask);
> > +     rx_buff->page = netmem_to_page(page_pool_alloc(priv->rx_pool[q],
> > +                                                    &rx_buff->offset,
> > +                                                    &size, gfp_mask));
> >       if (unlikely(!rx_buff->page)) {
> >               /* We just set the data size to 0 for a failed mapping which
> >                * should prevent DMA from happening...
>
> [snip]
>
> >
> > -static inline struct page *page_pool_alloc(struct page_pool *pool,
> > -                                        unsigned int *offset,
> > -                                        unsigned int *size, gfp_t gfp)
> > +static inline netmem_ref page_pool_alloc(struct page_pool *pool,
> > +                                      unsigned int *offset,
> > +                                      unsigned int *size, gfp_t gfp)
> >  {
> >       unsigned int max_size = PAGE_SIZE << pool->p.order;
> > -     struct page *page;
> > +     netmem_ref netmem;
> >
> >       if ((*size << 1) > max_size) {
> >               *size = max_size;
> >               *offset = 0;
> > -             return page_pool_alloc_pages(pool, gfp);
> > +             return page_pool_alloc_netmem(pool, gfp);
> >       }
> >
> > -     page = page_pool_alloc_frag(pool, offset, *size, gfp);
> > -     if (unlikely(!page))
> > -             return NULL;
> > +     netmem = page_pool_alloc_frag_netmem(pool, offset, *size, gfp);
> > +     if (unlikely(!netmem))
> > +             return 0;
> >
> >       /* There is very likely not enough space for another fragment, so append
> >        * the remaining size to the current fragment to avoid truesize
> > @@ -140,7 +142,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
> >               pool->frag_offset = max_size;
> >       }
> >
> > -     return page;
> > +     return netmem;
> >  }
> >
> >  /**
> > @@ -154,7 +156,7 @@ static inline struct page *page_pool_alloc(struct page_pool *pool,
> >   * utilization and performance penalty.
> >   *
> >   * Return:
> > - * Return allocated page or page fragment, otherwise return NULL.
> > + * Return allocated page or page fragment, otherwise return 0.
> >   */
> >  static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
> >                                              unsigned int *offset,
> > @@ -162,7 +164,7 @@ static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
> >  {
> >       gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
> >
> > -     return page_pool_alloc(pool, offset, size, gfp);
> > +     return netmem_to_page(page_pool_alloc(pool, offset, size, gfp));
> >  }
>
> I find this API change confusing - why should page_pool_alloc() return a
> netmem_ref but page_pool_dev_alloc() return a struct page *?
>
> Is there any reason to change page_pool_alloc() anyway? It calls
> page_pool_alloc_pages() or page_pool_alloc_frag() as appropriate, both
> of which your patch already converts to wrappers around the appropriate
> _netmem() functions. In all instances where page_pool_alloc() is called
> in this patch, you wrap it with netmem_to_page() anyway, there are no
> calls to page_pool_alloc() added which actually want a netmem_ref.
>

The general gist is that the page_pool API is being converted to use
netmem_ref instead of struct page. The existing struct page API is kept
around transitionally, but it is meant to be removed once everything has
moved to netmem.

APIs that current drivers depend on, like page_pool_dev_alloc(), I've
kept returning struct page and added netmem versions where needed. For
APIs that had no external users, like page_pool_alloc(), I took the
opportunity to move them to netmem immediately. But your driver recently
started depending on page_pool_alloc().

I thought page_pool_alloc() was internal to the page_pool and not meant
to be called from drivers, but the documentation actually mentions it.
It seems I need to keep it as a struct page * function transitionally as
well. I'll look into making the change you suggested; there is no caller
that needs a netmem_ref from page_pool_alloc() at the moment.


--
Thanks,
Mina
