Message-ID: <CAKhg4tKXfos+M=rmu25B=dCmS_uzmBy743BB=6NBZgBMWnHobA@mail.gmail.com>
Date: Mon, 11 Dec 2023 11:31:06 +0800
From: Liang Chen <liangchen.linux@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com, 
	hawk@...nel.org, ilias.apalodimas@...aro.org, linyunsheng@...wei.com, 
	netdev@...r.kernel.org, linux-mm@...ck.org, jasowang@...hat.com
Subject: Re: [PATCH net-next v7 1/4] page_pool: transition to reference count
 management after page draining

On Sat, Dec 9, 2023 at 9:38 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed,  6 Dec 2023 18:54:16 +0800 Liang Chen wrote:
> > -/* pp_frag_count represents the number of writers who can update the page
> > +/* pp_ref_count represents the number of writers who can update the page
> >   * either by updating skb->data or via DMA mappings for the device.
> >   * We can't rely on the page refcnt for that as we don't know who might be
> >   * holding page references and we can't reliably destroy or sync DMA mappings
> >   * of the fragments.
> >   *
> > - * When pp_frag_count reaches 0 we can either recycle the page if the page
> > + * pp_ref_count initially corresponds to the number of fragments. However,
> > + * when multiple users start to reference a single fragment, for example in
> > + * skb_try_coalesce, the pp_ref_count will become greater than the number of
> > + * fragments.
> > + *
> > + * When pp_ref_count reaches 0 we can either recycle the page if the page
> >   * refcnt is 1 or return it back to the memory allocator and destroy any
> >   * mappings we have.
> >   */
>
> Sorry to nitpick, but I think this whole doc has to be rewritten
> completely. It doesn't state the most important thing, which is that
> the caller must have just allocated the page.
>
> How about:
>
> /**
>  * page_pool_fragment_page() - split a fresh page into fragments
>  * @.. fill these in
>  *
>  * pp_ref_count represents the number of outstanding references
>  * to the page, which will be freed using page_pool APIs (rather
>  * than page allocator APIs like put_page()). Such references are
>  * usually held by page_pool-aware objects like skbs marked for
>  * page pool recycling.
>  *
>  * This helper allows the caller to take (set) multiple references
>  * to a freshly allocated page. The page must be freshly allocated
>  * (have a pp_ref_count of 1). This is commonly done by drivers
>  * and "fragment allocators" to save atomic operations - either
>  * when they know upfront how many references they will need; or
>  * to take MAX references and return the unused ones with a single
>  * atomic dec(), instead of performing multiple atomic inc()
>  * operations.
>  */
>
> I think that's more informative at this stage of evolution of
> the page pool API, when most users aren't experts on internals.
> But feel free to disagree..
>

Thanks for the help! This is certainly better.
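
For anyone reading along who is less familiar with the page pool internals,
the "take MAX references and return the unused ones" pattern described above
would look roughly like the sketch below on a driver RX fill path. This is
only an illustration, not code from this series: my_rx_fill(),
my_post_rx_buffers() and MY_MAX_FRAGS are made-up names, and it assumes the
helpers as renamed by this patch (page_pool_fragment_page(),
page_pool_unref_page(), page_pool_put_unrefed_page()).

#include <linux/errno.h>
#include <net/page_pool/helpers.h>

#define MY_MAX_FRAGS	8	/* made-up per-page fragment budget */

/* Hypothetical helper: posts up to MY_MAX_FRAGS buffers carved out of @page
 * to the RX ring and returns how many it actually used.
 */
unsigned int my_post_rx_buffers(struct page *page);

/* Illustrative sketch only: fill the RX ring from one page_pool page and
 * give back whatever references were not consumed.
 */
static int my_rx_fill(struct page_pool *pool)
{
	struct page *page;
	unsigned int used;
	long unused;

	page = page_pool_dev_alloc_pages(pool);
	if (!page)
		return -ENOMEM;

	/* The page is freshly allocated (pp_ref_count == 1), so we may set
	 * the count to the maximum number of fragments we could hand out,
	 * saving one atomic inc() per fragment.
	 */
	page_pool_fragment_page(page, MY_MAX_FRAGS);

	used = my_post_rx_buffers(page);

	/* Return the references we did not consume with a single atomic
	 * dec(). If that drops pp_ref_count to 0 we held the last
	 * references, so hand the page back to the pool ourselves.
	 */
	unused = MY_MAX_FRAGS - used;
	if (unused && page_pool_unref_page(page, unused) == 0)
		page_pool_put_unrefed_page(pool, page, -1, false);

	return used;
}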

> >  static inline void page_pool_fragment_page(struct page *page, long nr)
> >  {
> > -     atomic_long_set(&page->pp_frag_count, nr);
> > +     atomic_long_set(&page->pp_ref_count, nr);
> >  }
>
> The code itself and the rest of the patches LGTM, although it would be
> great to get ACKs from pp maintainers..
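
For completeness, the "multiple users start to reference a single fragment"
case mentioned in the comment above (e.g. skb_try_coalesce) boils down to
taking an additional pp_ref_count reference rather than a page refcount
reference. A rough sketch of the idea only, not the actual code from patch
4, assuming the page_pool_ref_page() increment helper:

#include <net/page_pool/helpers.h>

/* Sketch: a second page_pool-aware user starts sharing an already handed-out
 * fragment. It takes its reference on pp_ref_count instead of on the page
 * refcount, so the page still drains back into the pool once the last such
 * user drops its reference.
 */
static void share_pp_fragment(struct page *page)
{
	page_pool_ref_page(page);	/* atomic_long_inc(&page->pp_ref_count) */
}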
