Message-ID: <CAC_iWjKWf5cT-L7O6HwkWxGBeKajhqq2DTe1djxmPj04L3P5wg@mail.gmail.com>
Date: Thu, 9 Nov 2023 11:00:44 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com, 
	pabeni@...hat.com, almasrymina@...gle.com, hawk@...nel.org
Subject: Re: [PATCH net-next 02/15] net: page_pool: avoid touching slow on the fastpath

On Tue, 24 Oct 2023 at 19:02, Jakub Kicinski <kuba@...nel.org> wrote:
>
> To fully benefit from the previous commit, add one byte of state
> in the first cache line recording whether we need to look at
> the slow part.
>
> The packing isn't all that impressive right now; we create
> a 7B hole. I'm expecting Olek's rework will reshuffle this
> anyway.
>
> Acked-by: Jesper Dangaard Brouer <hawk@...nel.org>
> Acked-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> Reviewed-by: Mina Almasry <almasrymina@...gle.com>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
>  include/net/page_pool/types.h | 2 ++
>  net/core/page_pool.c          | 4 +++-
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index 23950fcc4eca..e1bb92c192de 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -125,6 +125,8 @@ struct page_pool_stats {
>  struct page_pool {
>         struct page_pool_params_fast p;
>
> +       bool has_init_callback;
> +
>         long frag_users;
>         struct page *frag_page;
>         unsigned int frag_offset;
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 5cae413de7cc..08af9de8e8eb 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -212,6 +212,8 @@ static int page_pool_init(struct page_pool *pool,
>                  */
>         }
>
> +       pool->has_init_callback = !!pool->slow.init_callback;
> +
>  #ifdef CONFIG_PAGE_POOL_STATS
>         pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
>         if (!pool->recycle_stats)
> @@ -385,7 +387,7 @@ static void page_pool_set_pp_info(struct page_pool *pool,
>          * the overhead is negligible.
>          */
>         page_pool_fragment_page(page, 1);
> -       if (pool->slow.init_callback)
> +       if (pool->has_init_callback)
>                 pool->slow.init_callback(page, pool->slow.init_arg);
>  }
>
> --
> 2.41.0
>

Same here, please swap my ack with
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>
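
For reference, a minimal stand-alone user-space sketch of the pattern the
patch applies: derive a bool from the rarely-read "slow" fields once at
init time and keep it in the hot first cache line, so the fast path never
has to load pool->slow. The struct and function names below are purely
illustrative, not the real page_pool layout:

/*
 * Illustrative sketch only -- not the kernel's page_pool structures.
 * Shows (a) caching "slow.init_callback != NULL" as one hot byte and
 * (b) the 7-byte padding hole the patch description mentions, caused by
 * a 1-byte bool being followed by an 8-byte-aligned long.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct example_pool {
	/* hot, first-cache-line data */
	bool has_init_callback;	/* cached copy of slow.init_callback != NULL */
	/* a long follows, so the compiler inserts 7 bytes of padding here */
	long frag_users;

	/* cold ("slow") data, only read at init/teardown time */
	struct {
		void (*init_callback)(void *page, void *arg);
		void *init_arg;
	} slow;
};

static void pool_init(struct example_pool *pool)
{
	/* done once: fold the slow-path pointer into one hot byte */
	pool->has_init_callback = !!pool->slow.init_callback;
}

static void pool_set_page_info(struct example_pool *pool, void *page)
{
	/* fast path: test the cached byte instead of loading pool->slow */
	if (pool->has_init_callback)
		pool->slow.init_callback(page, pool->slow.init_arg);
}

static void demo_cb(void *page, void *arg)
{
	(void)arg;
	printf("init_callback for page %p\n", page);
}

int main(void)
{
	struct example_pool pool = { .slow.init_callback = demo_cb };
	int dummy_page;

	pool_init(&pool);
	pool_set_page_info(&pool, &dummy_page);

	/* prints 7: the hole between the bool and the following long */
	printf("hole after has_init_callback: %zu bytes\n",
	       offsetof(struct example_pool, frag_users) - sizeof(bool));
	return 0;
}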
