Message-ID: <CAL+tcoCdLA2_N4sC-08X8d+UbE50g-Jf-CTkg-LSi4drVi2ENw@mail.gmail.com>
Date: Sun, 16 Nov 2025 09:07:53 +0800
From: Jason Xing <kerneljasonxing@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, 
	Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, 
	Kuniyuki Iwashima <kuniyu@...gle.com>, netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH v2 net-next 2/3] net: __alloc_skb() cleanup

On Fri, Nov 14, 2025 at 8:12 PM Eric Dumazet <edumazet@...gle.com> wrote:
>
> This patch refactors __alloc_skb() to prepare the following one,
> and does not change functionality.

Well, I think it does change behavior a little bit. Please see below.

>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> ---
>  net/core/skbuff.c | 26 ++++++++++++++++----------
>  1 file changed, 16 insertions(+), 10 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 88b5530f9c460d86e12c98e410774444367e0404..c6b065c0a2af265159ee6188469936767a295729 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -646,25 +646,31 @@ static void *kmalloc_reserve(unsigned int *size, gfp_t flags, int node,
>  struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>                             int flags, int node)
>  {
> +       struct sk_buff *skb = NULL;
>         struct kmem_cache *cache;
> -       struct sk_buff *skb;
>         bool pfmemalloc;
>         u8 *data;
>
> -       cache = (flags & SKB_ALLOC_FCLONE)
> -               ? net_hotdata.skbuff_fclone_cache : net_hotdata.skbuff_cache;
> -
>         if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX))
>                 gfp_mask |= __GFP_MEMALLOC;
>
> -       /* Get the HEAD */
> -       if ((flags & (SKB_ALLOC_FCLONE | SKB_ALLOC_NAPI)) == SKB_ALLOC_NAPI &&
> -           likely(node == NUMA_NO_NODE || node == numa_mem_id()))
> +       if (flags & SKB_ALLOC_FCLONE) {
> +               cache = net_hotdata.skbuff_fclone_cache;
> +               goto fallback;
> +       }
> +       cache = net_hotdata.skbuff_cache;
> +       if (unlikely(node != NUMA_NO_NODE && node != numa_mem_id()))
> +               goto fallback;
> +
> +       if (flags & SKB_ALLOC_NAPI)
>                 skb = napi_skb_cache_get(true);

IIUC, if napi_skb_cache_get() fails to return an skb here, then...

> -       else
> +
> +       if (!skb) {
> +fallback:
>                 skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);

...it falls back to kmem_cache_alloc_node() and retries the allocation,
whereas the old code would simply return NULL?
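
To make the difference concrete, here is a tiny userspace sketch of how I
read the two flows. stub_cache_get() and stub_slab_alloc() are hypothetical
stand-ins for napi_skb_cache_get() and kmem_cache_alloc_node(), not kernel
APIs; the cache stub is made to "fail" so the fallback is visible:

/* Not kernel code: a minimal model of the allocation paths. */
#include <stdio.h>
#include <stdlib.h>

static void *stub_cache_get(void)  { return NULL; }       /* simulate an empty NAPI cache */
static void *stub_slab_alloc(void) { return malloc(32); } /* simulate the slab allocator */

static void *old_flow(int napi)
{
	/* Before the patch: either/or, no second attempt. */
	if (napi)
		return stub_cache_get();
	return stub_slab_alloc();
}

static void *new_flow(int napi)
{
	/* After the patch: try the NAPI cache first, then fall back to the slab. */
	void *skb = NULL;

	if (napi)
		skb = stub_cache_get();
	if (!skb)
		skb = stub_slab_alloc();
	return skb;
}

int main(void)
{
	void *a = old_flow(1);
	void *b = new_flow(1);

	printf("old flow (napi): %p\n", a); /* (nil): caller would see NULL */
	printf("new flow (napi): %p\n", b); /* non-NULL: slab fallback succeeded */
	free(a);
	free(b);
	return 0;
}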

Thanks,
Jason

> -       if (unlikely(!skb))
> -               return NULL;
> +               if (unlikely(!skb))
> +                       return NULL;
> +       }
>         prefetchw(skb);
>
>         /* We do our best to align skb_shared_info on a separate cache
> --
> 2.52.0.rc1.455.g30608eb744-goog
>
