Message-ID: <CF8FF91A-2197-47F7-882B-33967C9C6089@nutanix.com>
Date: Tue, 2 Dec 2025 16:49:54 +0000
From: Jon Kohler <jon@...anix.com>
To: Jason Wang <jasowang@...hat.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Willem de Bruijn
	<willemdebruijn.kernel@...il.com>,
        Andrew Lunn <andrew+netdev@...n.ch>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>, Alexei
 Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper
 Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        Stanislav Fomichev <sdf@...ichev.me>,
        open list
	<linux-kernel@...r.kernel.org>,
        "open list:XDP (eXpress Data
 Path):Keyword:(?:b|_)xdp(?:b|_)" <bpf@...r.kernel.org>
Subject: Re: [PATCH net-next v2 5/9] tun: use bulk NAPI cache allocation in tun_xdp_one



> On Nov 27, 2025, at 10:02 PM, Jason Wang <jasowang@...hat.com> wrote:
> 
> On Wed, Nov 26, 2025 at 3:19 AM Jon Kohler <jon@...anix.com> wrote:
>> 
>> Optimize TUN_MSG_PTR batch processing by allocating sk_buff structures
>> in bulk from the per-CPU NAPI cache using napi_skb_cache_get_bulk.
>> This reduces allocation overhead and improves efficiency, especially
>> when IFF_NAPI is enabled and GRO is feeding entries back to the cache.
> 
> Does this mean we should only enable this when NAPI is used?

No, it does not mean that at all, but I see why that would be confusing.
I can clean up the commit msg on the next go around.
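For background, as I understand the implementation, napi_skb_cache_get_bulk()
fills from the per-CPU NAPI cache first and falls back to the slab bulk
allocator for whatever the cache cannot cover, so the win applies without
IFF_NAPI too; NAPI/GRO recycling just raises the cache hit rate. Roughly
(signature only, behavior paraphrased from net/core/skbuff.c):

    /* Fill skbs[] from the per-CPU NAPI cache, topping the cache up
     * via kmem_cache_alloc_bulk() as needed; returns how many of the
     * n requested skbs were actually obtained.
     */
    u32 napi_skb_cache_get_bulk(void **skbs, u32 n);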

>> 
>> If bulk allocation cannot fully satisfy the batch, gracefully drop only
>> the uncovered portion, allowing the rest of the batch to proceed, which
>> is what already happens in the previous case where build_skb() would
>> fail and return -ENOMEM.
>> 
>> Signed-off-by: Jon Kohler <jon@...anix.com>
> 
> Do we have any benchmark result for this?

Yes, it is in the cover letter:
https://patchwork.kernel.org/project/netdevbpf/cover/20251125200041.1565663-1-jon@nutanix.com/

>> ---
>> drivers/net/tun.c | 30 ++++++++++++++++++++++++------
>> 1 file changed, 24 insertions(+), 6 deletions(-)
>> 
>> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
>> index 97f130bc5fed..64f944cce517 100644
>> --- a/drivers/net/tun.c
>> +++ b/drivers/net/tun.c
>> @@ -2420,13 +2420,13 @@ static void tun_put_page(struct tun_page *tpage)
>> static int tun_xdp_one(struct tun_struct *tun,
>>                       struct tun_file *tfile,
>>                       struct xdp_buff *xdp, int *flush,
>> -                      struct tun_page *tpage)
>> +                      struct tun_page *tpage,
>> +                      struct sk_buff *skb)
>> {
>>        unsigned int datasize = xdp->data_end - xdp->data;
>>        struct virtio_net_hdr *gso = xdp->data_hard_start;
>>        struct virtio_net_hdr_v1_hash_tunnel *tnl_hdr;
>>        struct bpf_prog *xdp_prog;
>> -       struct sk_buff *skb = NULL;
>>        struct sk_buff_head *queue;
>>        netdev_features_t features;
>>        u32 rxhash = 0, act;
>> @@ -2437,6 +2437,7 @@ static int tun_xdp_one(struct tun_struct *tun,
>>        struct page *page;
>> 
>>        if (unlikely(datasize < ETH_HLEN)) {
>> +               kfree_skb_reason(skb, SKB_DROP_REASON_PKT_TOO_SMALL);
>>                dev_core_stats_rx_dropped_inc(tun->dev);
>>                return -EINVAL;
>>        }
>> @@ -2454,6 +2455,7 @@ static int tun_xdp_one(struct tun_struct *tun,
>>                ret = tun_xdp_act(tun, xdp_prog, xdp, act);
>>                if (ret < 0) {
>>                        /* tun_xdp_act already handles drop statistics */
>> +                       kfree_skb_reason(skb, SKB_DROP_REASON_XDP);
> 
> This should belong to previous patches?

Well, not really, as we did not even have an SKB to free at this point
in the previous code.
> 
>>                        put_page(virt_to_head_page(xdp->data));
>>                        return ret;
>>                }
>> @@ -2463,6 +2465,7 @@ static int tun_xdp_one(struct tun_struct *tun,
>>                        *flush = true;
>>                        fallthrough;
>>                case XDP_TX:
>> +                       napi_consume_skb(skb, 1);
>>                        return 0;
>>                case XDP_PASS:
>>                        break;
>> @@ -2475,13 +2478,15 @@ static int tun_xdp_one(struct tun_struct *tun,
>>                                tpage->page = page;
>>                                tpage->count = 1;
>>                        }
>> +                       napi_consume_skb(skb, 1);
> 
> I wonder if this would have any side effects since tun_xdp_one() is
> not called by a NAPI.

As far as I can tell, the NAPI association is really just an artifact of
how napi_consume_skb was named and how it was traditionally used.

Here it is simply a napi_consume_skb call inside a bh disable/enable
section, which should meet the requirements of that interface
(again, AFAICT).
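
For reference, the relevant part of napi_consume_skb() (paraphrased from
net/core/skbuff.c, not verbatim) only takes the cache-recycling path when
budget is nonzero, and that path expects softirq/BH context rather than a
literal NAPI poll:

    void napi_consume_skb(struct sk_buff *skb, int budget)
    {
            /* zero budget means a non-NAPI caller, e.g. netpoll */
            if (unlikely(!budget)) {
                    dev_consume_skb_any(skb);
                    return;
            }

            DEBUG_NET_WARN_ON_ONCE(!in_softirq());
            ...
            /* non-cloned, fully released skbs go back to the cache */
            napi_skb_cache_put(skb);
    }

so a bh-disabled section should satisfy it.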

> 
>>                        return 0;
>>                }
>>        }
>> 
>> build:
>> -       skb = build_skb(xdp->data_hard_start, buflen);
>> +       skb = build_skb_around(skb, xdp->data_hard_start, buflen);
>>        if (!skb) {
>> +               kfree_skb_reason(skb, SKB_DROP_REASON_NOMEM);

Though to your point, I don't think this actually does anything: if the
skb was somehow nuked as part of build_skb_around(), there would not be
an skb to free. It doesn't hurt, though, from a self-documenting code
perspective?
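
(For what it's worth, kfree_skb_reason() is NULL-safe, so the call is
harmless either way; paraphrasing the guard it goes through:

    /* include/linux/skbuff.h, paraphrased */
    static inline bool skb_unref(struct sk_buff *skb)
    {
            if (unlikely(!skb))
                    return false;
            ...
    }

kfree_skb_reason() returns early when skb_unref() fails, so passing a
NULL skb really is a no-op.)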

>>                dev_core_stats_rx_dropped_inc(tun->dev);
>>                return -ENOMEM;
>>        }
>> @@ -2566,9 +2571,11 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
>>        if (m->msg_controllen == sizeof(struct tun_msg_ctl) &&
>>            ctl && ctl->type == TUN_MSG_PTR) {
>>                struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
>> +               int flush = 0, queued = 0, num_skbs = 0;
>>                struct tun_page tpage;
>>                int n = ctl->num;
>> -               int flush = 0, queued = 0;
>> +               /* Max size of VHOST_NET_BATCH */
>> +               void *skbs[64];
> 
> I think we need some tweaks
> 
> 1) TUN is decoupled from vhost, so it should have its own value (a
> macro is better)

Sure, I can make another constant that does a similar thing.

> 2) Provide a way to fail or handle the case when more than 64

What if we simply assert that the maximum here is 64, which I think
is what it actually is in practice?
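
Something along these lines, perhaps (TUN_XDP_BATCH is a hypothetical
name, not an existing macro, and the error path is just a sketch):

    /* tun-local batch bound, decoupled from vhost's VHOST_NET_BATCH;
     * both happen to be 64 today
     */
    #define TUN_XDP_BATCH 64

            void *skbs[TUN_XDP_BATCH];
            int n = ctl->num;

            /* fail loudly rather than overrun skbs[] if a caller
             * ever hands us a larger batch
             */
            if (WARN_ON_ONCE(n > TUN_XDP_BATCH))
                    return -EINVAL;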

> 
>> 
>>                memset(&tpage, 0, sizeof(tpage));
>> 
>> @@ -2576,13 +2583,24 @@ static int tun_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
>>                rcu_read_lock();
>>                bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
>> 
>> -               for (i = 0; i < n; i++) {
>> +               num_skbs = napi_skb_cache_get_bulk(skbs, n);
> 
> Its document said:
> 
> """
> * Must be called *only* from the BH context.
> “"”
We’re in a bh_disable section here; is that not good enough?
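
As I read the current tun_sendmsg(), the whole TUN_MSG_PTR path is
bracketed roughly like this, which is also what makes the
napi_consume_skb() calls above legal:

            local_bh_disable();
            rcu_read_lock();
            bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);

            num_skbs = napi_skb_cache_get_bulk(skbs, n); /* BH disabled */
            ...
            bpf_net_ctx_clear(bpf_net_ctx);
            rcu_read_unlock();
            local_bh_enable();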
> 
>> +
>> +               for (i = 0; i < num_skbs; i++) {
>> +                       struct sk_buff *skb = skbs[i];
>>                        xdp = &((struct xdp_buff *)ctl->ptr)[i];
>> -                       ret = tun_xdp_one(tun, tfile, xdp, &flush, &tpage);
>> +                       ret = tun_xdp_one(tun, tfile, xdp, &flush, &tpage,
>> +                                         skb);
>>                        if (ret > 0)
>>                                queued += ret;
>>                }
>> 
>> +               /* Handle remaining xdp_buff entries if num_skbs < ctl->num */
>> +               for (i = num_skbs; i < ctl->num; i++) {
>> +                       xdp = &((struct xdp_buff *)ctl->ptr)[i];
>> +                       dev_core_stats_rx_dropped_inc(tun->dev);
> 
> Could we do this in a batch?

I suspect this will be a very, very rare case, so I don't think optimizing
it (or complicating it any further) does much good, no?
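
If we did want it batched, I'd imagine something like the below
(dev_core_stats_rx_dropped_add() is hypothetical; today the per-CPU
counter only has an _inc helper, and the page puts still need the
per-entry virt_to_head_page() walk):

            /* hypothetical: account all leftover drops in one shot */
            if (num_skbs < ctl->num) {
                    for (i = num_skbs; i < ctl->num; i++) {
                            xdp = &((struct xdp_buff *)ctl->ptr)[i];
                            put_page(virt_to_head_page(xdp->data));
                    }
                    dev_core_stats_rx_dropped_add(tun->dev,
                                                  ctl->num - num_skbs);
            }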

> 
>> +                       put_page(virt_to_head_page(xdp->data));
>> +               }
>> +
>>                if (flush)
>>                        xdp_do_flush();
>> 
>> --
>> 2.43.0
>> 
> 
> Thanks

