Message-ID: <64d3921ed1f1a_267bde294f2@willemb.c.googlers.com.notmuch>
Date: Wed, 09 Aug 2023 09:18:22 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Tahsin Erdogan <trdgn@...zon.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Herbert Xu <herbert@...dor.apana.org.au>
Cc: Tahsin Erdogan <trdgn@...zon.com>,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: RE: [PATCH v3] tun: avoid high-order page allocation for packet
header
Tahsin Erdogan wrote:
> When GSO is not enabled
Not GSO, but gso.hdr_len, which is a feature of IFF_VNET_HDR.
VIRTIO_NET_HDR_GSO_* does not need to be enabled to use the
header length field.
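
For the record, a rough (untested) userspace sketch of that case; the fd
is assumed to be opened with IFF_VNET_HDR, and tun_fd, pkt and pkt_len
are placeholders. hdr_len is filled in even though no GSO type is
requested; the problem case below is when it is left at 0:

	struct virtio_net_hdr vnet = {
		.gso_type = VIRTIO_NET_HDR_GSO_NONE,	/* no GSO requested */
		.hdr_len  = ETH_HLEN + 20 + 20,		/* eth + IPv4 + TCP headers */
	};
	struct iovec iov[2] = {
		{ .iov_base = &vnet, .iov_len = sizeof(vnet) },
		{ .iov_base = pkt,   .iov_len = pkt_len },
	};
	writev(tun_fd, iov, 2);
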
> and a packet is transmitted via writev(), all
> payload is treated as header, which requires a contiguous memory allocation.
> This allocation request is harder to satisfy, and may even fail if memory is
> sufficiently fragmented.
>
> Note that the sendmsg() code path limits the linear copy length, so this change
> makes writev() and sendmsg() more consistent.
This is not specific to writev(); it applies equally to the more common write().
Tun sendmsg is a special case, only used by vhost-net from inside the
kernel. Arguably consistency with packet_snd/packet_alloc_skb would be
more important. That said, this makes sense to me. I assume you're
configuring a device with a very large MTU?
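
For illustration (my numbers, not from the patch; assuming 4 KiB pages
and a ~64 KB write() with hdr_len == 0):

	before: linear = len (~64 KB)  -> one high-order contiguous head allocation
	after:  linear = min(good_linear, copylen)
	                               -> head capped at good_linear (about a page),
	                                  the rest copied into page frags
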
> Signed-off-by: Tahsin Erdogan <trdgn@...zon.com>
> ---
> v3: rebase to latest net-next
> v2: replace linear == 0 with !linear
> v1: https://lore.kernel.org/all/20230726030936.1587269-1-trdgn@amazon.com/
> drivers/net/tun.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 5beb6b5dd7e5..53d19c958a20 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1523,7 +1523,7 @@ static struct sk_buff *tun_alloc_skb(struct tun_file *tfile,
> int err;
>
> /* Under a page? Don't bother with paged skb. */
> - if (prepad + len < PAGE_SIZE || !linear)
> + if (prepad + len < PAGE_SIZE)
> linear = len;
>
> if (len - linear > MAX_SKB_FRAGS * (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
> @@ -1913,6 +1913,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
> */
> zerocopy = false;
> } else {
> + if (!linear)
> + linear = min_t(size_t, good_linear, copylen);
> +
> skb = tun_alloc_skb(tfile, align, copylen, linear,
> noblock);
> }
> --
> 2.41.0
>