Message-Id: <20250630064206.70948-1-yangfeng59949@163.com>
Date: Mon, 30 Jun 2025 14:42:06 +0800
From: Feng Yang <yangfeng59949@....com>
To: david.laight.linux@...il.com
Cc: aleksander.lobakin@...el.com,
almasrymina@...gle.com,
asml.silence@...il.com,
davem@...emloft.net,
ebiggers@...gle.com,
edumazet@...gle.com,
horms@...nel.org,
kerneljasonxing@...il.com,
kuba@...nel.org,
linux-kernel@...r.kernel.org,
netdev@...r.kernel.org,
pabeni@...hat.com,
stfomichev@...il.com,
willemb@...gle.com,
yangfeng59949@....com,
yangfeng@...inos.cn
Subject: Re: [PATCH v2] skbuff: Improve the sending efficiency of __skb_send_sock
On Fri, 27 Jun 2025 03:23:24 -0700 Eric Dumazet <edumazet@...gle.com> wrote:
> On Fri, Jun 27, 2025 at 3:19 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Fri, Jun 27, 2025 at 2:44 AM Feng Yang <yangfeng59949@....com> wrote:
> > >
> > > From: Feng Yang <yangfeng@...inos.cn>
> > >
> > > By aggregating the skb data into a bvec array for transmission,
> > > forwarding a large packet through sockmap now takes a single sendmsg
> > > call where it previously required several, which significantly
> > > improves performance. For small packets, performance remains
> > > comparable to before.
> > >
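For illustration, a minimal kernel-style sketch of the aggregation idea
described above, under stated assumptions: the helper name
send_skb_frags_as_one() is hypothetical and the use of MSG_SPLICE_PAGES
is an illustrative choice, not the actual v2 patch.

/*
 * Hypothetical sketch only (not the v2 patch): gather every page
 * fragment of an skb into one bio_vec array and hand the whole set
 * to sendmsg in a single call, instead of one call per fragment.
 */
#include <linux/skbuff.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <net/sock.h>

static int send_skb_frags_as_one(struct sock *sk, struct sk_buff *skb)
{
	struct bio_vec bvec[MAX_SKB_FRAGS];
	struct msghdr msg = {
		.msg_flags = MSG_DONTWAIT | MSG_SPLICE_PAGES,
	};
	size_t total = 0;
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		/* One bio_vec entry per fragment page. */
		bvec_set_page(&bvec[i], skb_frag_page(frag),
			      skb_frag_size(frag), skb_frag_off(frag));
		total += skb_frag_size(frag);
	}

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bvec, i, total);

	/* All fragments leave in one sendmsg instead of nr_frags calls. */
	return sk->sk_prot->sendmsg(sk, &msg, total);
}
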
> > > When using sockmap for forwarding, the average latency for different packet sizes
> > > after sending 10,000 packets is as follows:
> > > size (bytes)   old (us)   new (us)
> > >  512              56         55
> > > 1472              58         58
> > > 1600             106         79
> > > 3000             145        108
> > > 5000             182        123
> > >
> > > Signed-off-by: Feng Yang <yangfeng@...inos.cn>
> >
> > Instead of changing everything, have you tried strategically adding
> > MSG_MORE in this function?
>
> Untested patch:
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index d6420b74ea9c6a9c53a7c16634cce82a1cd1bbd3..b0f5e8898fdf450129948d829240b570f3cbf9eb 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3252,6 +3252,8 @@ static int __skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset,
>  		kv.iov_len = slen;
>  		memset(&msg, 0, sizeof(msg));
>  		msg.msg_flags = MSG_DONTWAIT | flags;
> +		if (slen < len)
> +			msg.msg_flags |= MSG_MORE;
>  
>  		iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &kv, 1, slen);
>  		ret = INDIRECT_CALL_2(sendmsg, sendmsg_locked,
> @@ -3292,6 +3294,8 @@ static int __skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset,
>  			     flags,
>  		};
>  
> +		if (slen < len)
> +			msg.msg_flags |= MSG_MORE;
>  		bvec_set_page(&bvec, skb_frag_page(frag), slen,
>  			      skb_frag_off(frag) + offset);
>  		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1,
After testing, this shows a performance improvement for large packets
with both TCP and UDP.
Thanks.
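
To make the MSG_MORE semantics in the patch above concrete, here is a
small userspace analogue (an assumed illustration, not code from the
thread): the first send() is flagged MSG_MORE, telling the kernel that
more data for the same logical message follows, so it can coalesce the
pieces rather than push each chunk out separately.

/*
 * Userspace sketch of MSG_MORE: send one logical message in two
 * pieces while letting the kernel combine them on the wire.
 */
#include <sys/socket.h>
#include <sys/types.h>

static ssize_t send_two_parts(int fd, const void *hdr, size_t hlen,
			      const void *payload, size_t plen)
{
	ssize_t n;

	/* More data follows: hold off / coalesce. */
	n = send(fd, hdr, hlen, MSG_MORE);
	if (n < 0)
		return n;

	/* Final piece: no MSG_MORE, so the data may now be pushed out. */
	return send(fd, payload, plen, 0);
}

With UDP the two pieces are built into a single datagram; with TCP the
flag acts like a temporary cork on that send, which is the effect the
patch exploits inside __skb_send_sock.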