Message-ID: <b44684a5-13b4-4717-a653-cfd0c920bb49@kernel.dk>
Date: Fri, 12 Apr 2024 07:44:36 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Pavel Begunkov <asml.silence@...il.com>, io-uring@...r.kernel.org,
netdev@...r.kernel.org
Cc: "David S . Miller" <davem@...emloft.net>, Jakub Kicinski
<kuba@...nel.org>, David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>
Subject: Re: [RFC 0/6] implement io_uring notification (ubuf_info) stacking
On 4/12/24 6:55 AM, Pavel Begunkov wrote:
> io_uring allocates a ubuf_info per zerocopy send request, which is convenient
> for userspace, but as things stand it means the TCP stack has to allocate a
> new skb for every send instead of appending to a previous one. Unless sends
> are big enough, this produces lots of small skbs, straining the stack and
> hurting performance.
>
> The patchset implements notification (i.e. io_uring's ubuf_info extension)
> stacking: ubuf_info's are linked into a list, and the entire chain is put
> down together once all references are gone.
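
In toy form, the stacking idea is roughly the below (hypothetical
notif/notif_chain types, not the actual patchset code): several
notifications are linked into one chain, and the whole chain is only put
down once every reference the stack holds has been dropped.

/*
 * Userspace toy model of notification stacking; the real thing lives in
 * io_uring and the ubuf_info machinery, this only mirrors the refcounting.
 */
#include <stdio.h>
#include <stdlib.h>

struct notif {
	struct notif *next;		/* singly-linked chain of notifications */
	int id;
};

struct notif_chain {
	struct notif *head, *tail;
	int refs;			/* refs held by in-flight sends/skbs */
};

static void chain_add(struct notif_chain *c, struct notif *n)
{
	n->next = NULL;
	if (c->tail)
		c->tail->next = n;
	else
		c->head = n;
	c->tail = n;
	c->refs++;			/* one reference per queued notification */
}

static void chain_put(struct notif_chain *c)
{
	if (--c->refs)
		return;
	/* last reference gone: put the entire chain down together */
	for (struct notif *n = c->head; n; ) {
		struct notif *next = n->next;
		printf("complete notif %d\n", n->id);
		free(n);
		n = next;
	}
	c->head = c->tail = NULL;
}

int main(void)
{
	struct notif_chain chain = { 0 };

	for (int i = 0; i < 3; i++) {
		struct notif *n = calloc(1, sizeof(*n));
		n->id = i;
		chain_add(&chain, n);
	}
	chain_put(&chain);
	chain_put(&chain);
	chain_put(&chain);		/* last put completes all three */
	return 0;
}
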
Excellent! I'll take a closer look, but I ran a quick test with my test
tool just to see the difference. This is on a 100G link.
Packet size   Before (Mbit)   After (Mbit)   Diff
====================================================
        100             290           1250   4.3x
        200             560           2460   4.4x
        400            1190           4900   4.1x
        800            2300           9700   4.2x
       1600            4500          19100   4.2x
       3200            8900          35000   3.9x
which are just rough numbers and the tool isn't that great, but
definitely encouraging. And it does have parity with sync MSG_ZEROCOPY,
which is what really bugged me before.
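
For reference, the sync MSG_ZEROCOPY baseline being compared against looks
roughly like the below. Sketch only: send_zc_once is a made-up helper name,
error handling is trimmed, the socket is assumed connected, and real code
would poll before reaping the error-queue completion rather than calling
recvmsg() blindly.

#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY	60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY	0x4000000
#endif

/* send one buffer with MSG_ZEROCOPY and reap its completion */
static int send_zc_once(int fd, const void *buf, size_t len)
{
	int one = 1;
	char ctrl[128];
	struct msghdr msg = {
		.msg_control = ctrl,
		.msg_controllen = sizeof(ctrl),
	};
	struct sock_extended_err *serr;
	struct cmsghdr *cm;

	/* opt the socket in once, then flag each send */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		return -errno;
	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -errno;

	/* completion notification arrives on the socket error queue */
	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		return -errno;

	cm = CMSG_FIRSTHDR(&msg);
	if (!cm)
		return -EIO;
	serr = (struct sock_extended_err *)CMSG_DATA(cm);
	/* serr->ee_info..ee_data is the range of completed zerocopy sends */
	return serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY ? 0 : -EIO;
}
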
--
Jens Axboe