Message-ID: <Yzoou4UwOv5lh0hE@bullseye>
Date: Mon, 3 Oct 2022 00:11:39 +0000
From: Bobby Eshleman <bobbyeshleman@...il.com>
To: Stefano Garzarella <sgarzare@...hat.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Bobby Eshleman <bobby.eshleman@...il.com>,
Bobby Eshleman <bobby.eshleman@...edance.com>,
Cong Wang <cong.wang@...edance.com>,
Jiang Wang <jiang.wang@...edance.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] vsock: replace virtio_vsock_pkt with sk_buff
On Thu, Oct 06, 2022 at 09:34:10AM +0200, Stefano Garzarella wrote:
> On Thu, Oct 06, 2022 at 03:08:12AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Oct 05, 2022 at 06:19:44PM -0700, Bobby Eshleman wrote:
> > > This patch replaces the struct virtio_vsock_pkt with struct sk_buff.
> > >
> > > Using sk_buff in vsock benefits it by a) allowing vsock to be extended
> > > for socket-related features like sockmap, b) allowing vsock to use
> > > other sk_buff-dependent kernel capabilities in the future, and c)
> > > increasing the commonality vsock shares with other socket types.
> > >
> > > This patch is taken from the original series found here:
> > > https://lore.kernel.org/all/cover.1660362668.git.bobby.eshleman@bytedance.com/
> > >
> > > Small-sized packet throughput improved by ~5% (from 18.53 Mb/s to 19.51
> > > Mb/s). Tested using uperf, 16B payloads, 64 threads, 100s, averaged from
> > > 10 test runs (n=10). This improvement is likely due to packet merging.
> > >
> > > Large-sized packet throughput decreases ~9% (from 27.25 Gb/s to 25.04
> > > Gb/s). Tested using uperf, 64KB payloads, 64 threads, 100s, averaged
> > > from 10 test runs (n=10).
> > >
> > > Medium-sized packet throughput decreases ~5% (from 4.0 Gb/s to 3.81
> > > Gb/s). Tested using uperf, 4k to 8k payload sizes picked randomly
> > > according to a normal distribution, 64 threads, 100s, averaged from 10
> > > test runs (n=10).
> >
> > It is surprising to me that the original vsock code managed to outperform
> > the new one, given that to my knowledge we did not focus on optimizing it.
>
> Yeah, me too.
>
Indeed.
> From these numbers, maybe the allocation cost has been reduced, since we
> perform better with small packets. But with medium to large packets we
> perform worse, perhaps because previously we were allocating a contiguous
> buffer of up to 64k?
> Could alloc_skb() instead be allocating non-contiguous pages? (That would
> solve the problems we saw a few days ago.)
>
I think that would be the case with alloc_skb_with_frags(), but alloc_skb()
internally uses kmalloc() for the payload and a slab allocation
(skbuff_head_cache) for the sk_buff itself. That makes the result all the
more confusing to me, since the prior allocator also used two separate
allocations per packet.
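To make that concrete, here is a rough, untested sketch of the two
allocation patterns as I understand them (payload/payload_len are
placeholders, error handling omitted):

#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/virtio_vsock.h>

static void vsock_alloc_sketch(const void *payload, size_t payload_len)
{
	struct virtio_vsock_pkt *pkt;
	struct sk_buff *skb;

	/* Old path: two allocations per packet. */
	pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
	/* Contiguous payload buffer, up to 64k. */
	pkt->buf = kmalloc(payload_len, GFP_KERNEL);
	memcpy(pkt->buf, payload, payload_len);

	/*
	 * New path: also two allocations, the sk_buff itself from the
	 * skbuff_head_cache slab plus a kmalloc()'d linear data area,
	 * so the payload is still physically contiguous. (The real code
	 * also reserves headroom for the virtio_vsock header.)
	 */
	skb = alloc_skb(payload_len, GFP_KERNEL);
	skb_put_data(skb, payload, payload_len);

	/*
	 * alloc_skb_with_frags() is the variant that can spread the
	 * payload over non-contiguous pages (paged frags) instead of
	 * one kmalloc().
	 */

	kfree_skb(skb);
	kfree(pkt->buf);
	kfree(pkt);
}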
> @Bobby Are these numbers for guest -> host communication? Can we try the
> reverse path as well?
>
Yep, these are guest -> host. Unfortunately, the numbers are worse for
host -> guest. Running the same tests, but 100+ times instead of just 10,
for h2g sockets:

  16B payload throughput decreases ~8%.
  4K-8KB payload throughput decreases ~15%.
  64KB payload throughput decreases ~8%.

I'm currently working on tracking down the root cause and seeing if
there is some way around the performance hit.

Sorry for the delayed response; it took a while to collect enough data
to feel confident I wasn't just seeing noise.
Best,
Bobby