Message-ID: <8edff784-b311-bfb6-1bf4-1970d564279d@redhat.com>
Date: Wed, 17 Oct 2018 17:51:23 +0800
From: Jason Wang <jasowang@...hat.com>
To: jiangyiwen <jiangyiwen@...wei.com>, stefanha@...hat.com
Cc: netdev@...r.kernel.org, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org
Subject: Re: [RFC] VSOCK: The performance problem of vhost_vsock.
On 2018/10/17 5:39 PM, Jason Wang wrote:
>>>
>> Hi Jason and Stefan,
>>
>> Maybe I find the reason of bad performance.
>>
>> I found that pkt_len is limited to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K),
>> which limits the bandwidth to 500~600MB/s. Once I increase it to 64K,
>> the bandwidth improves about 3x (~1500MB/s).
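For reference, the limit being described here is, if I recall correctly,
the clamp in virtio_transport_send_pkt_info() in
net/vmw_vsock/virtio_transport_common.c. A rough sketch from memory, not a
verbatim quote:

static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
					  struct virtio_vsock_pkt_info *info)
{
	u32 pkt_len = info->pkt_len;
	...
	/* we can send less than pkt_len bytes */
	if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)		/* 4K */
		pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
	...
}

So a 64K write from the application gets split into sixteen 4K packets,
each carrying its own header.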
>
>
> Looks like the value was chosen as a balance between rx buffer size
> and performance. Always allocating 64K, even for small packets, wastes
> guest memory and puts pressure on it. Virtio-net avoids this with
> mergeable rx buffers, which allow a big packet to be scattered across
> several buffers. We could reuse that idea, or revisit the idea of
> using virtio-net/vhost-net as a transport for vsock.
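To illustrate the mergeable rx buffer idea: this is the virtio-net
concept, not existing vsock code, and the struct and helper below are
hypothetical. The device tells the driver how many rx buffers a single
packet was scattered across, and the driver gathers them back together:

struct virtio_vsock_hdr_mrg_rxbuf {		/* hypothetical */
	struct virtio_vsock_hdr hdr;
	__le16 num_buffers;
};

	/* driver rx path, roughly what virtio-net's receive_mergeable() does */
	u16 num = le16_to_cpu(mrg_hdr->num_buffers);

	while (--num) {
		unsigned int len;
		void *buf = virtqueue_get_buf(rx_vq, &len); /* next chunk */

		if (!buf)
			break;				/* malformed packet */
		vsock_pkt_append(pkt, buf, len);	/* hypothetical helper */
	}

That way small packets keep using small buffers while big packets span
several of them.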
>
> What's interesting is that the performance is still behind vhost-net.
>
> Thanks
>
>>
>> By the way, I send 64K at once in the application, and I don't use
>> sg_init_one; I rewrote the function that packs the sg list, because
>> pkt_len covers multiple pages.
>>
>> Thanks,
>> Yiwen.
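On the sg list point, a rough sketch of what packing a multi-page payload
looks like with sg_set_page() instead of sg_init_one(). This assumes
pkt->buf/pkt->len as in struct virtio_vsock_pkt and a vmalloc'ed buffer;
the fixed array bound is just for illustration:

	struct scatterlist sg[16];
	int nents = DIV_ROUND_UP(pkt->len, PAGE_SIZE);
	size_t left = pkt->len;
	int i;

	sg_init_table(sg, nents);
	for (i = 0; i < nents; i++) {
		size_t chunk = min_t(size_t, left, PAGE_SIZE);

		sg_set_page(&sg[i], vmalloc_to_page(pkt->buf + i * PAGE_SIZE),
			    chunk, 0);
		left -= chunk;
	}
	/* sg[] can then be handed to virtqueue_add_outbuf() as usual */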
Btw, if you're using vsock to transfer large files, it may be more
efficient to implement sendpage() for vsock to allow sendfile()/splice()
to work.
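From userspace that would look roughly like the following (CID and port
are made-up example values; without a vsock ->sendpage() this presumably
falls back to a copying path today):

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid    = 3,	/* example guest CID */
		.svm_port   = 1234,	/* example port */
	};
	int s = socket(AF_VSOCK, SOCK_STREAM, 0);
	int fd = open("big.file", O_RDONLY);
	struct stat st;
	off_t off = 0;

	connect(s, (struct sockaddr *)&addr, sizeof(addr));
	fstat(fd, &st);
	/* hands file pages to the socket without a userspace copy */
	sendfile(s, fd, &off, st.st_size);
	return 0;
}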
Thanks