Date:   Thu, 18 Oct 2018 10:45:27 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     jiangyiwen <jiangyiwen@...wei.com>, stefanha@...hat.com
Cc:     netdev@...r.kernel.org, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org
Subject: Re: [RFC] VSOCK: The performance problem of vhost_vsock.


On 2018/10/18 9:22 AM, jiangyiwen wrote:
> On 2018/10/17 20:31, Jason Wang wrote:
>>> On 2018/10/17 7:41 PM, jiangyiwen wrote:
>>> On 2018/10/17 17:51, Jason Wang wrote:
>>>> On 2018/10/17 5:39 PM, Jason Wang wrote:
>>>>>> Hi Jason and Stefan,
>>>>>>
>>>>>> Maybe I find the reason of bad performance.
>>>>>>
>>>>>> I found that pkt_len is limited to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
>>>>>> (4K), which caps the bandwidth at 500~600MB/s. Once I increase it
>>>>>> to 64K, throughput improves about 3x (~1500MB/s).
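>>>>>>
>>>>>> For reference, a minimal sketch of where the limit comes from
>>>>>> (assuming the mainline include/linux/virtio_vsock.h and
>>>>>> net/vmw_vsock/virtio_transport.c of that time; details elided):
>>>>>>
>>>>>>     /* include/linux/virtio_vsock.h */
>>>>>>     #define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
>>>>>>
>>>>>>     /* net/vmw_vsock/virtio_transport.c */
>>>>>>     static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
>>>>>>     {
>>>>>>         /* every rx buffer, hence every packet, is capped at 4K */
>>>>>>         int buf_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
>>>>>>         ...
>>>>>>     }
>>>>>>
>>>>>> The experiment above simply raises buf_len to 64K.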
>>>>> Looks like the value was chosen as a balance between rx buffer size and performance. Always allocating 64K, even for small packets, wastes and stresses guest memory. Virtio-net avoids this with mergeable rx buffers, which allow a big packet to be scattered across several smaller buffers. We can reuse this idea, or revisit the idea of using virtio-net/vhost-net as a transport for vsock.
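>>>>>
>>>>> To illustrate, virtio-net's mergeable layout adds a buffer count to
>>>>> the rx header (from include/uapi/linux/virtio_net.h; a vsock
>>>>> equivalent would need a similar field and is hypothetical here):
>>>>>
>>>>>     struct virtio_net_hdr_mrg_rxbuf {
>>>>>         struct virtio_net_hdr hdr;
>>>>>         __virtio16 num_buffers; /* packet spans this many rx buffers */
>>>>>     };
>>>>>
>>>>> With that, the guest posts page-sized buffers and a 64K packet is
>>>>> scattered across ~16 of them instead of needing one 64K allocation.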
>>>>>
>>>>> What's interesting is that the performance is still behind vhost-net.
>>>>>
>>>>> Thanks
>>>>>
>>>>>> By the way, I send 64K at a time from the application, and I don't
>>>>>> use sg_init_one; I rewrote the function to pack the sg list instead,
>>>>>> because pkt_len spans multiple pages.
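>>>>>>
>>>>>> Roughly like this (a sketch only; the names are mine, not the
>>>>>> actual code):
>>>>>>
>>>>>>     #include <linux/kernel.h>
>>>>>>     #include <linux/scatterlist.h>
>>>>>>
>>>>>>     /* pack a multi-page buffer into an sg table; sg_init_one()
>>>>>>      * only describes a single contiguous buffer */
>>>>>>     static int pkt_to_sg(struct scatterlist *sg, struct page **pages,
>>>>>>                          int npages, size_t len)
>>>>>>     {
>>>>>>         int i;
>>>>>>
>>>>>>         sg_init_table(sg, npages);
>>>>>>         for (i = 0; i < npages; i++) {
>>>>>>             size_t n = min_t(size_t, len, PAGE_SIZE);
>>>>>>
>>>>>>             sg_set_page(&sg[i], pages[i], n, 0);
>>>>>>             len -= n;
>>>>>>         }
>>>>>>         return npages;
>>>>>>     }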
>>>>>>
>>>>>> Thanks,
>>>>>> Yiwen.
>>>> Btw, if you're using vsock to transfer large files, it may be more efficient to implement sendpage() for vsock so that sendfile()/splice() work.
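>>>>
>>>> A hypothetical sketch of the wiring in net/vmw_vsock/af_vsock.c
>>>> (vsock_stream_sendpage is a made-up name; its body is the real work):
>>>>
>>>>     static ssize_t vsock_stream_sendpage(struct socket *sock,
>>>>                                          struct page *page, int offset,
>>>>                                          size_t size, int flags)
>>>>     {
>>>>         /* hand the page straight to the transport's tx path here,
>>>>          * avoiding the userspace copy that send() incurs */
>>>>         return -EOPNOTSUPP; /* placeholder */
>>>>     }
>>>>
>>>> and then point .sendpage in vsock_stream_ops at it, so that
>>>> sendfile()/splice() stop falling back to the generic copying path.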
>>>>
>>>> Thanks
>>>>
>>> I can't agree more.
>>>
>>> Why is vhost_vsock still behind vhost_net?
>>> I used sendfile() to test performance at first, but found that vsock
>>> doesn't implement sendpage(), so the bandwidth couldn't be increased.
>>> I then replaced sendfile() with read() and send(), which adds extra
>>> switches between kernel and user mode, whereas sendfile() supports
>>> zero copy. I think this is the main reason.
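>>>
>>> For comparison, the two test paths on the userspace side look like
>>> this (illustrative only):
>>>
>>>     #include <sys/sendfile.h>
>>>     #include <sys/socket.h>
>>>     #include <unistd.h>
>>>
>>>     /* read()+send(): two syscalls and two copies per chunk */
>>>     static ssize_t copy_read_send(int sock, int fd)
>>>     {
>>>         char buf[65536];
>>>         ssize_t n, total = 0;
>>>
>>>         while ((n = read(fd, buf, sizeof(buf))) > 0) {
>>>             if (send(sock, buf, n, 0) != n)
>>>                 return -1;
>>>             total += n;
>>>         }
>>>         return n < 0 ? -1 : total;
>>>     }
>>>
>>>     /* sendfile(): zero copy when the socket implements sendpage();
>>>      * otherwise the kernel falls back to an internal copy */
>>>     static ssize_t copy_sendfile(int sock, int fd, size_t len)
>>>     {
>>>         off_t off = 0;
>>>         return sendfile(sock, fd, &off, len);
>>>     }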
>>>
>>> Thanks.
>>
>> Want to post patches for this, then? :)
>>
>> Thanks
>>
> I can't post patches at the moment because I have other tasks.
>
> After a while, I will consider implementing the feature.
>
> Thanks.


That's fine.

Thanks
