Date:   Tue, 6 Nov 2018 10:41:24 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     jiangyiwen <jiangyiwen@...wei.com>, stefanha@...hat.com
Cc:     netdev@...r.kernel.org, kvm@...r.kernel.org,
        virtualization@...ts.linux-foundation.org
Subject: Re: [PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock


On 2018/11/6 10:17 AM, jiangyiwen wrote:
> On 2018/11/5 17:21, Jason Wang wrote:
>> On 2018/11/5 3:43 PM, jiangyiwen wrote:
>>> Currently vsock only supports sending/receiving small packets, so it
>>> can't achieve high performance. As previously discussed with Jason
>>> Wang, I revisited the mergeable rx buffer idea from vhost-net and
>>> implemented it in vhost-vsock; it allows a big packet to be scattered
>>> across several buffers and improves performance noticeably.
>>>
>>> I wrote a tool to test vhost-vsock performance, mainly sending big
>>> packets (64K) in both the Guest->Host and Host->Guest directions. The
>>> results are as follows:
>>>
>>> Performance before:
>>>                Single socket    Multiple sockets (max bandwidth)
>>> Guest->Host    ~400 MB/s        ~480 MB/s
>>> Host->Guest    ~1450 MB/s       ~1600 MB/s
>>>
>>> Performance after:
>>>                Single socket    Multiple sockets (max bandwidth)
>>> Guest->Host    ~1700 MB/s       ~2900 MB/s
>>> Host->Guest    ~1700 MB/s       ~2900 MB/s
>>>
>>> From the test results, performance improves significantly, and guest
>>> memory is no longer wasted.
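
For context: this is the same trick virtio-net uses with
VIRTIO_NET_F_MRG_RXBUF, where the per-packet header carries the number
of descriptor chains the packet was scattered across:

struct virtio_net_hdr_mrg_rxbuf {
        struct virtio_net_hdr hdr;
        __virtio16 num_buffers;         /* number of merged rx buffers */
};

A vsock version presumably adds a similar counter to the vsock packet
header; the sketch below is illustrative only, the actual layout in
this series may differ:

struct virtio_vsock_mrg_rxbuf_hdr {
        struct virtio_vsock_hdr hdr;    /* existing vsock packet header */
        __le16 num_buffers;             /* rx buffers this packet spans */
};
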
>> Hi:
>>
>> Thanks for the patches and the numbers are really impressive.
>>
>> But instead of duplicating code between vsock and net, I was
>> considering using virtio-net as a transport for vsock. Then we would
>> get all the existing features like batching, mergeable rx buffers and
>> multiqueue for free. Would you like to consider this idea? Thoughts?
>>
>>
> Hi Jason,
>
> I am not very familiar with virtio-net, so I'm afraid I can't give
> much useful advice. I have several questions:
>
> 1. If we use virtio-net as a transport, the guest should see a
> virtio-net device instead of a virtio-vsock device, right? Would vsock
> act only as a transport layer between the socket and the net_device?
> Users would still create sockets with the AF_VSOCK type, right?


Well, there are many choices. What you need is just to keep the socket
API and hide the implementation. For example, you could keep the vsock
device in the guest and switch to vhost-net on the host. We would
probably need a new feature bit or header to let vhost know we are
passing vsock packets, and vhost-net could then forward them to the
vsock core on the host.
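
Very roughly, the encapsulation could look like the sketch below. All
names and numbers here are made up for illustration; this is not a
concrete ABI proposal:

/* Hypothetical feature bit: this device's rings carry vsock packets. */
#define VIRTIO_NET_F_VSOCK      59      /* placeholder bit number */

/* Hypothetical per-packet header for vsock-over-virtio-net. */
struct virtio_net_vsock_hdr {
        /* Existing virtio-net header, so batching and mergeable rx
         * buffers come for free. */
        struct virtio_net_hdr_mrg_rxbuf net_hdr;
        /* Existing vsock metadata: CIDs, ports, op and credit info. */
        struct virtio_vsock_hdr vsock_hdr;
};

When such a feature bit is negotiated, vhost-net would parse the vsock
header and hand the payload to the vsock core on the host instead of
the tap device.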


>
> 2. Has work on this idea already started, and if so, what is the
> current progress?


Not yet started. I just wanted to hear from the community first. If
this sounds good, would you be interested in implementing it?


>
> 3. And what is Stefan's opinion?


I talked with Stefan a little about this during KVM Forum, and I think
he tends to agree with the idea. Anyway, let's wait for his reply.


Thanks


>
> Thanks,
> Yiwen.
>
