[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <20190409091300.uozhdyikycb5blmn@steredhat>
Date: Tue, 9 Apr 2019 11:13:00 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: netdev@...r.kernel.org, "Michael S. Tsirkin" <mst@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH RFC 0/4] vsock/virtio: optimizations to increase the
throughput
On Mon, Apr 08, 2019 at 02:43:28PM +0800, Jason Wang wrote:
>
> On 2019/4/4 下午6:58, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes:
> > - patch 1/4: reduces the number of credit update messages sent to the
> > transmitter
> > - patch 2/4: allows the host to split packets on multiple buffers,
> > in this way, we can remove the packet size limit to
> > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE
> > - patch 3/4: uses VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size
> > allowed
> > - patch 4/4: increases RX buffer size to 64 KiB (affects only host->guest)
> >
> > RFC:
> > - maybe patch 4 can be replaced with multiple queues with different
> > buffer sizes or using EWMA to adapt the buffer size to the traffic
>
>
> Or EWMA + mergeable rx buffer, but if we decide to unify the datapath with
> virtio-net, we can reuse their codes.
>
>
> >
> > - as Jason suggested in a previous thread [1] I'll evaluate to use
> > virtio-net as transport, but I need to understand better how to
> > interface with it, maybe introducing sk_buff in virtio-vsock.
> >
> > Any suggestions?
>
>
> My understanding is this is not a must, but if it makes things easier, we
> can do this.
Hopefully it should simplify the maintainability and avoid duplicated code.
>
> Another thing that may help is to implement sendpage(), which will greatly
> improve the performance.
Thanks for your suggestions!
I'll try to implement sendpage() in VSOCK to measure the improvement.
Cheers,
Stefano
Powered by blists - more mailing lists