Message-ID: <21d6dbd9-8f78-6939-0e80-27b470aeb00a@redhat.com>
Date: Fri, 9 Nov 2018 10:25:28 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Tiwei Bie <tiwei.bie@...el.com>,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
virtio-dev@...ts.oasis-open.org, wexu@...hat.com,
jfreimann@...hat.com
Subject: Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
On 2018/11/8 10:14 PM, Michael S. Tsirkin wrote:
> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
>> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>>>> +
>>>>> +	if (vq->vq.num_free < descs_used) {
>>>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>>>> +			 descs_used, vq->vq.num_free);
>>>>> +		/* FIXME: for historical reasons, we force a notify here if
>>>>> +		 * there are outgoing parts to the buffer. Presumably the
>>>>> +		 * host should service the ring ASAP. */
>>>> I don't think we have a reason to do this for packed ring.
>>>> No historical baggage there, right?
>>> Based on the original commit log, it seems that the notify here
>>> is just an "optimization". But I don't quite understand what
>>> "the heuristics which KVM uses" refers to. If it's safe to drop
>>> this in packed ring, I'd like to do it.
>>
>> According to the commit log, it seems like a workaround for the lguest
>> networking backend. I agree we should drop it; we should not carry that
>> burden.
>>
>> But note that, with this removed, the comparison between packed and
>> split rings becomes somewhat unfair.
> To be frank, I don't think this ever triggers. When would it?
I think it can happen, e.g. in the XDP transmission path in
__virtnet_xdp_xmit_one():
	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
	if (unlikely(err))
		return -ENOSPC; /* Caller handle free/refcnt */
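
For illustration, here is a minimal self-contained sketch of that failure
path; tx_ring and ring_add_outbuf are hypothetical stand-ins for the
virtqueue internals, not the kernel API:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct tx_ring {
	unsigned int num_free;	/* free descriptors left */
	bool notify_suppressed;	/* host asked not to be kicked */
};

/* Mirrors the shape of virtqueue_add_outbuf() failing on a full ring:
 * the historical behaviour kicks the host even when notifications
 * are suppressed. */
static int ring_add_outbuf(struct tx_ring *r)
{
	if (r->num_free == 0) {
		/* the forced notify under discussion */
		printf("ring full: kick host (suppressed=%d)\n",
		       r->notify_suppressed);
		return -ENOSPC;
	}
	r->num_free--;
	return 0;
}

int main(void)
{
	struct tx_ring r = { .num_free = 2, .notify_suppressed = true };
	int i;

	/* The third add fails and forces a notify, as in the FIXME above. */
	for (i = 0; i < 3; i++)
		if (ring_add_outbuf(&r))
			printf("frame %d dropped\n", i);
	return 0;
}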
>
>> Considering the recent removal of lguest support,
>> maybe we can drop this for the split ring as well?
>>
>> Thanks
> If it's helpful, then for sure we can drop it for virtio 1.
> Can you see any perf differences at all? With which device?
I haven't tested it, but consider the case of XDP_TX in the guest plus
vhost_net in the host. Since vhost_net is half duplex, it's pretty easy to
trigger this condition.
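
A toy model of that scenario (nothing below is vhost code; the ring size
and frame count are made up) shows why a half-duplex backend lets the TX
ring fill: while the host thread is busy on the RX side, no TX descriptors
are reclaimed, so the guest's XDP_TX stream runs straight into the
-ENOSPC path:

#include <stdio.h>

#define RING_SIZE 8

int main(void)
{
	unsigned int num_free = RING_SIZE;
	unsigned int dropped = 0;
	int frame;

	/* Host busy on RX: no TX completions arrive, so nothing is
	 * reclaimed while the guest keeps forwarding XDP_TX frames. */
	for (frame = 0; frame < 16; frame++) {
		if (num_free == 0) {
			dropped++;	/* virtqueue_add_outbuf() -> -ENOSPC */
			continue;
		}
		num_free--;
	}
	printf("queued %u, dropped %u\n", RING_SIZE - num_free, dropped);
	return 0;
}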
Thanks
>
>>> commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
>>> Author: Rusty Russell <rusty@...tcorp.com.au>
>>> Date: Fri Jul 25 12:06:04 2008 -0500
>>>
>>> virtio: don't always force a notification when ring is full
>>> We force notification when the ring is full, even if the host has
>>> indicated it doesn't want to know. This seemed like a good idea at
>>> the time: if we fill the transmit ring, we should tell the host
>>> immediately.
>>> Unfortunately this logic also applies to the receiving ring, which is
>>> refilled constantly. We should introduce real notification thresholds
>>> to replace this logic. Meanwhile, removing the logic altogether breaks
>>> the heuristics which KVM uses, so we use a hack: only notify if there are
>>> outgoing parts of the new buffer.
>>> Here are the number of exits with lguest's crappy network implementation:
>>> Before:
>>> network xmit 7859051 recv 236420
>>> After:
>>> network xmit 7858610 recv 118136
>>> Signed-off-by: Rusty Russell <rusty@...tcorp.com.au>
>>>
>>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>>> index 72bf8bc09014..21d9a62767af 100644
>>> --- a/drivers/virtio/virtio_ring.c
>>> +++ b/drivers/virtio/virtio_ring.c
>>> @@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
>>>  	if (vq->num_free < out + in) {
>>>  		pr_debug("Can't add buf len %i - avail = %i\n",
>>>  			 out + in, vq->num_free);
>>> -		/* We notify *even if* VRING_USED_F_NO_NOTIFY is set here. */
>>> -		vq->notify(&vq->vq);
>>> +		/* FIXME: for historical reasons, we force a notify here if
>>> +		 * there are outgoing parts to the buffer. Presumably the
>>> +		 * host should service the ring ASAP. */
>>> +		if (out)
>>> +			vq->notify(&vq->vq);
>>>  		END_USE(vq);
>>>  		return -ENOSPC;
>>>  	}
>>>
>>>
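
For reference, if the forced notify were dropped as discussed, the failure
path in the hunk above would reduce to something like the following
untested sketch, relying on the normal event-suppression machinery to
decide when the host gets kicked:

	if (vq->num_free < out + in) {
		pr_debug("Can't add buf len %i - avail = %i\n",
			 out + in, vq->num_free);
		/* No forced notify: let the usual suppression flags
		 * decide when the host gets kicked. */
		END_USE(vq);
		return -ENOSPC;
	}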