Message-ID: <20180516055520.GA21116@debian>
Date: Wed, 16 May 2018 13:55:20 +0800
From: Tiwei Bie <tiwei.bie@...el.com>
To: Jason Wang <jasowang@...hat.com>
Cc: mst@...hat.com, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
wexu@...hat.com, jfreimann@...hat.com
Subject: Re: [RFC v3 4/5] virtio_ring: add event idx support in packed ring
On Wed, May 16, 2018 at 01:01:04PM +0800, Jason Wang wrote:
> On 2018-04-25 13:15, Tiwei Bie wrote:
[...]
> > @@ -1143,10 +1160,17 @@ static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
> > /* We optimistically turn back on interrupts, then check if there was
> > * more to do. */
> > + /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
> > + * either clear the flags bit or point the event index at the next
> > + * entry. Always update the event index to keep code simple. */
> > +
> > + vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
> > + vq->last_used_idx | (vq->wrap_counter << 15));
>
>
> Using vq->wrap_counter seems not correct; what we need is the wrap counter
> for last_used_idx, not next_avail_idx.
Yes, you're right. I have fixed it in my local repo,
but haven't sent out a new version yet.
I'll try to send out a new RFC today.
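Just to sketch the direction of the fix (illustrative only, not the
actual patch -- it assumes a separate used_wrap_counter field that
tracks the wrap state of last_used_idx, which this RFC doesn't have):

	/* Sketch: use the wrap counter that belongs to last_used_idx
	 * (the assumed used_wrap_counter field), not the avail-side
	 * vq->wrap_counter. */
	vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
			vq->last_used_idx | (vq->used_wrap_counter << 15));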
>
> And I think there's even no need to bother with the event idx here; how about
> just setting VRING_EVENT_F_ENABLE?
We had a similar discussion before. Michael prefers
to use VRING_EVENT_F_DESC when possible, to avoid
extra interrupts if the host is fast:
https://lkml.org/lkml/2018/4/16/1085
"""
I suspect this will lead to extra interrupts if host is fast.
So I think for now we should always use VRING_EVENT_F_DESC
if EVENT_IDX is negotiated.
"""
>
> > if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> > virtio_wmb(vq->weak_barriers);
> > - vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> > + vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
> > + VRING_EVENT_F_ENABLE;
> > vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> > vq->event_flags_shadow);
> > }
> > @@ -1172,15 +1196,34 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
> > static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
> > {
> > struct vring_virtqueue *vq = to_vvq(_vq);
> > + u16 bufs, used_idx, wrap_counter;
> > START_USE(vq);
> > /* We optimistically turn back on interrupts, then check if there was
> > * more to do. */
> > + /* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
> > + * either clear the flags bit or point the event index at the next
> > + * entry. Always update the event index to keep code simple. */
> > +
> > + /* TODO: tune this threshold */
> > + bufs = (u16)(vq->next_avail_idx - vq->last_used_idx) * 3 / 4;
>
> bufs could be more than vq->num here; is this intended?
Yes, you're right. Like the issue above, I have fixed
it in my local repo, but haven't sent out a new version
yet. Thanks for spotting this!
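Roughly, the idea is to compute the in-flight count modulo the ring
size before scaling it. Something along these lines (untested sketch;
the fix in my local repo may differ):

	/* Sketch only: keep the distance within the ring size, so
	 * bufs can never exceed vq->vring_packed.num. */
	bufs = vq->next_avail_idx - vq->last_used_idx;
	if (vq->next_avail_idx < vq->last_used_idx)
		bufs += vq->vring_packed.num;
	/* TODO: tune this threshold */
	bufs = bufs * 3 / 4;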
>
> > +
> > + used_idx = vq->last_used_idx + bufs;
> > + wrap_counter = vq->wrap_counter;
> > +
> > + if (used_idx >= vq->vring_packed.num) {
> > + used_idx -= vq->vring_packed.num;
> > + wrap_counter ^= 1;
>
> When used_idx is greater than or equal to vq->num, there's no need to flip the
> wrap_counter bit, since it should match next_avail_idx.
>
> And we also need to care about the case when next_avail wraps but used_idx does
> not, so we probably need:
>
> else if (vq->next_avail_idx < used_idx) {
> 	wrap_counter ^= 1;
> }
>
> I think maybe it's time to add some sample code in the spec to avoid
> duplicating the effort (and bugs).
+1
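On the wrap counter handling above: combining your two points, I think
the logic would look roughly like this (untested sketch, just to
confirm I've understood your suggestion):

	used_idx = vq->last_used_idx + bufs;
	wrap_counter = vq->wrap_counter;

	if (used_idx >= vq->vring_packed.num) {
		/* The event point wraps just like next_avail_idx did,
		 * so the avail-side wrap counter already matches. */
		used_idx -= vq->vring_packed.num;
	} else if (vq->next_avail_idx < used_idx) {
		/* next_avail_idx has wrapped but the event point has
		 * not, so flip back to the lap of the event point. */
		wrap_counter ^= 1;
	}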
Best regards,
Tiwei Bie