Message-ID: <e582f248-28e9-4d0b-aa5d-26b82cd96a68@redhat.com>
Date: Tue, 19 Mar 2024 18:08:27 +1000
From: Gavin Shan <gshan@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Will Deacon <will@...nel.org>, virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org, jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com, yihyu@...hat.com, shan.gavin@...il.com,
linux-arm-kernel@...ts.infradead.org,
Catalin Marinas <catalin.marinas@....com>, mochs@...dia.com
Subject: Re: [PATCH] virtio_ring: Fix the stale index in available ring
On 3/19/24 17:09, Michael S. Tsirkin wrote:
> On Tue, Mar 19, 2024 at 04:49:50PM +1000, Gavin Shan wrote:
>>
>> On 3/19/24 16:43, Michael S. Tsirkin wrote:
>>> On Tue, Mar 19, 2024 at 04:38:49PM +1000, Gavin Shan wrote:
>>>> On 3/19/24 16:09, Michael S. Tsirkin wrote:
>>>>
>>>>>>>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>>>>>>>> index 49299b1f9ec7..7d852811c912 100644
>>>>>>>> --- a/drivers/virtio/virtio_ring.c
>>>>>>>> +++ b/drivers/virtio/virtio_ring.c
>>>>>>>> @@ -687,9 +687,15 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>>>>>>>> avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>>>>>>>> vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>>>>>>>> - /* Descriptors and available array need to be set before we expose the
>>>>>>>> - * new available array entries. */
>>>>>>>> - virtio_wmb(vq->weak_barriers);
>>>>>>>> + /*
>>>>>>>> + * Descriptors and available array need to be set before we expose
>>>>>>>> + * the new available array entries. virtio_wmb() should be enough
>>>>>>>> + * to ensure the order theoretically. However, a stronger barrier
>>>>>>>> + * is needed by ARM64. Otherwise, the stale data can be observed
>>>>>>>> + * by the host (vhost). A stronger barrier should work for other
>>>>>>>> + * architectures, but performance loss is expected.
>>>>>>>> + */
>>>>>>>> + virtio_mb(false);
>>>>>>>> vq->split.avail_idx_shadow++;
>>>>>>>> vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
>>>>>>>> vq->split.avail_idx_shadow);
>>>>>>>
>>>>>>> Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
>>>>>>> here, especially when ordering accesses to coherent memory.
>>>>>>>
>>>>>>> In practice, either the larger timing difference from the DSB or the fact
>>>>>>> that you're going from a Store->Store barrier to a full barrier is what
>>>>>>> makes things "work" for you. Have you tried, for example, a DMB SY
>>>>>>> (e.g. via __smp_mb())?
>>>>>>>
>>>>>>> We definitely shouldn't take changes like this without a proper
>>>>>>> explanation of what is going on.
>>>>>>>
>>>>>>
>>>>>> Thanks for your comments, Will.
>>>>>>
>>>>>> Yes, DMB should work for us. However, it seems this instruction has issues
>>>>>> on NVIDIA's Grace Hopper. It's hard for me to understand how DMB and DSB
>>>>>> work at the hardware level. I agree it's not the right solution to replace
>>>>>> DMB with DSB before we fully understand the root cause.
>>>>>>
>>>>>> I tried the possible replacements below. __smp_mb() avoids the issue, as
>>>>>> __mb() does. __ndelay(10) avoids the issue, but __ndelay(9) doesn't.
>>>>>>
>>>>>> static inline int virtqueue_add_split(struct virtqueue *_vq, ...)
>>>>>> {
>>>>>> :
>>>>>> /* Put entry in available array (but don't update avail->idx until they
>>>>>> * do sync). */
>>>>>> avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>>>>>> vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>>>>>>
>>>>>> /* Descriptors and available array need to be set before we expose the
>>>>>> * new available array entries. */
>>>>>> // Broken: virtio_wmb(vq->weak_barriers);
>>>>>> // Broken: __dma_mb();
>>>>>> // Work: __mb();
>>>>>> // Work: __smp_mb();
>>>>>> // Work: __ndelay(100);
>>>>>> // Work: __ndelay(10);
>>>>>> // Broken: __ndelay(9);
>>>>>>
>>>>>> vq->split.avail_idx_shadow++;
>>>>>> vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
>>>>>> vq->split.avail_idx_shadow);
>>>>>
>>>>> What if you stick __ndelay here?
>>>>>
>>>>
>>>> /* Put entry in available array (but don't update avail->idx until they
>>>> * do sync). */
>>>> avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>>>> vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>>>>
>>>> /* Descriptors and available array need to be set before we expose the
>>>> * new available array entries. */
>>>> virtio_wmb(vq->weak_barriers);
>>>> vq->split.avail_idx_shadow++;
>>>> vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
>>>> vq->split.avail_idx_shadow);
>>>> /* Try __ndelay(x) here as Michael suggested
>>>> *
>>>> * Work: __ndelay(200); possibly makes it hard to reproduce
>>>> * Broken: __ndelay(100);
>>>> * Broken: __ndelay(20);
>>>> * Broken: __ndelay(10);
>>>> */
>>>> __ndelay(200);
>>>
>>> So we see that just changing the timing masks the race.
>>> What are you using on the host side? vhost or qemu?
>>>
>>
>> __ndelay(200) may make the issue harder to reproduce, as I understand it.
>> A longer delay here gives vhost more time, reducing the race.
>>
>> The issue is only reproducible when vhost is turned on; otherwise we
>> aren't able to hit it at all. The QEMU options used:
>>
>> -netdev tap,id=vnet0,vhost=true,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
>> -device virtio-net-pci,bus=pcie.8,netdev=vnet0,mac=52:54:00:f1:26:b0
>
>
> Given it's vhost, it's also possible that the issue is on the host side.
> I wonder what happens if we stick a delay or a stronger barrier
> in vhost.c - either here:
>
> /* Make sure buffer is written before we update index. */
> smp_wmb();
>
>
> or here:
>
> /* Only get avail ring entries after they have been
> * exposed by guest.
> */
> smp_rmb();
>
> ?
>
It's possible. However, I can still hit the issue after both of them are
replaced with '__mb()'. So the issue seems to be on the guest side, where the
written data isn't observed in time by the CPU on the far end (the vhost
worker). The change I tested is in the diff below.
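
For context, a sketch of what the barrier macros discussed in this thread
are believed to expand to on arm64 (assumed from
arch/arm64/include/asm/barrier.h; exact definitions vary by kernel version):

  virtio_wmb(true)  ->  __smp_wmb()  ->  dmb(ishst)  /* Store->Store, inner shareable */
  smp_rmb()         ->  dmb(ishld)   /* Load->Load/Store, inner shareable */
  __smp_mb()        ->  dmb(ish)     /* full barrier, inner shareable */
  __dma_mb()        ->  dmb(osh)     /* full barrier, outer shareable */
  __mb()            ->  dsb(sy)      /* full system barrier */
  virtio_mb(false)  ->  mb()         ->  dsb(sy)

By that mapping, the guest-side 'Work' cases earlier in the thread were
dsb(sy), dmb(ish) and the delays, while the Store->Store and outer-shareable
DMB variants were 'Broken'.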
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 045f666b4f12..327b68d21b02 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2529,7 +2529,8 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
/* Only get avail ring entries after they have been
* exposed by guest.
*/
- smp_rmb();
+ // smp_rmb();
+ __mb();
}
/* Grab the next descriptor number they're advertising, and increment
@@ -2703,7 +2704,9 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
r = __vhost_add_used_n(vq, heads, count);
/* Make sure buffer is written before we update index. */
- smp_wmb();
+ // smp_wmb();
+ __mb();
+
From the guest:
[ 14.102608] virtio_net virtio0: output.0:id 80 is not a head!
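
That error is consistent with vhost picking up a stale entry from the
available ring. A simplified sketch of the ordering requirement under
discussion (the protocol only, not the exact code paths):

  /*
   * Guest (virtqueue_add_split)        Host (vhost worker)
   *
   * 1. write descriptor entries        a. read avail->idx
   * 2. write avail->ring[avail]        b. smp_rmb()
   * 3. virtio_wmb()                    c. read avail->ring[...]
   * 4. write avail->idx                d. use head to fetch descriptors
   *
   * If the store in (2) becomes visible to the host after the store in
   * (4), the host reads a stale head, completes the wrong descriptor,
   * and the guest later reports "id X is not a head!" when the bogus
   * id comes back through the used ring.
   */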
>>>>
>>>>>
>>>>>> vq->num_added++;
>>>>>>
>>>>>> pr_debug("Added buffer head %i to %p\n", head, vq);
>>>>>> END_USE(vq);
>>>>>> :
>>>>>> }
>>>>>>
>>>>>> I also tried to measure the time consumed by various barrier-related
>>>>>> instructions using ktime_get_ns(), which itself likely accounts for most
>>>>>> of the measured time. __smp_mb() is slower than __smp_wmb() but faster
>>>>>> than __mb().
>>>>>>
>>>>>> Instruction      Range of measured time (ns)
>>>>>> ---------------------------------------------
>>>>>> __smp_wmb()      [32, 1128032]
>>>>>> __smp_mb()       [32, 1160096]
>>>>>> __mb()           [32, 1162496]
>>>>>>
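
For reference, a minimal sketch of the kind of instrumentation assumed
above (hypothetical, not the exact code used; the ktime_get_ns() calls
themselves dominate at this granularity, so only relative differences
over many samples are meaningful):

  #include <linux/ktime.h>
  #include <asm/barrier.h>

  /* Time a single barrier; swap in __smp_wmb() or __mb() to compare. */
  static u64 time_one_barrier(void)
  {
          u64 start = ktime_get_ns();

          __smp_mb();

          return ktime_get_ns() - start;
  }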
Thanks,
Gavin