Message-ID: <20240319182125.GA3121@willie-the-truck>
Date: Tue, 19 Mar 2024 18:21:25 +0000
From: Will Deacon <will@...nel.org>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Gavin Shan <gshan@...hat.com>, virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org, jasowang@...hat.com,
xuanzhuo@...ux.alibaba.com, yihyu@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH] virtio_ring: Fix the stale index in available ring
On Tue, Mar 19, 2024 at 03:36:31AM -0400, Michael S. Tsirkin wrote:
> On Mon, Mar 18, 2024 at 04:59:24PM +0000, Will Deacon wrote:
> > On Thu, Mar 14, 2024 at 05:49:23PM +1000, Gavin Shan wrote:
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > index 49299b1f9ec7..7d852811c912 100644
> > > --- a/drivers/virtio/virtio_ring.c
> > > +++ b/drivers/virtio/virtio_ring.c
> > > @@ -687,9 +687,15 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> > >  	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > >  	vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> > > 
> > > -	/* Descriptors and available array need to be set before we expose the
> > > -	 * new available array entries. */
> > > -	virtio_wmb(vq->weak_barriers);
> > > +	/*
> > > +	 * Descriptors and available array need to be set before we expose
> > > +	 * the new available array entries. virtio_wmb() should be enough
> > > +	 * to ensure the ordering in theory. However, a stronger barrier
> > > +	 * is needed by ARM64. Otherwise, stale data can be observed by
> > > +	 * the host (vhost). A stronger barrier should work for other
> > > +	 * architectures, but a performance loss is expected.
> > > +	 */
> > > +	virtio_mb(false);
> > >  	vq->split.avail_idx_shadow++;
> > >  	vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > >  						vq->split.avail_idx_shadow);
> >
> > Replacing a DMB with a DSB is _very_ unlikely to be the correct solution
> > here, especially when ordering accesses to coherent memory.
> >
> > In practice, either the larger timing difference from the DSB or the
> > fact that you're going from a store->store barrier to a full barrier is
> > what makes things "work" for you. Have you tried, for example, a full
> > DMB (e.g. via __smp_mb())?
> >
> > We definitely shouldn't take changes like this without a proper
> > explanation of what is going on.
>
> Just making sure: so on this system, how do smp_wmb() and wmb() differ?
> smp_wmb() is normally for synchronizing with a kernel running on another
> CPU, and we are doing something unusual in virtio when we use it to
> synchronize with the host as opposed to another guest CPU - e.g.
> CONFIG_SMP is special-cased because of this:
>
> #define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0)
>
> Note that this is __smp_wmb(), not smp_wmb(), which would be a NOP on UP.
I think that should be fine (as long as the NOP is a barrier() for the
compiler).
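
For reference, if I'm reading include/linux/virtio_ring.h correctly, the
helper in question boils down to something like this (minus the
documentation):

	static inline void virtio_wmb(bool weak_barriers)
	{
		if (weak_barriers)
			virt_wmb();
		else
			wmb();
	}

so with weak_barriers set we always end up at __smp_wmb() via virt_wmb(),
regardless of CONFIG_SMP.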
wmb() uses a DSB, but that is only really relevant to endpoint ordering
with real I/O devices, e.g. if for some reason you need to guarantee
that a write has made it all the way to the device before proceeding.
Even then you're slightly at the mercy of the memory type and the
interconnect not giving an early acknowledgement, so the extra ordering
is rarely needed in practice and we don't even provide it for our I/O
accessors (e.g. writel() and readl()).
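
For the arm64 mappings (from arch/arm64/include/asm/barrier.h, modulo
instrumentation and the dsb()/dmb() wrapper macros), we have roughly:

	#define mb()		dsb(sy)
	#define wmb()		dsb(st)
	#define __smp_mb()	dmb(ish)
	#define __smp_wmb()	dmb(ishst)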
So for virtio between two CPUs using coherent memory, DSB is a red
herring.
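
To be concrete, the experiment I'm suggesting is just a debug hack on top
of Gavin's patch, along the lines of (not a fix!):

	-	virtio_mb(false);
	+	/* Debug only: full DMB (ISH) instead of the DSB-backed mb() */
	+	__smp_mb();

If that also makes the problem disappear, then it's the barrier strength
rather than the DSB that matters here.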
Will