Message-ID: <20201209093836.GA62204@mtl-vdi-166.wap.labs.mlnx>
Date: Wed, 9 Dec 2020 11:38:36 +0200
From: Eli Cohen <elic@...dia.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: <jasowang@...hat.com>, <virtualization@...ts.linux-foundation.org>,
<linux-kernel@...r.kernel.org>, <lulu@...hat.com>
Subject: Re: [PATCH] vdpa/mlx5: Use write memory barrier after updating CQ
index
On Wed, Dec 09, 2020 at 03:05:42AM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 09, 2020 at 08:58:46AM +0200, Eli Cohen wrote:
> > On Wed, Dec 09, 2020 at 01:46:22AM -0500, Michael S. Tsirkin wrote:
> > > On Wed, Dec 09, 2020 at 08:02:30AM +0200, Eli Cohen wrote:
> > > > On Tue, Dec 08, 2020 at 04:45:04PM -0500, Michael S. Tsirkin wrote:
> > > > > On Sun, Dec 06, 2020 at 12:57:19PM +0200, Eli Cohen wrote:
> > > > > > Make sure to put a write memory barrier after updating the CQ consumer
> > > > > > index so the hardware knows that there are available CQE slots in the queue.
> > > > > >
> > > > > > Failure to do this can cause the RX doorbell record to be updated before
> > > > > > the CQ consumer index, resulting in CQ overrun.
> > > > > >
> > > > > > Change-Id: Ib0ae4c118cce524c9f492b32569179f3c1f04cc1
> > > > > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > > > > > Signed-off-by: Eli Cohen <elic@...dia.com>
> > > > >
> > > > > Aren't both memory writes?
> > > >
> > > > Not sure what exactly you mean here.
> > >
> > > Both updates are CPU writes into RAM that hardware then reads
> > > using DMA.
> > >
> >
> > You mean why I did not put a memory barrier right after updating the
> > receive doorbell record?
>
> Sorry about being unclear. I just tried to give justification for why
> dma_wmb seems more appropriate than wmb here. If you need to
> order memory writes wrt writes to card, that is different, but generally
> writeX and friends will handle the ordering for you, except when
> using relaxed memory mappings - then wmb is generally necessary.
>
Bear in mind, we're writing to memory (not io memory). In this case, we
want this write to be visible to the DMA device.
https://www.kernel.org/doc/Documentation/memory-barriers.txt gives a
similar example using dma_wmb() to flush updates and make them visible
to the hardware before notifying the hardware to come and inspect this
memory.
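To make it concrete, this is roughly the pattern I have in mind (just a
sketch; the struct and function names below are made up for illustration,
not the actual driver code):

	struct cq_doorbell { __be32 consumer_index; };
	struct rq_doorbell { __be16 receive_counter; };

	static void update_ci_and_ring_rx(struct cq_doorbell *cq_db, u32 cons_index,
					  struct rq_doorbell *rq_db, u16 head)
	{
		/* Both doorbell records live in coherent memory that the
		 * device reads with DMA.
		 */
		cq_db->consumer_index = cpu_to_be32(cons_index & 0xffffff);

		/* Order the consumer index update before the RX doorbell
		 * record update so the device never observes the new
		 * doorbell value together with a stale consumer index.
		 */
		dma_wmb();

		rq_db->receive_counter = cpu_to_be16(head);
	}
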
> > I thought about this and I think it is not required. Suppose it takes a
> > very long time till the hardware can actually see this update. The worst
> > effect would be that the hardware will drop received packets if it
> > sees none available due to the delayed update. Eventually it will see
> > the update and will continue working.
> >
> > If I put a memory barrier, I add some delay waiting for the CPU to flush
> > the write before continuing. I tried both options while checking packet
> > rate and couldn't see a noticeable difference in either case.
>
>
> makes sense.
>
> > > > > And given that, isn't dma_wmb() sufficient here?
> > > >
> > > > I agree that dma_wmb() is more appropriate here.
> > > >
> > > > >
> > > > >
> > > > > > ---
> > > > > > drivers/vdpa/mlx5/net/mlx5_vnet.c | 5 +++++
> > > > > > 1 file changed, 5 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > index 1f4089c6f9d7..295f46eea2a5 100644
> > > > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > > > @@ -478,6 +478,11 @@ static int mlx5_vdpa_poll_one(struct mlx5_vdpa_cq *vcq)
> > > > > > static void mlx5_vdpa_handle_completions(struct mlx5_vdpa_virtqueue *mvq, int num)
> > > > > > {
> > > > > > mlx5_cq_set_ci(&mvq->cq.mcq);
> > > > > > +
> > > > > > + /* make sure CQ consumer update is visible to the hardware before updating
> > > > > > + * RX doorbell record.
> > > > > > + */
> > > > > > + wmb();
> > > > > > rx_post(&mvq->vqqp, num);
> > > > > > if (mvq->event_cb.callback)
> > > > > > mvq->event_cb.callback(mvq->event_cb.private);
> > > > > > --
> > > > > > 2.27.0
> > > > >
> > >
>
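
FWIW, with dma_wmb() the hunk above would look roughly like this (just a
sketch, not a resend of the patch):

	mlx5_cq_set_ci(&mvq->cq.mcq);

	/* make sure CQ consumer update is visible to the hardware before
	 * updating RX doorbell record.
	 */
	dma_wmb();
	rx_post(&mvq->vqqp, num);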