Message-ID: <20210530082646.GA120333@mtl-vdi-166.wap.labs.mlnx>
Date: Sun, 30 May 2021 11:26:46 +0300
From: Eli Cohen <elic@...dia.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: <jasowang@...hat.com>, <virtualization@...ts.linux-foundation.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] vdpa/mlx5: Fix umem sizes assignments on VQ create
On Sun, May 30, 2021 at 04:19:01AM -0400, Michael S. Tsirkin wrote:
> On Sun, May 30, 2021 at 11:15:36AM +0300, Eli Cohen wrote:
> > On Sun, May 30, 2021 at 04:05:16AM -0400, Michael S. Tsirkin wrote:
> > > On Sun, May 30, 2021 at 09:31:28AM +0300, Eli Cohen wrote:
> > > > Fix copy paste bug assigning umem1 size to umem2 and umem3.
> > > >
> > > > Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices")
> > > > Signed-off-by: Eli Cohen <elic@...dia.com>
> > >
> > > could you clarify the impact of the bug please?
> > >
> >
> > It leads to a firmware failure to create the virtqueue resources when
> > you try to use a 1:1 mapping MR. This kind of usage is exercised by the
> > virtio_vdpa support I sent earlier.
>
> OK pls include this info in the commit log.
OK
> And is 1:1 the only case where
> sizes differ? Is it true that in other cases sizes are all the same?
>
The sizes are calculated based on firmware-published parameters and a
formula provided by the spec. They do differ, but it so happened that
size1 was larger than size2 and size3, so we did not see failures until
now.
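
For illustration, the per-umem size derivation in mlx5_vnet.c follows
the spec formula umem_size = param_a * queue_size + param_b, with the
parameters read from the device capabilities. Roughly (sketched from
memory, so the exact helper and capability field names may differ):

	/* Sketch: derive each umem size from firmware-published
	 * parameters per the spec formula:
	 *   umem_i_size = param_a * queue_size + param_b
	 */
	static void set_umem_size(struct mlx5_vdpa_net *ndev,
				  struct mlx5_vdpa_virtqueue *mvq, int num,
				  struct mlx5_vdpa_umem **umemp)
	{
		struct mlx5_core_dev *mdev = ndev->mvdev.mdev;
		int p_a;
		int p_b;

		switch (num) {
		case 1:
			p_a = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_1_buffer_param_a);
			p_b = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_1_buffer_param_b);
			*umemp = &mvq->umem1;
			break;
		case 2:
			p_a = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_2_buffer_param_a);
			p_b = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_2_buffer_param_b);
			*umemp = &mvq->umem2;
			break;
		case 3:
			p_a = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_3_buffer_param_a);
			p_b = MLX5_CAP_DEV_VDPA_EMULATION(mdev, umem_3_buffer_param_b);
			*umemp = &mvq->umem3;
			break;
		}
		/* Each umem ends up with its own size; the create_virtqueue
		 * bug below wrote umem1's size into all three fields.
		 */
		(*umemp)->size = p_a * mvq->num_ent + p_b;
	}
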
> > > > ---
> > > > drivers/vdpa/mlx5/net/mlx5_vnet.c | 4 ++--
> > > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > index 189e4385df40..53312f0460ad 100644
> > > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > > @@ -828,9 +828,9 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
> > > > MLX5_SET(virtio_q, vq_ctx, umem_1_id, mvq->umem1.id);
> > > > MLX5_SET(virtio_q, vq_ctx, umem_1_size, mvq->umem1.size);
> > > > MLX5_SET(virtio_q, vq_ctx, umem_2_id, mvq->umem2.id);
> > > > - MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem1.size);
> > > > + MLX5_SET(virtio_q, vq_ctx, umem_2_size, mvq->umem2.size);
> > > > MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
> > > > - MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem1.size);
> > > > + MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
> > > > MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
> > > > if (MLX5_CAP_DEV_VDPA_EMULATION(ndev->mvdev.mdev, eth_frame_offload_type))
> > > > MLX5_SET(virtio_q, vq_ctx, virtio_version_1_0, 1);
> > > > --
> > > > 2.31.1
> > >
>