Message-ID: <20201006064251.GA245562@mtl-vdi-166.wap.labs.mlnx>
Date:   Tue, 6 Oct 2020 09:42:51 +0300
From:   Eli Cohen <elic@...dia.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
CC:     Si-Wei Liu <siwliu.kernel@...il.com>, <jasowang@...hat.com>,
        <netdev@...r.kernel.org>, <joao.m.martins@...cle.com>,
        <boris.ostrovsky@...cle.com>, <linux-kernel@...r.kernel.org>,
        <virtualization@...ts.linux-foundation.org>,
        Si-Wei Liu <si-wei.liu@...cle.com>
Subject: Re: [PATCH] vdpa/mlx5: should keep avail_index despite device status

On Tue, Oct 06, 2020 at 02:22:15AM -0400, Michael S. Tsirkin wrote:

Acked-by: Eli Cohen <elic@...dia.com>

> On Fri, Oct 02, 2020 at 01:17:00PM -0700, Si-Wei Liu wrote:
> > + Eli.
> > 
> > On Thu, Oct 1, 2020 at 2:02 PM Si-Wei Liu <si-wei.liu@...cle.com> wrote:
> > >
> > > A VM with mlx5 vDPA has below warnings while being reset:
> > >
> > > vhost VQ 0 ring restore failed: -1: Resource temporarily unavailable (11)
> > > vhost VQ 1 ring restore failed: -1: Resource temporarily unavailable (11)
> > >
> > > We should allow userspace emulating the virtio device to
> > > read the vq's avail_index regardless of the vDPA device
> > > status. Save the index that was last seen when the virtq was
> > > stopped, so that userspace doesn't complain.
> > >
> > > Signed-off-by: Si-Wei Liu <si-wei.liu@...cle.com>
> 
> Eli can you review this pls? I need to send a pull request to Linux by
> tomorrow - do we want to include this?
> 
> > > ---
> > >  drivers/vdpa/mlx5/net/mlx5_vnet.c | 20 ++++++++++++++------
> > >  1 file changed, 14 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > index 70676a6..74264e59 100644
> > > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > > @@ -1133,15 +1133,17 @@ static void suspend_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
> > >         if (!mvq->initialized)
> > >                 return;
> > >
> > > -       if (query_virtqueue(ndev, mvq, &attr)) {
> > > -               mlx5_vdpa_warn(&ndev->mvdev, "failed to query virtqueue\n");
> > > -               return;
> > > -       }
> > >         if (mvq->fw_state != MLX5_VIRTIO_NET_Q_OBJECT_STATE_RDY)
> > >                 return;
> > >
> > >         if (modify_virtqueue(ndev, mvq, MLX5_VIRTIO_NET_Q_OBJECT_STATE_SUSPEND))
> > >                 mlx5_vdpa_warn(&ndev->mvdev, "modify to suspend failed\n");
> > > +
> > > +       if (query_virtqueue(ndev, mvq, &attr)) {
> > > +               mlx5_vdpa_warn(&ndev->mvdev, "failed to query virtqueue\n");
> > > +               return;
> > > +       }
> > > +       mvq->avail_idx = attr.available_index;
> > >  }
> > >
> > >  static void suspend_vqs(struct mlx5_vdpa_net *ndev)
> > > @@ -1411,8 +1413,14 @@ static int mlx5_vdpa_get_vq_state(struct vdpa_device *vdev, u16 idx, struct vdpa
> > >         struct mlx5_virtq_attr attr;
> > >         int err;
> > >
> > > -       if (!mvq->initialized)
> > > -               return -EAGAIN;
> > > +       /* If the virtq object was destroyed, use the value saved
> > > +        * at the last suspend_vq(). This caters for userspace
> > > +        * that needs to read the index after the vq is stopped.
> > > +        */
> > > +       if (!mvq->initialized) {
> > > +               state->avail_index = mvq->avail_idx;
> > > +               return 0;
> > > +       }
> > >
> > >         err = query_virtqueue(ndev, mvq, &attr);
> > >         if (err) {
> > > --
> > > 1.8.3.1
> > >
> 
