Message-ID: <bd16246e41fba73e84ceeec5dcc33fcf7c224c5c.camel@nvidia.com>
Date: Wed, 26 Jun 2024 09:28:15 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: "xuanzhuo@...ux.alibaba.com" <xuanzhuo@...ux.alibaba.com>, Tariq Toukan
<tariqt@...dia.com>, "eperezma@...hat.com" <eperezma@...hat.com>,
"yanjun.zhu@...ux.dev" <yanjun.zhu@...ux.dev>, "si-wei.liu@...cle.com"
<si-wei.liu@...cle.com>, "mst@...hat.com" <mst@...hat.com>,
"jasowang@...hat.com" <jasowang@...hat.com>, Saeed Mahameed
<saeedm@...dia.com>, "leon@...nel.org" <leon@...nel.org>
CC: Cosmin Ratiu <cratiu@...dia.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "virtualization@...ts.linux.dev"
<virtualization@...ts.linux.dev>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>
Subject: Re: [PATCH vhost 18/23] vdpa/mlx5: Forward error in suspend/resume
device
On Sun, 2024-06-23 at 19:19 +0800, Zhu Yanjun wrote:
> On 2024/6/17 23:07, Dragos Tatulea wrote:
> > Start using the suspend/resume_vq() error return codes previously added.
> >
> > Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
> > Reviewed-by: Cosmin Ratiu <cratiu@...dia.com>
> > ---
> > drivers/vdpa/mlx5/net/mlx5_vnet.c | 12 ++++++++----
> > 1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > index f5d5b25cdb01..0e1c1b7ff297 100644
> > --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> > @@ -3436,22 +3436,25 @@ static int mlx5_vdpa_suspend(struct vdpa_device *vdev)
> > {
> > struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
> > + int err;
>
> Reverse Christmas Tree?
I would have fixed this if the declaration had been part of the patch, but it isn't.
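
For reference, "reverse Christmas tree" is the netdev convention of ordering
local variable declarations from the longest line to the shortest, except
where an initializer depends on an earlier variable (as ndev depends on mvdev
here). A minimal standalone sketch of the ordering rule, using hypothetical
names that are not from this driver:

/* Illustrative only: declarations ordered longest line first, shortest last. */
#include <stdio.h>
#include <string.h>

static int count_words(const char *sentence)
{
	const char *delimiters = " \t\n";	/* longest declaration line first */
	char buf[128];				/* then progressively shorter */
	char *tok;
	int n = 0;				/* shortest last */

	/* Work on a local copy because strtok() modifies its input. */
	strncpy(buf, sentence, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';

	for (tok = strtok(buf, delimiters); tok; tok = strtok(NULL, delimiters))
		n++;

	return n;
}

int main(void)
{
	printf("%d\n", count_words("forward errors from suspend and resume"));
	return 0;
}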
>
> Reviewed-by: Zhu Yanjun <yanjun.zhu@...ux.dev>
>
Thanks!
> Zhu Yanjun
> >
> > mlx5_vdpa_info(mvdev, "suspending device\n");
> >
> > down_write(&ndev->reslock);
> > unregister_link_notifier(ndev);
> > - suspend_vqs(ndev);
> > + err = suspend_vqs(ndev);
> > mlx5_vdpa_cvq_suspend(mvdev);
> > mvdev->suspended = true;
> > up_write(&ndev->reslock);
> > - return 0;
> > +
> > + return err;
> > }
> >
> > static int mlx5_vdpa_resume(struct vdpa_device *vdev)
> > {
> > struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
> > struct mlx5_vdpa_net *ndev;
> > + int err;
> >
> > ndev = to_mlx5_vdpa_ndev(mvdev);
> >
> > @@ -3459,10 +3462,11 @@ static int mlx5_vdpa_resume(struct vdpa_device *vdev)
> >
> > down_write(&ndev->reslock);
> > mvdev->suspended = false;
> > - resume_vqs(ndev);
> > + err = resume_vqs(ndev);
> > register_link_notifier(ndev);
> > up_write(&ndev->reslock);
> > - return 0;
> > +
> > + return err;
> > }
> >
> > static int mlx5_set_group_asid(struct vdpa_device *vdev, u32 group,
> >
>