Message-ID: <a39f57190a46497e816eefa6b649b583@huawei.com>
Date: Thu, 19 Dec 2024 10:02:05 +0000
From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@...wei.com>
To: liulongfang <liulongfang@...wei.com>, "alex.williamson@...hat.com"
<alex.williamson@...hat.com>, "jgg@...dia.com" <jgg@...dia.com>, "Jonathan
Cameron" <jonathan.cameron@...wei.com>
CC: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linuxarm@...neuler.org" <linuxarm@...neuler.org>
Subject: RE: [PATCH v2 5/5] hisi_acc_vfio_pci: bugfix live migration function
without VF device driver
> -----Original Message-----
> From: liulongfang <liulongfang@...wei.com>
> Sent: Thursday, December 19, 2024 9:18 AM
> To: alex.williamson@...hat.com; jgg@...dia.com; Shameerali Kolothum
> Thodi <shameerali.kolothum.thodi@...wei.com>; Jonathan Cameron
> <jonathan.cameron@...wei.com>
> Cc: kvm@...r.kernel.org; linux-kernel@...r.kernel.org;
> linuxarm@...neuler.org; liulongfang <liulongfang@...wei.com>
> Subject: [PATCH v2 5/5] hisi_acc_vfio_pci: bugfix live migration function
> without VF device driver
>
> If the VF device driver is not loaded in the Guest OS and device
> data migration is performed anyway, the migrated data addresses
> will be NULL. The live migration recovery operation on the
> destination side will then dereference a NULL address, causing
> access errors.
>
> Therefore, skip device data migration when live migrating a VM
> whose VF device has no driver loaded. In addition, when the queue
> address data obtained on the destination side is empty, do not
> perform device queue recovery.
>
> Signed-off-by: Longfang Liu <liulongfang@...wei.com>
Why doesn't this need a Fixes tag?
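
(For reference, the format would be:

Fixes: 123456789abc ("subject of the commit that introduced this path")

with the placeholder hash and subject above replaced by the actual
offending commit.)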
> ---
> drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> index 8d9e07ebf4fd..9a5f7e9bc695 100644
> --- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> +++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> @@ -436,6 +436,7 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
>  				struct acc_vf_data *vf_data)
>  {
>  	struct hisi_qm *pf_qm = hisi_acc_vdev->pf_qm;
> +	struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm;
>  	struct device *dev = &pf_qm->pdev->dev;
>  	int vf_id = hisi_acc_vdev->vf_id;
>  	int ret;
> @@ -460,6 +461,13 @@ static int vf_qm_get_match_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
>  		return ret;
>  	}
>
> +	/* Get VF driver insmod state */
> +	ret = qm_read_regs(vf_qm, QM_VF_STATE, &vf_data->vf_qm_state, 1);
> +	if (ret) {
> +		dev_err(dev, "failed to read QM_VF_STATE!\n");
> +		return ret;
> +	}
> +
>  	return 0;
>  }
>
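Also, once vf_qm_state is captured in the hunk above, I'd expect the
save path to key off it so that a VF whose guest driver was never
loaded skips the device data altogether. Roughly (a sketch only,
reusing the vf_qm_state name from this hunk and assuming QM_READY is
the value the guest QM driver writes to QM_VF_STATE once its
configuration is complete):

	/*
	 * Sketch: the guest VF driver never brought the QM up, so
	 * there is no device state worth saving beyond the match data.
	 */
	if (vf_data->vf_qm_state != QM_READY)
		return 0;
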
> @@ -499,6 +507,12 @@ static int vf_qm_load_data(struct hisi_acc_vf_core_device *hisi_acc_vdev,
>  	qm->qp_base = vf_data->qp_base;
>  	qm->qp_num = vf_data->qp_num;
>
> +	if (!vf_data->eqe_dma || !vf_data->aeqe_dma ||
> +	    !vf_data->sqc_dma || !vf_data->cqc_dma) {
> +		dev_err(dev, "resume dma addr is NULL!\n");
> +		return -EINVAL;
> +	}
> +
So is this to cover the corner case where the Guest has loaded the driver
(QM_READY set) but has not yet configured the DMA addresses? When would
that happen? I thought the guest sets QM_READY only after all the
configuration is done.
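
If that's the case, a state check might be both sufficient and clearer
here than testing each address individually. Something like (a sketch
only, reusing the vf_qm_state/QM_READY names from this series):

	/* Sketch: guest VF driver was never loaded, nothing to restore */
	if (vf_data->vf_qm_state != QM_READY)
		return 0;
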
Thanks,
Shameer