Message-ID: <CACGkMEsvva4xWJpBLOzxmst=BzE7HRWuTpERVEt78oSGG_=5RQ@mail.gmail.com>
Date: Tue, 9 Apr 2024 11:53:00 +0800
From: Jason Wang <jasowang@...hat.com>
To: lyx634449800 <yuxue.liu@...uarmicro.com>
Cc: mst@...hat.com, angus.chen@...uarmicro.com, virtualization@...ts.linux.dev,
xuanzhuo@...ux.alibaba.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] vp_vdpa: fix the method of calculating vectors
On Tue, Apr 9, 2024 at 9:49 AM lyx634449800 <yuxue.liu@...uarmicro.com> wrote:
>
> When there is a ctlq and it doesn't require interrupt
> callbacks, the original method of calculating vectors
> wastes hardware MSI or MSI-X resources as well as system
> IRQ resources.
>
> When conducting performance testing with testpmd in the
> guest OS, we found that performance was lower than when
> passing the device through directly with vfio-pci.
>
> In scenarios where the virtio device in the guest OS does
> not use interrupts, the vdpa driver still configures the
> hardware's MSI-X vectors, so the hardware keeps sending
> interrupts to the host OS. These unnecessary interrupts
> reduce hardware performance and also affect the
> performance of the host OS.
>
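For illustration, the vector-counting rule the patch moves to is roughly
the following (a minimal sketch with stand-in types, not the driver's own
vp_vdpa definitions; the actual logic is in the diff below):

    /* Stand-in type so the sketch is self-contained; the field mirrors
     * vp_vdpa->vring[i].cb.callback used in the patch. */
    struct vring_info {
            void (*callback)(void *data);   /* NULL: queue needs no IRQ */
    };

    /* One MSI-X vector per queue that actually has a callback,
     * plus one vector for the config interrupt. */
    static int count_needed_vectors(const struct vring_info *vring, int queues)
    {
            int i, vectors = 0;

            for (i = 0; i < queues; i++) {
                    if (vring[i].callback)
                            vectors++;
            }

            return vectors + 1;
    }

For example, with two data queues that have callbacks and a control queue
that does not, this requests 3 vectors instead of the previous
queues + 1 = 4.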
> Before modification (interrupt mode):
> 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
> 33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
> 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
> 35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
>
> After modification (interrupt mode):
> 32: 0 0 1 7 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
> 33: 36 0 3 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
> 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-config
>
> Before modification (virtio PMD mode for guest OS):
> 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-0
> 33: 0 0 0 0 PCI-MSI 32769-edge vp-vdpa[0000:00:02.0]-1
> 34: 0 0 0 0 PCI-MSI 32770-edge vp-vdpa[0000:00:02.0]-2
> 35: 0 0 0 0 PCI-MSI 32771-edge vp-vdpa[0000:00:02.0]-config
>
> After modification (virtio PMD mode for guest OS):
> 32: 0 0 0 0 PCI-MSI 32768-edge vp-vdpa[0000:00:02.0]-config
>
> To verify the use of the virtio PMD mode in the guest operating
> system, the following patch needs to be applied to QEMU:
> https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
>
> Signed-off-by: lyx634449800 <yuxue.liu@...uarmicro.com>
> ---
>
> V3: delete unused variables and add validation records
> V2: fix IRQ allocation to scan all queues
>
> drivers/vdpa/virtio_pci/vp_vdpa.c | 35 +++++++++++++++++++------------
> 1 file changed, 22 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> index df5f4a3bccb5..cd3aeb3b8f21 100644
> --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> @@ -160,22 +160,31 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> struct pci_dev *pdev = mdev->pci_dev;
> int i, ret, irq;
> int queues = vp_vdpa->queues;
> - int vectors = queues + 1;
> + int msix_vec, allocated_vectors = 0;
>
> - ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> - if (ret != vectors) {
> + for (i = 0; i < queues; i++) {
> + if (vp_vdpa->vring[i].cb.callback)
> + allocated_vectors++;
> + }
> + allocated_vectors = allocated_vectors + 1;
> +
> + ret = pci_alloc_irq_vectors(pdev, allocated_vectors, allocated_vectors,
> + PCI_IRQ_MSIX);
> + if (ret != allocated_vectors) {
> dev_err(&pdev->dev,
> "vp_vdpa: fail to allocate irq vectors want %d but %d\n",
> - vectors, ret);
> + allocated_vectors, ret);
> return ret;
> }
> -
> - vp_vdpa->vectors = vectors;
> + vp_vdpa->vectors = allocated_vectors;
>
> for (i = 0; i < queues; i++) {
> + if (!vp_vdpa->vring[i].cb.callback)
> + continue;
> +
> snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> - irq = pci_irq_vector(pdev, i);
> + irq = pci_irq_vector(pdev, msix_vec);
> ret = devm_request_irq(&pdev->dev, irq,
> vp_vdpa_vq_handler,
> 0, vp_vdpa->vring[i].msix_name,
> @@ -185,23 +194,23 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> "vp_vdpa: fail to request irq for vq %d\n", i);
> goto err;
> }
> - vp_modern_queue_vector(mdev, i, i);
> + vp_modern_queue_vector(mdev, i, msix_vec);
> vp_vdpa->vring[i].irq = irq;
> + msix_vec++;
> }
>
> snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> - pci_name(pdev));
> - irq = pci_irq_vector(pdev, queues);
> + pci_name(pdev));
> + irq = pci_irq_vector(pdev, msix_vec);
> ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> vp_vdpa->msix_name, vp_vdpa);
> if (ret) {
> dev_err(&pdev->dev,
> - "vp_vdpa: fail to request irq for vq %d\n", i);
> + "vp_vdpa: fail to request irq for config\n");
> goto err;
> }
> - vp_modern_config_vector(mdev, queues);
> + vp_modern_config_vector(mdev, msix_vec);
> vp_vdpa->config_irq = irq;
> -
Unnecessary changes.
Others look good.
Acked-by: Jason Wang <jasowang@...hat.com>
Thanks
> return 0;
> err:
> vp_vdpa_free_irq(vp_vdpa);
> --
> 2.43.0
>