Message-ID:
 <SEYPR06MB67560F9CEE5B24525C06D90AEC112@SEYPR06MB6756.apcprd06.prod.outlook.com>
Date: Tue, 23 Apr 2024 09:25:56 +0000
From: Gavin Liu <gavin.liu@...uarmicro.com>
To: Angus Chen <angus.chen@...uarmicro.com>, "Michael S. Tsirkin"
	<mst@...hat.com>
CC: "jasowang@...hat.com" <jasowang@...hat.com>,
	"virtualization@...ts.linux.dev" <virtualization@...ts.linux.dev>,
	"xuanzhuo@...ux.alibaba.com" <xuanzhuo@...ux.alibaba.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Heng Qi
	<hengqi@...ux.alibaba.com>
Subject:
 Reply: Reply: [PATCH v5] vp_vdpa: don't allocate unused msix vectors

> > When there is a ctlq and it doesn't require interrupt
> > callbacks, the original method of calculating vectors wastes
> > hardware msi or msix resources as well as system IRQ resources.
> >
> > When conducting performance testing using testpmd in the guest OS,
> > it was found that the performance was lower compared to directly
> > using vfio-pci to passthrough the device.
> >
> > In scenarios where the virtio device in the guest OS does not
> > utilize interrupts, the vdpa driver still configures the hardware's
> > msix vector. Therefore, the hardware still sends interrupts to the host OS.
>
> > I just have a question on this part. How come the hardware sends interrupts? Does the guest driver not disable them?
>
>    1: Assuming the guest OS's virtio device is using PMD mode, QEMU sets the call fd to -1.
>    2: On the host side, the vhost_vdpa program sets vp_vdpa->vring[i].cb.callback to an invalid value (NULL).
>    3: Before the modification, the vp_vdpa_request_irq function did not check whether
>       vp_vdpa->vring[i].cb.callback was valid. Instead, it enabled the hardware's MSI-X
>       interrupts based on the number of queues of the device.
>
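>
A minimal sketch of the host-side path that step 2 refers to (paraphrased from the ioctl handling in drivers/vhost/vdpa.c, not a verbatim excerpt; treat names and details as assumptions):

    case VHOST_SET_VRING_CALL:
            if (vq->call_ctx.ctx) {          /* QEMU passed a valid call fd */
                    cb.callback = vhost_vdpa_virtqueue_cb;
                    cb.private = vq;
            } else {                         /* call fd == -1: PMD/poll mode */
                    cb.callback = NULL;
                    cb.private = NULL;
            }
            ops->set_vq_cb(vdpa, idx, &cb);  /* propagated down to vp_vdpa */
            break;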

> So MSIX is enabled but why would it trigger? virtio PMD in poll mode presumably suppresses interrupts after all.

I didn't express the test model clearly. The testing model is as follows:

    ----testpmd----              ----testpmd----
          ^                            ^
          |                            |
          |                            |
          v                            v
    ----vfio pci---              ---vfio pci----
    ---pci device--              ---pci device--
    ----guest os---              ----guest os---

    --virtio device--            ---vfio device--
    ------qemu-------            ------qemu------
          ^                            ^
          |                            |
          |                            |
          v                            v
    ---vhost_vdpa---             ----vfio pci----
    -------------------------host os-------------------------
    ----------------------------hw---------------------------
         VF1                          VF2

After the hardware completes DMA, it will still send an interrupt to the host OS.

 -----Original Message-----
From: Angus Chen <angus.chen@...uarmicro.com>
Sent: April 23, 2024 16:43
To: Michael S. Tsirkin <mst@...hat.com>; Gavin Liu <gavin.liu@...uarmicro.com>
Cc: jasowang@...hat.com; virtualization@...ts.linux.dev; xuanzhuo@...ux.alibaba.com; linux-kernel@...r.kernel.org; Heng Qi <hengqi@...ux.alibaba.com>
Subject: RE: Reply: [PATCH v5] vp_vdpa: don't allocate unused msix vectors


Hi mst.

> -----Original Message-----
> From: Michael S. Tsirkin <mst@...hat.com>
> Sent: Tuesday, April 23, 2024 4:35 PM
> To: Gavin Liu <gavin.liu@...uarmicro.com>
> Cc: jasowang@...hat.com; Angus Chen <angus.chen@...uarmicro.com>; 
> virtualization@...ts.linux.dev; xuanzhuo@...ux.alibaba.com; 
> linux-kernel@...r.kernel.org; Heng Qi <hengqi@...ux.alibaba.com>
> Subject: Re: Reply: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> 
> On Tue, Apr 23, 2024 at 01:39:17AM +0000, Gavin Liu wrote:
> > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > From: Yuxue Liu <yuxue.liu@...uarmicro.com>
> > >
> > > When there is a ctlq and it doesn't require interrupt
> > > callbacks, the original method of calculating vectors wastes
> > > hardware msi or msix resources as well as system IRQ resources.
> > >
> > > When conducting performance testing using testpmd in the guest OS,
> > > it was found that the performance was lower compared to directly
> > > using vfio-pci to passthrough the device.
> > >
> > > In scenarios where the virtio device in the guest OS does not
> > > utilize interrupts, the vdpa driver still configures the
> > > hardware's msix vector. Therefore, the hardware still sends interrupts to the host OS.
> >
> > > I just have a question on this part. How come the hardware sends
> > > interrupts? Does the guest driver not disable them?
> >
> >    1: Assuming the guest OS's virtio device is using PMD mode, QEMU sets the call fd to -1.
> >    2: On the host side, the vhost_vdpa program sets vp_vdpa->vring[i].cb.callback to an invalid value (NULL).
> >    3: Before the modification, the vp_vdpa_request_irq function did not check whether
> >       vp_vdpa->vring[i].cb.callback was valid. Instead, it enabled the hardware's MSI-X
> >       interrupts based on the number of queues of the device.
> >
> 
> So MSIX is enabled but why would it trigger? virtio PMD in poll mode 
> presumably suppresses interrupts after all.
The virtio PMD is in the guest, but on the host side MSI-X is enabled, so the device will trigger interrupts normally. I analysed this bug before, and I think Gavin is right.
Did I make it clear?
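
To spell out why guest-side poll mode does not stop host-side interrupts, a simplified sketch of the two independent mechanisms (illustrative only; the flag comes from the virtio ring uapi, the function from the pre-patch vp_vdpa code):

    /* Guest: the virtio PMD only sets a suppression *hint* in the avail
     * ring; per the virtio spec the device MAY send a notification anyway. */
    avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;

    /* Host: before the patch, vp_vdpa unconditionally bound an MSI-X vector
     * to every queue, so the device always had a valid vector to fire after
     * completing DMA. */
    vp_modern_queue_vector(mdev, i, i);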
> 
> >
> >
> > ----- Original Message -----
> > From: Michael S. Tsirkin <mst@...hat.com>
> > Sent: April 22, 2024 20:09
> > To: Gavin Liu <gavin.liu@...uarmicro.com>
> > Cc: jasowang@...hat.com; Angus Chen <angus.chen@...uarmicro.com>; virtualization@...ts.linux.dev; xuanzhuo@...ux.alibaba.com; linux-kernel@...r.kernel.org; Heng Qi <hengqi@...ux.alibaba.com>
> > Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> >
> > On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > > From: Yuxue Liu <yuxue.liu@...uarmicro.com>
> > >
> > > When there is a ctlq and it doesn't require interrupt
> > > callbacks, the original method of calculating vectors wastes
> > > hardware msi or msix resources as well as system IRQ resources.
> > >
> > > When conducting performance testing using testpmd in the guest OS,
> > > it was found that the performance was lower compared to directly
> > > using vfio-pci to passthrough the device.
> > >
> > > In scenarios where the virtio device in the guest OS does not
> > > utilize interrupts, the vdpa driver still configures the
> > > hardware's msix vector. Therefore, the hardware still sends interrupts to the host OS.
> >
> > I just have a question on this part. How come the hardware sends
> > interrupts? Does the guest driver not disable them?
> >
> > > Because of this unnecessary
> > > action by the hardware, hardware performance decreases, and it
> > > also affects the performance of the host OS.
> > >
> > > Before modification:(interrupt mode)
> > >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > >
> > > After modification:(interrupt mode)
> > >  32:  0  0  1  7   PCI-MSI 32768-edge  vp-vdpa[0000:00:02.0]-0
> > >  33: 36  0  3  0   PCI-MSI 32769-edge  vp-vdpa[0000:00:02.0]-1
> > >  34:  0  0  0  0   PCI-MSI 32770-edge  vp-vdpa[0000:00:02.0]-config
> > >
> > > Before modification:(virtio pmd mode for guest os)
> > >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> > >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> > >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> > >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> > >
> > > After modification:(virtio pmd mode for guest os)
> > >  32: 0  0  0   0   PCI-MSI 32768-edge   vp-vdpa[0000:00:02.0]-config
> > >
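
To make the arithmetic behind these listings explicit (a worked example, not part of the patch): with rx, tx and ctlq all in virtio PMD mode, no vring has a callback, so:

    /* queues = 3 (rx, tx, ctlq), callbacks = 0 in virtio PMD mode */
    old_vectors = queues + 1;     /* 4: one per queue + one for config */
    new_vectors = 1 + callbacks;  /* 1: only the config vector remains */
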
> > > To verify the use of the virtio PMD mode in the guest operating 
> > > system, the following patch needs to be applied to QEMU:
> > > https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
> > >
> > > Signed-off-by: Yuxue Liu <yuxue.liu@...uarmicro.com>
> > > Acked-by: Jason Wang <jasowang@...hat.com>
> > > Reviewed-by: Heng Qi <hengqi@...ux.alibaba.com>
> > > ---
> > > V5: modify the description of the printout when an exception occurs
> > > V4: update the title and assign values to uninitialized variables
> > > V3: delete unused variables and add validation records
> > > V2: fix when allocating IRQs, scan all queues
> > >
> > >  drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
> > >  1 file changed, 16 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > index df5f4a3bccb5..8de0224e9ec2 100644
> > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > @@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > >      struct pci_dev *pdev = mdev->pci_dev;
> > >      int i, ret, irq;
> > >      int queues = vp_vdpa->queues;
> > > -    int vectors = queues + 1;
> > > +    int vectors = 1;
> > > +    int msix_vec = 0;
> > > +
> > > +    for (i = 0; i < queues; i++) {
> > > +            if (vp_vdpa->vring[i].cb.callback)
> > > +                    vectors++;
> > > +    }
> > >
> > >      ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> > >      if (ret != vectors) {
> > > @@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > >      vp_vdpa->vectors = vectors;
> > >
> > >      for (i = 0; i < queues; i++) {
> > > +            if (!vp_vdpa->vring[i].cb.callback)
> > > +                    continue;
> > > +
> > >              snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> > >                      "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> > > -            irq = pci_irq_vector(pdev, i);
> > > +            irq = pci_irq_vector(pdev, msix_vec);
> > >              ret = devm_request_irq(&pdev->dev, irq,
> > >                                     vp_vdpa_vq_handler,
> > >                                     0, vp_vdpa->vring[i].msix_name,
> > > @@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> > >                              "vp_vdpa: fail to request irq for vq %d\n", i);
> > >                      goto err;
> > >              }
> > > -            vp_modern_queue_vector(mdev, i, i);
> > > +            vp_modern_queue_vector(mdev, i, msix_vec);
> > >              vp_vdpa->vring[i].irq = irq;
> > > +            msix_vec++;
> > >      }
> > >
> > >      snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> > >               pci_name(pdev));
> > > -    irq = pci_irq_vector(pdev, queues);
> > > +    irq = pci_irq_vector(pdev, msix_vec);
> > >      ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> > >                             vp_vdpa->msix_name, vp_vdpa);
> > >      if (ret) {
> > >              dev_err(&pdev->dev,
> > > -                    "vp_vdpa: fail to request irq for vq %d\n", i);
> > > +                    "vp_vdpa: fail to request irq for config: %d\n", ret);
> > >                      goto err;
> > >      }
> > > -    vp_modern_config_vector(mdev, queues);
> > > +    vp_modern_config_vector(mdev, msix_vec);
> > >      vp_vdpa->config_irq = irq;
> > >
> > >      return 0;
> > > --
> > > 2.43.0
> >
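
For clarity, the vector layout that results from the patch (an illustration, not text from the thread): vectors are allocated compactly, so queue i no longer maps to MSI-X vector i.

    /* Example: 3 queues where only q0 and q2 have callbacks, so
     * vectors = 1 + 2 = 3 and the mapping is compacted:
     *
     *   q0     -> msix_vec 0
     *   q1     -> no vector (cb.callback == NULL, skipped)
     *   q2     -> msix_vec 1
     *   config -> msix_vec 2
     *
     * Pre-patch the mapping was fixed (q<i> -> vector i, config -> queues),
     * so every queue consumed a vector whether or not it generated interrupts.
     */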
