Message-ID: <CACGkMEuAtTJLbPJeJ2-6W605zrTXAkLm2Q86g6pQepStwxoO1w@mail.gmail.com>
Date: Wed, 15 Sep 2021 12:13:31 +0800
From: Jason Wang <jasowang@...hat.com>
To: Wu Zongyong <wuzongyong@...ux.alibaba.com>
Cc: virtualization <virtualization@...ts.linux-foundation.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
mst <mst@...hat.com>, wei.yang1@...ux.alibaba.com
Subject: Re: [PATCH v2 3/5] vp_vdpa: add vq irq offloading support
On Wed, Sep 15, 2021 at 11:31 AM Wu Zongyong
<wuzongyong@...ux.alibaba.com> wrote:
>
> On Wed, Sep 15, 2021 at 11:16:03AM +0800, Jason Wang wrote:
> > On Tue, Sep 14, 2021 at 8:25 PM Wu Zongyong
> > <wuzongyong@...ux.alibaba.com> wrote:
> > >
> > > This patch implements the get_vq_irq() callback for virtio pci devices
> > > to allow irq offloading.
> > >
> > > Signed-off-by: Wu Zongyong <wuzongyong@...ux.alibaba.com>
> >
> > Acked-by: Jason Wang <jasowang@...hat.com>
> >
> > (btw, I think I've acked this but it seems lost).
> Yes, but this patch is a little different from the previous one.
I see. Then it's better to mention this after the "---" marker, like:
---
changes since v1:
- xyz
---
or in the cover letter.
>
> And should I skip resending a patch if it has already been acked in a
> previous version of the series?
No, you need to resend the whole series.
Thanks
> It's the first time for me to
> send patches to the kernel community.
> >
> > Thanks
> >
> > > ---
> > > drivers/vdpa/virtio_pci/vp_vdpa.c | 12 ++++++++++++
> > > 1 file changed, 12 insertions(+)
> > >
> > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > index 5bcd00246d2e..e3ff7875e123 100644
> > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > @@ -76,6 +76,17 @@ static u8 vp_vdpa_get_status(struct vdpa_device *vdpa)
> > >  	return vp_modern_get_status(mdev);
> > >  }
> > >
> > > +static int vp_vdpa_get_vq_irq(struct vdpa_device *vdpa, u16 idx)
> > > +{
> > > +	struct vp_vdpa *vp_vdpa = vdpa_to_vp(vdpa);
> > > +	int irq = vp_vdpa->vring[idx].irq;
> > > +
> > > +	if (irq == VIRTIO_MSI_NO_VECTOR)
> > > +		return -EINVAL;
> > > +
> > > +	return irq;
> > > +}
> > > +
> > >  static void vp_vdpa_free_irq(struct vp_vdpa *vp_vdpa)
> > >  {
> > >  	struct virtio_pci_modern_device *mdev = &vp_vdpa->mdev;
> > > @@ -427,6 +438,7 @@ static const struct vdpa_config_ops vp_vdpa_ops = {
> > >  	.get_config = vp_vdpa_get_config,
> > >  	.set_config = vp_vdpa_set_config,
> > >  	.set_config_cb = vp_vdpa_set_config_cb,
> > > +	.get_vq_irq = vp_vdpa_get_vq_irq,
> > >  };
> > >
> > > static void vp_vdpa_free_irq_vectors(void *data)
> > > --
> > > 2.31.1
> > >
>
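For context on how the callback above is consumed: the vhost-vdpa bus driver
queries get_vq_irq() and hands the returned irq to the irq bypass manager, so
the device interrupt can reach the guest without bouncing through the host
eventfd path. Below is a simplified sketch of that consumer side, assuming the
vhost-vdpa structures from drivers/vhost/vdpa.c; names and error handling are
trimmed for illustration and are not verbatim kernel code.

/*
 * Illustrative sketch only: fetch the per-vq irq from the parent driver via
 * get_vq_irq() and register it as an irq bypass producer keyed by the vq's
 * call eventfd, so the irq can be wired directly toward the guest's irqchip.
 */
static void setup_vq_irq_sketch(struct vhost_vdpa *v, u16 qid)
{
	struct vdpa_device *vdpa = v->vdpa;
	const struct vdpa_config_ops *ops = vdpa->config;
	struct vhost_virtqueue *vq = &v->vqs[qid];
	int irq, ret;

	/* Parents that cannot offload simply don't implement the callback. */
	if (!ops->get_vq_irq)
		return;

	irq = ops->get_vq_irq(vdpa, qid);
	if (irq < 0)
		return;	/* e.g. -EINVAL when no MSI-X vector is assigned */

	vq->call_ctx.producer.token = vq->call_ctx.ctx;
	vq->call_ctx.producer.irq = irq;
	ret = irq_bypass_register_producer(&vq->call_ctx.producer);
	if (ret)
		return;	/* fall back to the plain eventfd/irqfd signalling path */
}

On teardown the matching irq_bypass_unregister_producer() call removes the
mapping again before the vq irq is freed.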