Message-ID: <CACGkMEvhZJCidEf2b5-cppvnX4=JwhbVX_cT9x+LMKLh_N3NcQ@mail.gmail.com>
Date: Fri, 6 Dec 2024 10:31:55 +0800
From: Jason Wang <jasowang@...hat.com>
To: Shijith Thotton <sthotton@...vell.com>
Cc: virtualization@...ts.linux.dev, mst@...hat.com, dan.carpenter@...aro.org,
schalla@...vell.com, vattunuru@...vell.com, ndabilpuram@...vell.com,
jerinj@...vell.com, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
Satha Rao <skoteshwar@...vell.com>, open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 1/4] vdpa/octeon_ep: enable support for multiple
interrupts per device
On Fri, Dec 6, 2024 at 9:25 AM Jason Wang <jasowang@...hat.com> wrote:
>
> On Thu, Nov 21, 2024 at 9:45 PM Shijith Thotton <sthotton@...vell.com> wrote:
> >
> > Updated the driver to utilize all the MSI-X interrupt vectors supported
> > by each OCTEON endpoint VF, instead of relying on a single vector.
> > Enabling more interrupts allows packets from multiple rings to be
> > distributed across multiple cores, improving parallelism and
> > performance.
> >
> > Signed-off-by: Shijith Thotton <sthotton@...vell.com>
> > ---
> > v1:
> > - https://lore.kernel.org/virtualization/20241120070508.789508-1-sthotton@marvell.com
> >
> > Changes in v2:
> > - Handle reset getting called twice.
> > - Use devm_kcalloc to allocate irq array.
> > - IRQ is never zero. Adjusted code accordingly.
> >
> > drivers/vdpa/octeon_ep/octep_vdpa.h | 10 +--
> > drivers/vdpa/octeon_ep/octep_vdpa_hw.c | 2 -
> > drivers/vdpa/octeon_ep/octep_vdpa_main.c | 90 ++++++++++++++++--------
> > 3 files changed, 64 insertions(+), 38 deletions(-)
> >
> > diff --git a/drivers/vdpa/octeon_ep/octep_vdpa.h b/drivers/vdpa/octeon_ep/octep_vdpa.h
> > index 046710ec4d42..2d4bb07f91b3 100644
> > --- a/drivers/vdpa/octeon_ep/octep_vdpa.h
> > +++ b/drivers/vdpa/octeon_ep/octep_vdpa.h
> > @@ -29,12 +29,12 @@
> > #define OCTEP_EPF_RINFO(x) (0x000209f0 | ((x) << 25))
> > #define OCTEP_VF_MBOX_DATA(x) (0x00010210 | ((x) << 17))
> > #define OCTEP_PF_MBOX_DATA(x) (0x00022000 | ((x) << 4))
> > -
> > -#define OCTEP_EPF_RINFO_RPVF(val) (((val) >> 32) & 0xF)
> > -#define OCTEP_EPF_RINFO_NVFS(val) (((val) >> 48) & 0x7F)
> > +#define OCTEP_VF_IN_CTRL(x) (0x00010000 | ((x) << 17))
> > +#define OCTEP_VF_IN_CTRL_RPVF(val) (((val) >> 48) & 0xF)
> >
> > #define OCTEP_FW_READY_SIGNATURE0 0xFEEDFEED
> > #define OCTEP_FW_READY_SIGNATURE1 0x3355ffaa
> > +#define OCTEP_MAX_CB_INTR 8
> >
> > enum octep_vdpa_dev_status {
> > OCTEP_VDPA_DEV_STATUS_INVALID,
> > @@ -50,7 +50,6 @@ struct octep_vring_info {
> > void __iomem *notify_addr;
> > u32 __iomem *cb_notify_addr;
> > phys_addr_t notify_pa;
> > - char msix_name[256];
> > };
> >
> > struct octep_hw {
> > @@ -68,7 +67,8 @@ struct octep_hw {
> > u64 features;
> > u16 nr_vring;
> > u32 config_size;
> > - int irq;
> > + int nb_irqs;
> > + int *irqs;
> > };
> >
> > u8 octep_hw_get_status(struct octep_hw *oct_hw);
> > diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_hw.c b/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
> > index 1d4767b33315..d5a599f87e18 100644
> > --- a/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
> > +++ b/drivers/vdpa/octeon_ep/octep_vdpa_hw.c
> > @@ -495,8 +495,6 @@ int octep_hw_caps_read(struct octep_hw *oct_hw, struct pci_dev *pdev)
> > if (!oct_hw->vqs)
> > return -ENOMEM;
> >
> > - oct_hw->irq = -1;
> > -
> > dev_info(&pdev->dev, "Device features : %llx\n", oct_hw->features);
> > dev_info(&pdev->dev, "Maximum queues : %u\n", oct_hw->nr_vring);
> >
> > diff --git a/drivers/vdpa/octeon_ep/octep_vdpa_main.c b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
> > index cd55b1aac151..e10cb26a3206 100644
> > --- a/drivers/vdpa/octeon_ep/octep_vdpa_main.c
> > +++ b/drivers/vdpa/octeon_ep/octep_vdpa_main.c
> > @@ -47,13 +47,30 @@ static struct octep_hw *vdpa_to_octep_hw(struct vdpa_device *vdpa_dev)
> > static irqreturn_t octep_vdpa_intr_handler(int irq, void *data)
> > {
> > struct octep_hw *oct_hw = data;
> > - int i;
> > + int i, ring_start, ring_stride;
> > +
> > + /* Each device has multiple interrupts (nb_irqs) shared among receive
> > + * rings (nr_vring). Device interrupts are mapped to specific receive
> > + * rings in a round-robin fashion. Only rings handling receive
> > + * operations require interrupts, and these are at even indices.
> > + *
> > + * For example, if nb_irqs = 8 and nr_vring = 64:
> > + * 0 -> 0, 16, 32, 48;
> > + * 1 -> 2, 18, 34, 50;
> > + * ...
> > + * 7 -> 14, 30, 46, 62;
> > + */
> > + ring_start = (irq - oct_hw->irqs[0]) * 2;
> > + ring_stride = oct_hw->nb_irqs * 2;
> >
> > - for (i = 0; i < oct_hw->nr_vring; i++) {
> > - if (oct_hw->vqs[i].cb.callback && ioread32(oct_hw->vqs[i].cb_notify_addr)) {
> > - /* Acknowledge the per queue notification to the device */
> > + for (i = ring_start; i < oct_hw->nr_vring; i += ring_stride) {
> > + if (ioread32(oct_hw->vqs[i].cb_notify_addr)) {
>
> Could oct_hw->vqs[i].cb_notify_addr change? If not, maybe we can cache
> it somewhere to avoid the read here.
Ok, it looks like the device reuses the notify addr, which somehow works like an ISR.
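
For anyone following along, here is a minimal stand-alone sketch (plain C,
not driver code) of the ring-to-IRQ mapping the comment in the patch
describes; irq_index stands in for (irq - oct_hw->irqs[0]), and nb_irqs /
nr_vring use the example values from the comment:

#include <stdio.h>

int main(void)
{
	int nb_irqs = 8, nr_vring = 64;
	int irq_index, ring;

	for (irq_index = 0; irq_index < nb_irqs; irq_index++) {
		/* RX rings sit at even indices: each IRQ starts at its own
		 * even offset and strides past the rings owned by the other
		 * IRQs.
		 */
		printf("irq %d ->", irq_index);
		for (ring = irq_index * 2; ring < nr_vring; ring += nb_irqs * 2)
			printf(" %d", ring);
		printf("\n");
	}

	return 0;
}

With nb_irqs = 8 and nr_vring = 64 this prints the same rows as the comment
(0 -> 0, 16, 32, 48; 1 -> 2, 18, 34, 50; ...; 7 -> 14, 30, 46, 62).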
Thanks