Message-ID: <CAH2o1u7uAuXsD6+6Dvam4kzQuUj8s98G0sR26_-q31wvSUYZNA@mail.gmail.com>
Date: Thu, 20 Jul 2023 11:00:10 -0700
From: Tomasz Jeznach <tjeznach@...osinc.com>
To: Nick Kossifidis <mick@....forth.gr>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...belt.com>,
Albert Ou <aou@...s.berkeley.edu>,
Anup Patel <apatel@...tanamicro.com>,
Sunil V L <sunilvl@...tanamicro.com>,
Sebastien Boeuf <seb@...osinc.com>, iommu@...ts.linux.dev,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux@...osinc.com
Subject: Re: [PATCH 06/11] RISC-V: drivers/iommu/riscv: Add command, fault,
page-req queues
On Wed, Jul 19, 2023 at 8:12 PM Nick Kossifidis <mick@....forth.gr> wrote:
>
> Hello Tomasz,
>
> On 7/19/23 22:33, Tomasz Jeznach wrote:
> > Enables message or wire signal interrupts for PCIe and platform devices.
> >
>
> The description matches neither the subject nor the patch content (we
> don't just enable interrupts, we also initialize the queues).
>
> > +	/* Parse queue lengths */
> > + ret = of_property_read_u32(pdev->dev.of_node, "cmdq_len", &iommu->cmdq_len);
> > + if (!ret)
> > + dev_info(dev, "command queue length set to %i\n", iommu->cmdq_len);
> > +
> > + ret = of_property_read_u32(pdev->dev.of_node, "fltq_len", &iommu->fltq_len);
> > + if (!ret)
> > + dev_info(dev, "fault/event queue length set to %i\n", iommu->fltq_len);
> > +
> > + ret = of_property_read_u32(pdev->dev.of_node, "priq_len", &iommu->priq_len);
> > + if (!ret)
> > + dev_info(dev, "page request queue length set to %i\n", iommu->priq_len);
> > +
> > dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> >
>
> We need to add those to the device tree binding doc (or throw them away;
> I thought it would be better to have them as part of the device
> description than as module parameters).
>
We can add them as optional fields in the DT.
Alternatively, I've been looking into auto-scaling the CQ/PQ lengths
based on the number of attached devices, but that gets trickier for
hot-pluggable systems. I've added the module parameters as a bare
minimum, but I'm still looking for better solutions.
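Roughly the direction I have in mind, sketched for the command queue only
(the parameter name, the RISCV_IOMMU_DEF_CQ_COUNT default and the helper
name are placeholders, not what the next revision will necessarily use):

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* Placeholder default; the real value is still to be decided. */
#define RISCV_IOMMU_DEF_CQ_COUNT	512

static unsigned int cmdq_len = RISCV_IOMMU_DEF_CQ_COUNT;
module_param(cmdq_len, uint, 0444);
MODULE_PARM_DESC(cmdq_len, "command queue length");

static void riscv_iommu_parse_queue_lengths(struct riscv_iommu_device *iommu,
					    struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;

	/* Module parameter provides the baseline... */
	iommu->cmdq_len = cmdq_len;

	/* ...and an optional DT property overrides it when present. */
	if (!of_property_read_u32(dev->of_node, "cmdq_len", &iommu->cmdq_len))
		dev_info(dev, "command queue length set to %u\n",
			 iommu->cmdq_len);
}

The same pattern would apply to fltq_len and priq_len once the binding
doc question is settled.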
>
> > +static irqreturn_t riscv_iommu_priq_irq_check(int irq, void *data);
> > +static irqreturn_t riscv_iommu_priq_process(int irq, void *data);
> > +
>
> > + case RISCV_IOMMU_PAGE_REQUEST_QUEUE:
> > + q = &iommu->priq;
> > + q->len = sizeof(struct riscv_iommu_pq_record);
> > + count = iommu->priq_len;
> > + irq = iommu->irq_priq;
> > + irq_check = riscv_iommu_priq_irq_check;
> > + irq_process = riscv_iommu_priq_process;
> > + q->qbr = RISCV_IOMMU_REG_PQB;
> > + q->qcr = RISCV_IOMMU_REG_PQCSR;
> > + name = "priq";
> > + break;
>
>
> It makes more sense to add the code for the page request queue in the
> patch that adds ATS/PRI support IMHO. This comment also applies to its
> interrupt handlers below.
>
ack. will do.
>
> > +static inline void riscv_iommu_cmd_inval_set_addr(struct riscv_iommu_command *cmd,
> > + u64 addr)
> > +{
> > + cmd->dword0 |= RISCV_IOMMU_CMD_IOTINVAL_AV;
> > + cmd->dword1 = addr;
> > +}
> > +
>
> This needs to be (addr >> 2) to match the spec, same as in the iofence
> command.
>
oops. Thanks!
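Will fix it along these lines (just a sketch, keeping the field and macro
names from this patch; the shift follows your note about matching the
iofence encoding):

static inline void riscv_iommu_cmd_inval_set_addr(struct riscv_iommu_command *cmd,
						  u64 addr)
{
	cmd->dword0 |= RISCV_IOMMU_CMD_IOTINVAL_AV;
	/* Spec encodes bits [63:2] of the address, same as IOFENCE. */
	cmd->dword1 = addr >> 2;
}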
> Regards,
> Nick
>
regards,
- Tomasz