Message-ID: <CAH2o1u6TaQ2PLcKRuSpcqh4Q5qUriimSZ1hmmy=37R2378NCUA@mail.gmail.com>
Date: Fri, 28 Jul 2023 07:18:51 +0200
From: Tomasz Jeznach <tjeznach@...osinc.com>
To: Zong Li <zong.li@...ive.com>
Cc: Nick Kossifidis <mick@....forth.gr>,
Anup Patel <apatel@...tanamicro.com>,
Albert Ou <aou@...s.berkeley.edu>, linux@...osinc.com,
Will Deacon <will@...nel.org>, Joerg Roedel <joro@...tes.org>,
linux-kernel@...r.kernel.org, Sebastien Boeuf <seb@...osinc.com>,
iommu@...ts.linux.dev, Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
linux-riscv@...ts.infradead.org,
Robin Murphy <robin.murphy@....com>
Subject: Re: [PATCH 06/11] RISC-V: drivers/iommu/riscv: Add command, fault,
page-req queues
On Mon, Jul 24, 2023 at 11:47 AM Zong Li <zong.li@...ive.com> wrote:
>
> On Fri, Jul 21, 2023 at 2:00 AM Tomasz Jeznach <tjeznach@...osinc.com> wrote:
> >
> > On Wed, Jul 19, 2023 at 8:12 PM Nick Kossifidis <mick@....forth.gr> wrote:
> > >
> > > Hello Tomasz,
> > >
> > > On 7/19/23 22:33, Tomasz Jeznach wrote:
> > > > Enables message or wire signal interrupts for PCIe and platform devices.
> > > >
> > >
> > > The description matches neither the subject nor the patch content (we
> > > don't just enable interrupts, we also init the queues).
> > >
> > > > + /* Parse queue lengths */
> > > > + ret = of_property_read_u32(pdev->dev.of_node, "cmdq_len", &iommu->cmdq_len);
> > > > + if (!ret)
> > > > + dev_info(dev, "command queue length set to %i\n", iommu->cmdq_len);
> > > > +
> > > > + ret = of_property_read_u32(pdev->dev.of_node, "fltq_len", &iommu->fltq_len);
> > > > + if (!ret)
> > > > + dev_info(dev, "fault/event queue length set to %i\n", iommu->fltq_len);
> > > > +
> > > > + ret = of_property_read_u32(pdev->dev.of_node, "priq_len", &iommu->priq_len);
> > > > + if (!ret)
> > > > + dev_info(dev, "page request queue length set to %i\n", iommu->priq_len);
> > > > +
> > > > dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> > > >
> > >
> > > We need to add those to the device tree binding doc (or throw them away;
> > > I thought it would be better to have them as part of the device
> > > description than as a module parameter).
> > >
> >
> > We can add them as optional fields to the DT.
> > Alternatively, I've been looking into an option to auto-scale CQ/PQ
> > based on the number of attached devices, but this gets trickier for
> > hot-pluggable systems. I've added module parameters as a bare minimum,
> > but am still looking for better solutions.
> >
> > >
> > > > +static irqreturn_t riscv_iommu_priq_irq_check(int irq, void *data);
> > > > +static irqreturn_t riscv_iommu_priq_process(int irq, void *data);
> > > > +
> > >
> > > > + case RISCV_IOMMU_PAGE_REQUEST_QUEUE:
> > > > + q = &iommu->priq;
> > > > + q->len = sizeof(struct riscv_iommu_pq_record);
> > > > + count = iommu->priq_len;
> > > > + irq = iommu->irq_priq;
> > > > + irq_check = riscv_iommu_priq_irq_check;
> > > > + irq_process = riscv_iommu_priq_process;
> > > > + q->qbr = RISCV_IOMMU_REG_PQB;
> > > > + q->qcr = RISCV_IOMMU_REG_PQCSR;
> > > > + name = "priq";
> > > > + break;
> > >
> > >
> > > It makes more sense to add the code for the page request queue in the
> > > patch that adds ATS/PRI support IMHO. This comment also applies to its
> > > interrupt handlers below.
> > >
> >
> > ack. will do.
> >
> > >
> > > > +static inline void riscv_iommu_cmd_inval_set_addr(struct riscv_iommu_command *cmd,
> > > > + u64 addr)
> > > > +{
> > > > + cmd->dword0 |= RISCV_IOMMU_CMD_IOTINVAL_AV;
> > > > + cmd->dword1 = addr;
> > > > +}
> > > > +
> > >
> > > This needs to be (addr >> 2) to match the spec, same as in the iofence
> > > command.
> > >
> >
> > oops. Thanks!
> >
>
> I think it should be (addr >> 12) according to the spec.
>
My reading of the spec, '3.1.1. IOMMU Page-Table cache invalidation commands',
is that it is a 4k-page-aligned address whose bits [63:12] are packed into
dword1[61:10], so the aligned address is effectively shifted right by 2 bits.
regards,
- Tomasz
> > > Regards,
> > > Nick
> > >
> >
> > regards,
> > - Tomasz
> >
> > _______________________________________________
> > linux-riscv mailing list
> > linux-riscv@...ts.infradead.org
> > http://lists.infradead.org/mailman/listinfo/linux-riscv