Message-ID: <20251130150624.000053e7@linux.microsoft.com>
Date: Sun, 30 Nov 2025 15:06:24 -0800
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: Will Deacon <will@...nel.org>
Cc: linux-kernel@...r.kernel.org, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>, Joerg Roedel <joro@...tes.org>, Mostafa Saleh
<smostafa@...gle.com>, Jason Gunthorpe <jgg@...dia.com>, Robin Murphy
<robin.murphy@....com>, Nicolin Chen <nicolinc@...dia.com>, Zhang Yu
<zhangyu1@...ux.microsoft.com>, Jean-Philippe Brucker
<jean-philippe@...aro.org>, Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH v4 1/2] iommu/arm-smmu-v3: Fix CMDQ timeout warning
Hi Will,
On Tue, 25 Nov 2025 17:19:16 +0000
Will Deacon <will@...nel.org> wrote:
> On Fri, Nov 14, 2025 at 09:17:17AM -0800, Jacob Pan wrote:
> > While polling for n free slots in the cmdq, the current code instead
> > checks whether the queue is full. If the queue is almost full but
> > lacks enough space (< n), the CMDQ timeout warning is never
> > triggered even when polling has exceeded the timeout limit.
> >
> > The existing arm_smmu_cmdq_poll_until_not_full() is a poor fit for
> > its only caller, arm_smmu_cmdq_issue_cmdlist():
> > - It starts a new timer on every call, so it fails to bound the
> >   total wait to the preset ARM_SMMU_POLL_TIMEOUT_US per issue.
> > - It has a redundant internal queue_full() check, which cannot
> >   detect whether there is enough space for n commands.
> >
> > This patch polls for the exact amount of space needed rather than
> > for the queue becoming non-full, and emits the timeout warning
> > accordingly.
> >
> > Fixes: 587e6c10a7ce ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion")
> > Co-developed-by: Yu Zhang <zhangyu1@...ux.microsoft.com>
> > Signed-off-by: Yu Zhang <zhangyu1@...ux.microsoft.com>
> > Signed-off-by: Jacob Pan <jacob.pan@...ux.microsoft.com>
>
> I'm assuming you're seeing problems with an emulated command queue?
> Any chance you could make that bigger?
>
This is not related to queue size; it is a logic issue that occurs
whenever the queue is nearly full but lacks space for the whole batch.
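To illustrate the nearly-full case, here is a standalone userspace
sketch (a hypothetical harness, not the actual arm-smmu-v3.c code; the
ring layout is a simplified take on the driver's Q_IDX/Q_WRP scheme)
showing that "not full" does not imply "room for n entries":

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SHIFT	3			/* 8-entry queue */
#define NENTS	(1u << SHIFT)
#define IDX(p)	((p) & (NENTS - 1))	/* index bits */
#define WRP(p)	((p) & NENTS)		/* wrap bit above the index */

struct ll_queue { uint32_t prod, cons; };

static bool queue_full(struct ll_queue *q)
{
	return IDX(q->prod) == IDX(q->cons) && WRP(q->prod) != WRP(q->cons);
}

static bool queue_has_space(struct ll_queue *q, uint32_t n)
{
	uint32_t space;

	if (WRP(q->prod) == WRP(q->cons))
		space = NENTS - (IDX(q->prod) - IDX(q->cons));
	else
		space = IDX(q->cons) - IDX(q->prod);

	return space >= n;
}

int main(void)
{
	/* 7 of 8 slots used: one slot free, so the queue is not full */
	struct ll_queue q = { .prod = 7, .cons = 0 };

	printf("full: %d, space for 2: %d\n",
	       queue_full(&q), queue_has_space(&q, 2));
	/*
	 * Prints "full: 0, space for 2: 0": a helper that only waits
	 * for "not full" returns success immediately, so the caller
	 * keeps spinning without ever emitting the timeout warning,
	 * yet a 2-command batch still cannot be inserted.
	 */
	return 0;
}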
> > @@ -804,12 +794,13 @@ int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> >  		local_irq_save(flags);
> >  		llq.val = READ_ONCE(cmdq->q.llq.val);
> >  		do {
> > +			struct arm_smmu_queue_poll qp;
> >  			u64 old;
> >
> > +			queue_poll_init(smmu, &qp);
> >  			while (!queue_has_space(&llq, n + sync)) {
> >  				local_irq_restore(flags);
> > -				if (arm_smmu_cmdq_poll_until_not_full(smmu, cmdq, &llq))
> > -					dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
> > +				arm_smmu_cmdq_poll(smmu, cmdq, &llq, &qp);
> >
>
> Isn't this broken for wfe-based polling? The SMMU only generates the
> wake-up event when the queue becomes non-full.
I don't see this as a problem, since any interrupt, such as the
scheduler tick, acts as a wake-up event for WFE, no?
I have also tested this with WFE on bare metal with no issues. A
Hyper-V VM does not support WFE.
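For reference, the poll helpers look roughly like this (paraphrased
and simplified from arm-smmu-v3.c; names approximate upstream). The
deadline is checked on every iteration, and wfe() returns on any
interrupt, so a stuck queue still reaches -ETIMEDOUT eventually:

struct arm_smmu_queue_poll {
	ktime_t		timeout;
	unsigned int	delay;
	unsigned int	spin_cnt;
	bool		wfe;
};

static void queue_poll_init(struct arm_smmu_device *smmu,
			    struct arm_smmu_queue_poll *qp)
{
	qp->delay = 1;
	qp->spin_cnt = 0;
	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
	/* One deadline per init, i.e. per issued batch in the new code */
	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
}

static int queue_poll(struct arm_smmu_queue_poll *qp)
{
	if (ktime_compare(ktime_get(), qp->timeout) > 0)
		return -ETIMEDOUT;

	if (qp->wfe) {
		/*
		 * Sleeps until an SEV event or any interrupt (e.g. the
		 * scheduler tick), after which the deadline check above
		 * runs again, so the timeout is still observed even if
		 * the SMMU never signals an event.
		 */
		wfe();
	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
		cpu_relax();
	} else {
		udelay(qp->delay);
		qp->delay *= 2;
		qp->spin_cnt = 0;
	}

	return 0;
}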