Message-ID: <aTjmA_juSss36e8v@google.com>
Date: Wed, 10 Dec 2025 12:16:19 +0900
From: Will Deacon <will@...nel.org>
To: Jacob Pan <jacob.pan@...ux.microsoft.com>
Cc: linux-kernel@...r.kernel.org,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
Joerg Roedel <joro@...tes.org>, Mostafa Saleh <smostafa@...gle.com>,
Jason Gunthorpe <jgg@...dia.com>,
Robin Murphy <robin.murphy@....com>,
Nicolin Chen <nicolinc@...dia.com>,
Zhang Yu <zhangyu1@...ux.microsoft.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH v5 2/3] iommu/arm-smmu-v3: Fix CMDQ timeout warning

On Mon, Dec 08, 2025 at 01:28:56PM -0800, Jacob Pan wrote:
> @@ -781,12 +771,21 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
> local_irq_save(flags);
> llq.val = READ_ONCE(cmdq->q.llq.val);
> do {
> + struct arm_smmu_queue_poll qp;
> u64 old;
>
> + /*
> + * Poll without WFE because:
> + * 1) Running out of space should be rare. Power saving is not
> + * an issue.
> + * 2) WFE depends on queue full break events, which occur only
> + * when the queue is full, but here we’re polling for
> + * sufficient space, not just queue full condition.
> + */

I don't think this is reasonable; we should be able to use wfe instead of
polling on hardware that supports it, and that is an important power-saving
measure in mobile parts.

If this is really an issue, we could take a spinlock around the
command-queue allocation loop for hardware with small queue sizes relative
to the number of CPUs, but it's not clear to me that we need to do anything
at all. I'm happy with the locking change in patch 3.

If we apply _only_ the locking change in the next patch, does that solve the
reported problem for you?

Will