Message-Id: <20251208212857.13101-2-jacob.pan@linux.microsoft.com>
Date: Mon,  8 Dec 2025 13:28:55 -0800
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: linux-kernel@...r.kernel.org,
	"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
	Will Deacon <will@...nel.org>,
	Joerg Roedel <joro@...tes.org>,
	Mostafa Saleh <smostafa@...gle.com>,
	Jason Gunthorpe <jgg@...dia.com>,
	Robin Murphy <robin.murphy@....com>,
	Nicolin Chen <nicolinc@...dia.com>
Cc: Jacob Pan <jacob.pan@...ux.microsoft.com>,
	Zhang Yu <zhangyu1@...ux.microsoft.com>,
	Jean-Philippe Brucker <jean-philippe@...aro.org>,
	Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: [PATCH v5 1/3] iommu/arm-smmu-v3: Parameterize wfe for CMDQ polling

When SMMU_IDR0.SEV == 1, the SMMU triggers a WFE wake-up event when a
command queue becomes non-full and an agent external to the SMMU could
have observed that the queue was previously full. However, WFE is not
always required or available when polling for queue space. Introduce a
want_wfe parameter to queue_poll_init() so that callers control whether
WFE is used.

Signed-off-by: Jacob Pan <jacob.pan@...ux.microsoft.com>
---
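For context, qp->wfe is consumed by the existing queue_poll() helper,
which this patch leaves unchanged. Paraphrased from the current driver
(a sketch, not a verbatim copy; names follow arm-smmu-v3.c), it waits
roughly like this, falling back to a short spin and then an exponential
udelay() backoff when WFE is off:

static int queue_poll(struct arm_smmu_queue_poll *qp)
{
	if (ktime_compare(ktime_get(), qp->timeout) > 0)
		return -ETIMEDOUT;

	if (qp->wfe) {
		wfe();				/* sleep until a wake-up event */
	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
		cpu_relax();			/* brief busy-wait */
	} else {
		udelay(qp->delay);		/* back off exponentially */
		qp->delay *= 2;
		qp->spin_cnt = 0;
	}

	return 0;
}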
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bf67d9abc901..d637a5dcf48a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -191,11 +191,11 @@ static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
 }
 
 static void queue_poll_init(struct arm_smmu_device *smmu,
-			    struct arm_smmu_queue_poll *qp)
+			    struct arm_smmu_queue_poll *qp, bool want_wfe)
 {
 	qp->delay = 1;
 	qp->spin_cnt = 0;
-	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	qp->wfe = want_wfe && (smmu->features & ARM_SMMU_FEAT_SEV);
 	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
 }
 
@@ -656,13 +656,11 @@ static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
 	struct arm_smmu_queue_poll qp;
 	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
 
-	queue_poll_init(smmu, &qp);
-
 	/*
 	 * The MSI won't generate an event, since it's being written back
 	 * into the command queue.
 	 */
-	qp.wfe = false;
+	queue_poll_init(smmu, &qp, false);
 	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
 	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
 	return ret;
@@ -680,7 +678,7 @@ static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
 	u32 prod = llq->prod;
 	int ret = 0;
 
-	queue_poll_init(smmu, &qp);
+	queue_poll_init(smmu, &qp, true);
 	llq->val = READ_ONCE(cmdq->q.llq.val);
 	do {
 		if (queue_consumed(llq, prod))
-- 
2.43.0
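
A note on the polling idiom in __arm_smmu_cmdq_poll_until_msi() above:
smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp))) re-reads
*cmd until the condition holds, with VAL naming the value just loaded.
The generic fallback expands to roughly the open-coded loop below
(arm64 builds may instead wait on the cacheline with LDXR/WFE inside
the macro):

	for (;;) {
		u32 val = READ_ONCE(*cmd);	/* VAL in the macro's condition */
		if (!val || (ret = queue_poll(&qp)))
			break;	/* MSI cleared the command word, or the poll timed out */
		cpu_relax();
	}

Since the MSI completion is written back into the queue memory itself
rather than signalled as a wake-up event, WFE here could stall until an
unrelated event fires, which is why this call site passes want_wfe =
false.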

