Message-ID: <aUHkGCEeu7L5uCMR@google.com>
Date: Tue, 16 Dec 2025 22:58:33 +0000
From: Mostafa Saleh <smostafa@...gle.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Nicolin Chen <nicolinc@...dia.com>, will@...nel.org,
	robin.murphy@....com, joro@...tes.org,
	linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
	linux-kernel@...r.kernel.org, skolothumtho@...dia.com,
	praan@...gle.com, xueshuai@...ux.alibaba.com
Subject: Re: [PATCH rc v3 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE
 update sequence

On Mon, Dec 15, 2025 at 08:09:52PM -0400, Jason Gunthorpe wrote:
> On Sun, Dec 14, 2025 at 10:32:35PM +0000, Mostafa Saleh wrote:
> > >   * Figure out if we can do a hitless update of entry to become target. Returns a
> > >   * bit mask where 1 indicates that qword needs to be set disruptively.
> > > @@ -1094,13 +1100,22 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
> > >  {
> > >  	__le64 target_used[NUM_ENTRY_QWORDS] = {};
> > >  	__le64 cur_used[NUM_ENTRY_QWORDS] = {};
> > > +	__le64 ignored[NUM_ENTRY_QWORDS] = {};
> > 
> > I think we can avoid extra stack allocation for another STE, if we make
> > the function update cur_used directly, but no strong opinion.
> 
> It does more than just mask cur_used, it also adjusts ignored:
> 
> > > +		/*
> > > +		 * Ignored is only used for bits that are used by both entries,
> > > +		 * otherwise it is sequenced according to the unused entry.
> > > +		 */
> > > +		ignored[i] &= target_used[i] & cur_used[i];
> 
> Which also explains this:

I haven't tested this, but I was thinking of something like the below (applies on top of the patches):

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 72ba41591fdb..9981eefcf0da 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1083,7 +1083,7 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);

 VISIBLE_IF_KUNIT
-void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
+void arm_smmu_get_ste_ignored(__le64 *used)
 {
 	/*
 	 * MEV does not meaningfully impact the operation of the HW, it only
@@ -1093,17 +1093,14 @@ void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
 	 *
 	 *  Note: Software must expect, and be able to deal with, coalesced
 	 *  fault records even when MEV == 0.
-	 */
-	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_MEV);
-
-	/*
+	 *
 	 * EATS is used to reject and control the ATS behavior of the device. If
 	 * we are changing it away from 0 then we already trust the device to
 	 * use ATS properly and we have sequenced the device's ATS enable in PCI
 	 * config space to prevent it from issuing ATS while we are changing
 	 * EATS.
 	 */
-	ignored_bits[1] |= cpu_to_le64(STRTAB_STE_1_EATS);
+	used[1] &= ~cpu_to_le64(STRTAB_STE_1_EATS | STRTAB_STE_1_MEV);
 }
 EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);

@@ -1119,22 +1116,15 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
 {
 	__le64 target_used[NUM_ENTRY_QWORDS] = {};
 	__le64 cur_used[NUM_ENTRY_QWORDS] = {};
-	__le64 ignored[NUM_ENTRY_QWORDS] = {};
 	u8 used_qword_diff = 0;
 	unsigned int i;

 	writer->ops->get_used(entry, cur_used);
 	writer->ops->get_used(target, target_used);
-	if (writer->ops->get_ignored)
-		writer->ops->get_ignored(ignored);
+	if (writer->ops->filter_ignored)
+		writer->ops->filter_ignored(cur_used);

 	for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
-		/*
-		 * Ignored is only used for bits that are used by both entries,
-		 * otherwise it is sequenced according to the unused entry.
-		 */
-		ignored[i] &= target_used[i] & cur_used[i];
-
 		/*
 		 * Check that masks are up to date, the make functions are not
 		 * allowed to set a bit to 1 if the used function doesn't say it
@@ -1142,8 +1132,6 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
 		 */
 		WARN_ON_ONCE(target[i] & ~target_used[i]);

-		/* Bits can change because they are not currently being used */
-		cur_used[i] &= ~ignored[i];
 		unused_update[i] = (entry[i] & cur_used[i]) |
 				   (target[i] & ~cur_used[i]);
 		/*
@@ -1575,7 +1563,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
 static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
 	.sync = arm_smmu_ste_writer_sync_entry,
 	.get_used = arm_smmu_get_ste_used,
-	.get_ignored = arm_smmu_get_ste_ignored,
+	.filter_ignored = arm_smmu_get_ste_ignored,
 };

 static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index d5f0e5407b9f..97b995974049 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -900,7 +900,7 @@ struct arm_smmu_entry_writer {

 struct arm_smmu_entry_writer_ops {
 	void (*get_used)(const __le64 *entry, __le64 *used);
-	void (*get_ignored)(__le64 *ignored_bits);
+	void (*filter_ignored)(__le64 *used);
 	void (*sync)(struct arm_smmu_entry_writer *writer);
 };


This way we only clear the bits from cur_used, and there is no need for the
extra mask on ignored (ignored[i] &= target_used[i] & cur_used[i]):
- If an ignored bit is not in cur_used, it does not affect
  "cur_used[i] &= ~ignored[i]", as it is already zero there.
- If an ignored bit is not in target_used, it doesn't really matter;
  we can ignore it anyway, as it is safe to do so (see the sketch below).
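
For illustration only (not part of the patch, just a throwaway userspace
snippet), a brute force over a single bit position shows the two maskings
only disagree in the second case above, i.e. an ignored bit that the
target does not use:

#include <stdio.h>

int main(void)
{
	for (int ignored = 0; ignored <= 1; ignored++) {
		for (int cur_used = 0; cur_used <= 1; cur_used++) {
			for (int target_used = 0; target_used <= 1; target_used++) {
				/* Current patch: narrow ignored first, then clear it */
				int used_old = cur_used & ~(ignored & target_used & cur_used);
				/* Proposal above: clear ignored from cur_used directly */
				int used_new = cur_used & ~ignored;

				if (used_old != used_new)
					printf("differ only for ignored=%d cur_used=%d target_used=%d\n",
					       ignored, cur_used, target_used);
			}
		}
	}
	return 0;
}

The only differing case is an ignored bit the target does not use, which,
as argued above, is safe to treat as unused since the HW ignores it anyway.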

Anyway, I am not fixated on that; an extra 64B on the stack is not that bad.

> 
> > I have some mixed feelings about this, having get_used(), then get_ignored()
> > with the same bits set seems confusing to me, especially since get_ignored()
> > loops back to update cur_used, which is set from get_used()
> 
> The same bits are set because of the above - we need to know what the
> actual used bits are to decide if we need to rely on the ignored rule
> to do the update.
>  
> > My initial thought was that just removing this bit from get_used(), plus some
> > changes to the checks on setting unused bits, would be enough, and the semantics
> > of get_used() could be something like:
> > “Return bits used by the updated translation regime that MUST be observed
> > atomically”, and in that case we can ignore things like MEV, since it doesn’t
> > impact the translation.
> 
> Aside from the above this would cause problems with the validation
> assertions, so it is not a great idea.

Yes, that's why I didn't like that idea; it had to hack around the validation logic.

Thanks,
Mostafa

> 
> > However, this approach makes it more explicit which bits are ignored. If we
> > keep this logic, I think changing the name of get_ignored() might help, to
> > something like "get_allowed_break()" or "get_update_safe()"?
> 
> update_safe sounds good to me
> 
> Jason
