Message-ID: <20260120070102.182977-1-aneesh.kumar@kernel.org>
Date: Tue, 20 Jan 2026 12:31:02 +0530
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>
To: linux-arm-kernel@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	iommu@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>,
	Will Deacon <will@...nel.org>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	Robin Murphy <robin.murphy@....com>,
	suzuki.poulose@....com,
	steven.price@....com,
	"Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>
Subject: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced

arm64 shrinks the default swiotlb buffer, which is then kept only for
unaligned kmalloc() bouncing, when it detects that no swiotlb bouncing
is required for ZONE_DMA (sketched below for reference).
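
For reference, the shrink heuristic in arch_mm_preinit() looks roughly
like this (simplified sketch, not the exact upstream code; the
1MB-per-1GB sizing follows the comment in the hunk below):

if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
	/*
	 * No bouncing needed for ZONE_DMA: keep only a small swiotlb
	 * buffer for unaligned kmalloc() bouncing, roughly 1MB per
	 * 1GB of RAM.
	 */
	unsigned long size = memblock_phys_mem_size() >> 10;

	swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
	swiotlb = true;
}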

If swiotlb bouncing is explicitly forced on the command line
(swiotlb=force), this heuristic must not apply: forcing bounce means all
DMA goes through the swiotlb buffer, so it should keep its default size.
Add a swiotlb helper that reports the forced-bounce state and use it on
arm64 to skip the resize when bouncing is forced.
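
The net effect (illustrative, mirroring the hunk below): with
swiotlb=force on the command line, force_swiotlb_bounce() returns true
and the shrink is skipped, so the bounce buffer keeps its default size:

if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
    !(swiotlb || force_swiotlb_bounce())) {
	/* shrink only when bouncing is neither required nor forced */
}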

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
---
 arch/arm64/mm/init.c    | 3 ++-
 include/linux/swiotlb.h | 7 +++++++
 kernel/dma/swiotlb.c    | 5 +++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 524d34a0e921..7046241b47b8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
 		flags |= SWIOTLB_FORCE;
 	}
 
-	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
+	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
+	    !(swiotlb || force_swiotlb_bounce())) {
 		/*
 		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
 		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 3dae0f592063..513a93dcbdbc 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 phys_addr_t default_swiotlb_base(void);
 phys_addr_t default_swiotlb_limit(void);
+bool force_swiotlb_bounce(void);
 #else
 static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
 {
@@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
 {
 	return 0;
 }
+
+static inline bool force_swiotlb_bounce(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_SWIOTLB */
 
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d37da3d95b6..85e31f228cc9 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
 	return io_tlb_default_mem.defpool.start;
 }
 
+bool force_swiotlb_bounce(void)
+{
+	return swiotlb_force_bounce;
+}
+
 /**
  * default_swiotlb_limit() - get the address limit of the default SWIOTLB
  *
-- 
2.43.0

