Message-ID: <028734f6-2a72-4509-81e0-7e69bda20253@arm.com>
Date: Tue, 20 Jan 2026 13:20:34 +0000
From: Robin Murphy <robin.murphy@....com>
To: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 iommu@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Marek Szyprowski <m.szyprowski@...sung.com>, suzuki.poulose@....com,
 steven.price@....com
Subject: Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced

On 2026-01-20 7:01 am, Aneesh Kumar K.V (Arm) wrote:
> arm64 reduces the default swiotlb size (for unaligned kmalloc()
> bouncing) when it detects that no swiotlb bouncing is needed.
> 
> If swiotlb bouncing is explicitly forced via the command line
> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
> query the forced-bounce state and use it to skip the resize when
> bouncing is forced.

This doesn't appear to be an arm64-specific concern though... Since 
swiotlb_adjust_size() already prevents resizing if the user requests a 
specific size on the command line, it seems logical enough to also not 
reduce the size (but I guess still allow it to be enlarged) there if 
force is requested.

(Although realistically, anyone requesting force is quite likely to want 
to request a larger default size anyway...)

Thanks,
Robin.

> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
> ---
>   arch/arm64/mm/init.c    | 3 ++-
>   include/linux/swiotlb.h | 7 +++++++
>   kernel/dma/swiotlb.c    | 5 +++++
>   3 files changed, 14 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 524d34a0e921..7046241b47b8 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -345,7 +345,8 @@ void __init arch_mm_preinit(void)
>   		flags |= SWIOTLB_FORCE;
>   	}
>   
> -	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
> +	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
> +	    !(swiotlb || force_swiotlb_bounce())) {
>   		/*
>   		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
>   		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 3dae0f592063..513a93dcbdbc 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -185,6 +185,7 @@ bool is_swiotlb_active(struct device *dev);
>   void __init swiotlb_adjust_size(unsigned long size);
>   phys_addr_t default_swiotlb_base(void);
>   phys_addr_t default_swiotlb_limit(void);
> +bool force_swiotlb_bounce(void);
>   #else
>   static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
>   {
> @@ -234,6 +235,12 @@ static inline phys_addr_t default_swiotlb_limit(void)
>   {
>   	return 0;
>   }
> +
> +static inline bool force_swiotlb_bounce(void)
> +{
> +	return false;
> +}
> +
>   #endif /* CONFIG_SWIOTLB */
>   
>   phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 0d37da3d95b6..85e31f228cc9 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1646,6 +1646,11 @@ phys_addr_t default_swiotlb_base(void)
>   	return io_tlb_default_mem.defpool.start;
>   }
>   
> +bool force_swiotlb_bounce(void)
> +{
> +	return swiotlb_force_bounce;
> +}
> +
>   /**
>    * default_swiotlb_limit() - get the address limit of the default SWIOTLB
>    *

