Message-ID: <yq5ah5sfjy2j.fsf@kernel.org>
Date: Wed, 21 Jan 2026 11:40:12 +0530
From: Aneesh Kumar K.V <aneesh.kumar@...nel.org>
To: Robin Murphy <robin.murphy@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
iommu@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Marek Szyprowski <m.szyprowski@...sung.com>, suzuki.poulose@....com,
steven.price@....com
Subject: Re: [PATCH] arm64: swiotlb: Don’t shrink default buffer when bounce is forced

Robin Murphy <robin.murphy@....com> writes:

> On 2026-01-20 7:01 am, Aneesh Kumar K.V (Arm) wrote:
>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>> bouncing) when it detects that no swiotlb bouncing is needed.
>>
>> If swiotlb bouncing is explicitly forced via the command line
>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>> query the forced-bounce state and use it to skip the resize when
>> bouncing is forced.
>
> This doesn't appear to be an arm64-specific concern though... Since
> swiotlb_adjust_size() already prevents resizing if the user requests a
> specific size on the command line, it seems logical enough to also not
> reduce the size (but I guess still allow it to be enlarged) there if
> force is requested.
>
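
(For reference, the existing guard being referred to is, if I am
reading kernel/dma/swiotlb.c right, the early return at the top of
swiotlb_adjust_size(); quoting roughly from memory, so details may
differ:)

	/*
	 * If swiotlb parameter has not been specified, give a chance to
	 * architectures such as those supporting memory encryption to
	 * adjust/expand SWIOTLB size for their use.
	 */
	if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
		return;
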
Something like the below? I am wondering, though, whether we would then
be doing more than what the function name suggests. Not allowing the
size to be adjusted when the kernel parameter specifies a swiotlb size
seems fine. However, I am not sure whether adding the force_bounce
check is a good idea. I only found RISC-V doing a size adjustment
similar to arm64's. Maybe we can fix both architectures?

@@ -211,6 +211,8 @@ unsigned long swiotlb_size_or_default(void)
 
 void __init swiotlb_adjust_size(unsigned long size)
 {
+	unsigned long nslabs;
+
 	/*
 	 * If swiotlb parameter has not been specified, give a chance to
 	 * architectures such as those supporting memory encryption to
@@ -220,7 +222,13 @@ void __init swiotlb_adjust_size(unsigned long size)
 		return;
 
 	size = ALIGN(size, IO_TLB_SIZE);
-	default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	/*
+	 * Don't allow the size to be reduced if swiotlb bouncing is forced.
+	 */
+	if (swiotlb_force_bounce && nslabs < default_nslabs)
+		return;
+	default_nslabs = nslabs;
 	if (round_up_default_nslabs())
 		size = default_nslabs << IO_TLB_SHIFT;
 	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);