Message-ID: <747f76e1-a5ec-150c-311e-a60396f6f7ab@oracle.com>
Date: Wed, 31 Aug 2022 15:20:37 -0700
From: Dongli Zhang <dongli.zhang@...cle.com>
To: Yu Zhao <yuzhao@...gle.com>, Christoph Hellwig <hch@...radead.org>,
Robin Murphy <robin.murphy@....com>,
Marek Szyprowski <m.szyprowski@...sung.com>
Cc: iommu@...ts.linux.dev, linux-mips@...r.kernel.org,
linux-kernel@...r.kernel.org, kernel test robot <lkp@...el.com>,
Dan Carpenter <dan.carpenter@...cle.com>
Subject: Re: [PATCH v2] Revert "swiotlb: panic if nslabs is too small"
Hi Yu,
As we discussed in the past, the swiotlb panics on purpose because
arch/mips/cavium-octeon/dma-octeon.c requests only a PAGE_SIZE swiotlb buffer,
which is smaller than IO_TLB_MIN_SLABS.
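For reference, this is the octeon code I mean (paraphrased from memory, so the
exact context may differ in your tree):

	/* arch/mips/cavium-octeon/dma-octeon.c, plat_swiotlb_setup(), paraphrased */
	swiotlbsize = PAGE_SIZE;
	...
	/* only some PCI/OHCI configurations bump swiotlbsize to 64MB here */
	...
	swiotlb_adjust_size(swiotlbsize);
	swiotlb_init(true, SWIOTLB_VERBOSE);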
The comment below mentions that IO_TLB_MIN_SLABS is the "Minimum IO TLB size
to bother booting with":
56 /*
57  * Minimum IO TLB size to bother booting with. Systems with mainly
58  * 64bit capable cards will only lightly use the swiotlb. If we can't
59  * allocate a contiguous 1MB, we're probably in trouble anyway.
60  */
61 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
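To put numbers on that (assuming the usual IO_TLB_SHIFT of 11, i.e. 2KB slabs,
and a 4KB PAGE_SIZE; please adjust if your octeon config uses larger pages):

	IO_TLB_MIN_SLABS = (1 << 20) >> 11 = 512 slabs  (= 1MB of bounce buffer)
	PAGE_SIZE request:      4096 >> 11 =   2 slabs

Even if swiotlb_adjust_size() rounds the request up to IO_TLB_SEGSIZE (128
slabs, if I remember correctly), that is still well below the 512-slab minimum,
hence the panic.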
arm creates the swiotlb only conditionally. That is, the swiotlb is not
initialized if (1) CONFIG_ARM_LPAE is not set (line 273), or (2) max_pfn <=
arm_dma_pfn_limit, so that swiotlb_init() is called with addressing_limit ==
false (line 274):
arch/arm/mm/init.c
271 void __init mem_init(void)
272 {
273 #ifdef CONFIG_ARM_LPAE
274 	swiotlb_init(max_pfn > arm_dma_pfn_limit, SWIOTLB_VERBOSE);
275 #endif
276 
277 	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
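So when max_pfn <= arm_dma_pfn_limit, swiotlb_init() is still called, but with
addressing_limit == false, and (if I read kernel/dma/swiotlb.c correctly,
quoting loosely from memory) the generic code just returns without allocating
anything:

	/* kernel/dma/swiotlb.c, swiotlb_init_remap(), paraphrased */
	if (!addressing_limit && !swiotlb_force_bounce)
		return;
	if (swiotlb_force_disable)
		return;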
On x86, the swiotlb is not initialized if the memory is small (max_possible_pfn
<= MAX_DMA32_PFN, line 47) and memory encryption is not required (lines 54 and
62):
44 static void __init pci_swiotlb_detect(void)
45 {
46 	/* don't initialize swiotlb if iommu=off (no_iommu=1) */
47 	if (!no_iommu && max_possible_pfn > MAX_DMA32_PFN)
48 		x86_swiotlb_enable = true;
49 
50 	/*
51 	 * Set swiotlb to 1 so that bounce buffers are allocated and used for
52 	 * devices that can't support DMA to encrypted memory.
53 	 */
54 	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
55 		x86_swiotlb_enable = true;
56 
57 	/*
58 	 * Guest with guest memory encryption currently perform all DMA through
59 	 * bounce buffers as the hypervisor can't access arbitrary VM memory
60 	 * that is not explicitly shared with it.
61 	 */
62 	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
63 		x86_swiotlb_enable = true;
64 		x86_swiotlb_flags |= SWIOTLB_FORCE;
65 	}
66 }
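For completeness, x86_swiotlb_enable and x86_swiotlb_flags are only consumed
later, in pci_iommu_alloc() if I remember the call site correctly, roughly:

	/* arch/x86/kernel/pci-dma.c, paraphrased */
	pci_swiotlb_detect();
	...
	swiotlb_init(x86_swiotlb_enable, x86_swiotlb_flags);

so when neither condition above fires, swiotlb_init() is called with
addressing_limit == false and, as on arm, nothing is allocated.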
Regardless of whether the current patch is reverted, unless there is a specific
reason (e.g., that PAGE_SIZE buffer will really be used), I do not think it is
a good idea to allocate a swiotlb buffer smaller than IO_TLB_MIN_SLABS. I will
wait for suggestions from the swiotlb maintainers.
Since I do not have a MIPS environment, I am not able to test whether the
change below causes any trouble in your setup in
arch/mips/cavium-octeon/dma-octeon.c.
@@ -234,6 +234,8 @@ void __init plat_swiotlb_setup(void)
 		swiotlbsize = 64 * (1<<20);
 #endif
 
-	swiotlb_adjust_size(swiotlbsize);
-	swiotlb_init(true, SWIOTLB_VERBOSE);
+	if (swiotlbsize != PAGE_SIZE) {
+		swiotlb_adjust_size(swiotlbsize);
+		swiotlb_init(true, SWIOTLB_VERBOSE);
+	}
 }
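The idea is simply not to bring up the swiotlb at all when plat_swiotlb_setup()
would only ask for a single page, instead of registering a pool smaller than
IO_TLB_MIN_SLABS. If some octeon configuration really does rely on that tiny
bounce buffer, this is of course not an option.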
Thank you very much!
Dongli Zhang
On 8/30/22 11:38 PM, Yu Zhao wrote:
> This reverts commit 0bf28fc40d89b1a3e00d1b79473bad4e9ca20ad1.
>
> Reasons:
> 1. new panic()s shouldn't be added [1].
> 2. It does no "cleanup" but breaks MIPS [2].
>
> v2: properly solved the conflict [3] with
> commit 20347fca71a38 ("swiotlb: split up the global swiotlb lock")
> Reported-by: kernel test robot <lkp@...el.com>
> Reported-by: Dan Carpenter <dan.carpenter@...cle.com>
>
> [1] https://lore.kernel.org/r/CAHk-=wit-DmhMfQErY29JSPjFgebx_Ld+pnerc4J2Ag990WwAA@mail.gmail.com/
> [2] https://lore.kernel.org/r/20220820012031.1285979-1-yuzhao@google.com/
> [3] https://lore.kernel.org/r/202208310701.LKr1WDCh-lkp@intel.com/
>
> Fixes: 0bf28fc40d89b ("swiotlb: panic if nslabs is too small")
> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
> ---
> kernel/dma/swiotlb.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c5a9190b218f..dd8863987e0c 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -326,9 +326,6 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> swiotlb_adjust_nareas(num_possible_cpus());
>
> nslabs = default_nslabs;
> - if (nslabs < IO_TLB_MIN_SLABS)
> - panic("%s: nslabs = %lu too small\n", __func__, nslabs);
> -
> /*
> * By default allocate the bounce buffer memory from low memory, but
> * allow to pick a location everywhere for hypervisors with guest
> @@ -341,8 +338,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> else
> tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> if (!tlb) {
> - pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
> - __func__, bytes);
> + pr_warn("%s: failed to allocate tlb structure\n", __func__);
> return;
> }
>
>
> base-commit: dcf8e5633e2e69ad60b730ab5905608b756a032f
>