Message-Id: <20220209122302.213882-2-ltykernel@gmail.com>
Date: Wed, 9 Feb 2022 07:23:01 -0500
From: Tianyu Lan <ltykernel@...il.com>
To: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, hch@...radead.org,
m.szyprowski@...sung.com, robin.murphy@....com,
michael.h.kelley@...rosoft.com
Cc: Tianyu Lan <Tianyu.Lan@...rosoft.com>,
iommu@...ts.linux-foundation.org, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org, vkuznets@...hat.com,
brijesh.singh@....com, konrad.wilk@...cle.com, hch@....de,
parri.andrea@...il.com, thomas.lendacky@....com
Subject: [PATCH V2 1/2] Swiotlb: Add swiotlb_alloc_from_low_pages switch
From: Tianyu Lan <Tianyu.Lan@...rosoft.com>

Hyper-V Isolation VMs and AMD SEV VMs use the swiotlb bounce buffer to
share memory with the hypervisor. Currently the swiotlb bounce buffer
is only allocated from 0 to ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. An Isolation VM or AMD SEV VM may need up to a 1G bounce
buffer, so the allocation can fail when there is not enough memory in
the 0 to 4G address space, and devices in such VMs may also use memory
above the 4G address space for DMA. Expose
swiotlb_set_alloc_from_low_pages() so that a platform may set it to
false when it is not necessary to limit the bounce buffer to memory
below 4G.
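
For example, a platform whose devices can address DMA buffers above 4G
could opt out early in boot, before swiotlb_init() runs. A minimal
sketch (the platform_can_dma_above_4g() predicate is hypothetical, not
part of this patch):

	/*
	 * Early platform init, before swiotlb_init(): let the bounce
	 * buffer be allocated anywhere in memory when the low-4G limit
	 * is not needed. The predicate below is illustrative only.
	 */
	if (platform_can_dma_above_4g())
		swiotlb_set_alloc_from_low_pages(false);
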
Signed-off-by: Tianyu Lan <Tianyu.Lan@...rosoft.com>
---
 include/linux/swiotlb.h |  1 +
 kernel/dma/swiotlb.c    | 18 ++++++++++++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index f6c3638255d5..2b4f92668bc7 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -39,6 +39,7 @@ enum swiotlb_force {
extern void swiotlb_init(int verbose);
int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
unsigned long swiotlb_size_or_default(void);
+void swiotlb_set_alloc_from_low_pages(bool low);
extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
extern int swiotlb_late_init_with_default_size(size_t default_size);
extern void __init swiotlb_update_mem_attributes(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1e7ea160b43..62bf8b5cc3e4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -73,6 +73,8 @@ enum swiotlb_force swiotlb_force;

struct io_tlb_mem io_tlb_default_mem;

+static bool swiotlb_alloc_from_low_pages = true;
+
phys_addr_t swiotlb_unencrypted_base;

/*
@@ -116,6 +118,11 @@ void swiotlb_set_max_segment(unsigned int val)
max_segment = rounddown(val, PAGE_SIZE);
}

+void swiotlb_set_alloc_from_low_pages(bool low)
+{
+ swiotlb_alloc_from_low_pages = low;
+}
+
unsigned long swiotlb_size_or_default(void)
{
return default_nslabs << IO_TLB_SHIFT;
@@ -284,8 +291,15 @@ swiotlb_init(int verbose)
if (swiotlb_force == SWIOTLB_NO_FORCE)
return;

-	/* Get IO TLB memory from the low pages */
- tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+ /*
+ * Get IO TLB memory from the low pages if swiotlb_alloc_from_low_pages
+ * is set.
+ */
+ if (swiotlb_alloc_from_low_pages)
+ tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+ else
+ tlb = memblock_alloc(bytes, PAGE_SIZE);
+
if (!tlb)
goto fail;
if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
--
2.25.1